Search Results

Search found 19097 results on 764 pages for 'form lifecycle'.


  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:

    - Backup Services: General Availability of Windows Azure Backup Services
    - Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    - Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
    - Active Directory: Securely manage hundreds of SaaS applications
    - Enterprise Management: Use Active Directory to Better Manage Windows Azure
    - Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

    All of these improvements are now available to use immediately. Below are more details about them.

    Backup Service: General Availability Release of Windows Azure Backup

    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

    Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location, with no upfront hardware cost.

    Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored fully encrypted and can't be decrypted by anyone but you).

    Getting Started

    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it.

    Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

    - Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
    - Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks.
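    Since the tutorials mention a PowerShell route, here is a minimal, hedged sketch of what a custom schedule can look like with the backup agent's cmdlets (this assumes the agent's MSOnlineBackup module is installed on the registered server; paths, days, and retention values are illustrative):

        # A minimal sketch, assuming the Windows Azure Backup Agent and its
        # MSOnlineBackup PowerShell module are installed. Values are illustrative.
        Import-Module MSOnlineBackup

        $policy   = New-OBPolicy
        $schedule = New-OBSchedule -DaysOfWeek Monday,Wednesday,Friday -TimesOfDay 21:00
        Set-OBSchedule -Policy $policy -Schedule $schedule

        # Choose what to back up and how long to keep recovery points.
        Add-OBFileSpec -Policy $policy -FileSpec (New-OBFileSpec -FileSpec "D:\CriticalData")
        Set-OBRetentionPolicy -Policy $policy -RetentionPolicy (New-OBRetentionPolicy -RetentionDays 15)

        # Activate the policy for this server.
        Set-OBPolicy -Policy $policy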
    Below are some of the key benefits the Windows Azure Backup Service provides:

    - Simple configuration and management. The Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.
    - Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
    - Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.
    - Data integrity verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruption which may arise due to data transfer can be easily identified and is fixed automatically.
    - Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

    Hyper-V Recovery Manager: Now Available in Public Preview

    I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM).

    Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime.

    Application data in Hyper-V Recovery Manager scenarios always travels over your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.

    You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.

    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
    These improvements include:

    Ability to Delete both VM Instances + Attached Disks in One Operation

    Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM.

    We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button.

    Warnings on Availability Sets with Only One Virtual Machine In Them

    One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this, Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a high availability way.

    One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

    Configuring SQL Server AlwaysOn

    SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the "Direct Server Return" checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.
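    For anyone still scripting this, a hedged sketch of the PowerShell equivalent (the -DirectServerReturn switch on the endpoint cmdlets; service, VM, and endpoint names here are made up):

        # A minimal sketch, assuming the Windows Azure PowerShell module is
        # installed and a subscription is selected. All names are illustrative.
        Get-AzureVM -ServiceName "myservice" -Name "sql-node-1" |
            Add-AzureEndpoint -Name "AGListener" -Protocol tcp `
                -LocalPort 1433 -PublicPort 1433 -DirectServerReturn $true |
            Update-AzureVM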
    Active Directory: Application Access Enhancements

    This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications.

    Earlier this month we published an update on dates and pricing for when the service will be released in general availability form. In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:

    - SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
    - Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their Active Directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
    - User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
    - Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
    - Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

    Enterprise Management: Use Active Directory to Better Manage Windows Azure

    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

    2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

    Below are some details on how all of this works:

    Subscriptions within a Directory

    As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within.

    Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before.

    Manage your Directory

    You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it.

    You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

    Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
    Sign-In Integration within Visual Studio

    If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription).

    You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using. No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory!

    Manage Subscriptions across Multiple Directories

    If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it.

    If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

    Filtering By Directory and Subscription

    Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

    Windows Azure SDK 2.2

    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:

    - Visual Studio 2013 Support
    - Integrated Windows Azure Sign-In support within Visual Studio
    - Remote Debugging Cloud Services with Visual Studio
    - Firewall Management support within Visual Studio for SQL Databases
    - Visual Studio 2013 RTM VM Images for MSDN Subscribers
    - Windows Azure Management Libraries for .NET
    - Updated Windows Azure PowerShell Cmdlets and ScriptCenter

    I'll post a follow-up blog shortly with more details about all of the above.
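    In the meantime, here's a hedged sketch of what the new Management Libraries for .NET look like in use (type names are from the preview NuGet packages; the subscription id and AAD token are placeholders, and token acquisition is omitted):

        // A minimal sketch, assuming the Windows Azure Management Libraries
        // preview packages. Subscription id and AAD token are placeholders.
        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.Management.Compute;

        class ListCloudServices
        {
            static void Main()
            {
                // TokenCloudCredentials lets the new Service Management APIs
                // authenticate with a Windows Azure Active Directory token
                // instead of a management certificate.
                var credentials = new TokenCloudCredentials(
                    "your-subscription-id", "your-aad-access-token");

                using (var compute = new ComputeManagementClient(credentials))
                {
                    foreach (var service in compute.HostedServices.List())
                        Console.WriteLine(service.ServiceName);
                }
            }
        }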
    Additional Updates

    In addition to the above enhancements, today's release also includes a number of additional improvements:

    - AutoScale: Richer time and date based scheduling support (set different rules on different dates)
    - AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
    - AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
    - Operation Logs: Auditing support for Service Bus management operations

    Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

    Summary

    Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • .NET Reflector Pro Coming…

    The very best software is almost always originally the creation of a single person. Readers of our 'Geek of the Week' will know of a few of them. Even behemoths such as MS Word or Excel started out with one programmer. There comes a time with any software that it starts to grow up, and has to move from this form of close parenting to being developed by a team. This has happened several times within Red Gate: SQL Refactor, SQL Compare, and SQL Dependency Tracker, not to mention SQL Backup, were all originally the work of a lone coder, who subsequently handed over the development to a structured team of programmers, test engineers and usability designers.

    Because we loved .NET Reflector when Lutz Roeder wrote and nurtured it, and, like many other .NET developers, used it as a development tool ourselves, .NET Reflector's progress from being the apple of Lutz's eye to being a Red Gate team-based development seemed natural. Lutz, after all, eventually felt he couldn't afford the time to develop it to the extent it deserved.

    Why, then, did we want to take on .NET Reflector? Different people may give you different answers, but for us in the .NET team, it just seemed a natural progression. We're always very surprised when anyone suggests that we want to change the nature of the tool, since it seems right just as it is. .NET Reflector will stay very much the tool we all use and appreciate, although the new version will support .NET 4, and will have many improvements in the accuracy of its decompiling.

    Whilst we've made a lot of improvements to Reflector, the radical addition, which we hope you'll want to try out as well, is '.NET Reflector Pro'. This is an extension to .NET Reflector that allows the debugging of decompiled code using the Visual Studio debugger. It is an add-in, but we'll be charging for it, mainly because we prefer to live indoors with a warm meal, rather than outside in tents, particularly when the winter's been as cold as this one has. We're hoping (we're even pretty confident!) that you'll share our excitement about .NET Reflector Pro.

    .NET Reflector Pro integrates .NET Reflector into Visual Studio, allowing you to seamlessly debug into third-party code and assemblies, even if you don't have the source code for them. You can now treat decompiled assemblies much like your own code: you can step through them and use all the debugging techniques that you would use on your own code. Try the beta now.

    Read the article

  • Effectively implementing a game view using Java

    - by kdavis8
    I am writing a 2D game in Java. The game mechanics are similar to the Pokémon Game Boy Advance series, e.g. Fire Red, Ruby, Diamond and so on. I need a way to draw a huge map, maybe 5000 by 5000 pixels, and then load individual in-game sprites across the entirety of the map, like rendering a scene. Game sprites would be things like terrain objects, trees, rocks, bushes, also houses, castles, NPCs and so on.

    But I also need to implement some kind of camera view class that focuses on the player. The camera view class needs to follow the character's movements throughout the game map, but it also needs to clip the rest of the map away from the user's field of view, so that the user can only see the arbitrary proximity adjacent to the player's sprite. The proximity's range could be something like 500 pixels in every direction around the player's sprite.

    On top of this, I need to implement an independent resolution for the game world so that the game view will be uniform on all screen sizes and screen resolutions.

    I know that this does sound like a handful and may fall under the category of multiple questions, but the questions are all related and any advice would be very much appreciated. I don't need a full source code listing, but maybe some pointers to effective Java API classes that could make doing what I need to do a lot simpler. Also any algorithmic/design advice would greatly benefit me as well. Example of what I am trying to do in source code form below:

    package myPackage;

    /**
     * The purpose of GameView is to: render a scene using the Scene class, create
     * a clipping pane using the CameraView class, and finally instantiate a
     * coordinate grid using the Path class.
     *
     * Once all of these things have been done, GameView should be instantiated
     * and used jointly with its helper classes. CameraView should be used as the
     * main drawing image – it is the window into the game world. Scene passes
     * data constantly to CameraView so that the entire map flows smoothly. Path
     * uses the x and y coordinates from CameraView to construct cells for
     * pathfinding algorithms.
     */
    public class GameView {

        // Scene is a helper class to GameView. It renders the entire map to
        // memory for the camera view.
        Scene scene;

        // CameraView is a helper class to GameView. It clips the Scene into a
        // small image that follows the player's coordinates.
        CameraView camera;

        // Path is a helper class to GameView. It observes and calculates the
        // coordinates of CameraView and divides them into grids/cells for
        // pathfinding.
        Path path;

        // Represents the player; has a getSprite() method that returns the
        // current frame (column/row combination) of the passed sprite sheet.
        Sprite player;
    }
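    One common answer to the camera part, as a minimal sketch (class and field names below are made up, not from the question's code): keep the view centered on the player, clamp it to the map bounds, and translate the Graphics2D context by the negative camera offset so only the visible region of the world is drawn.

        // A minimal sketch of a clamped, player-following camera.
        // All names are illustrative; plug in your own map/player types.
        public class Camera {
            private final int viewWidth, viewHeight;   // size of the visible window
            private final int mapWidth, mapHeight;     // full map size in pixels
            private int x, y;                          // top-left corner of the view

            public Camera(int viewWidth, int viewHeight, int mapWidth, int mapHeight) {
                this.viewWidth = viewWidth;
                this.viewHeight = viewHeight;
                this.mapWidth = mapWidth;
                this.mapHeight = mapHeight;
            }

            /** Center the view on the player, clamped so it never leaves the map. */
            public void follow(int playerX, int playerY) {
                x = Math.max(0, Math.min(playerX - viewWidth / 2, mapWidth - viewWidth));
                y = Math.max(0, Math.min(playerY - viewHeight / 2, mapHeight - viewHeight));
            }

            /** Apply the camera before drawing world objects. */
            public void apply(java.awt.Graphics2D g) {
                g.translate(-x, -y);   // world coordinates now render relative to the view
            }
        }

    For the independent-resolution requirement, a common approach with Java 2D is to render everything to a fixed-size java.awt.image.BufferedImage (say 800x480) and scale that single image to the actual window size with Graphics.drawImage, so game logic and sprites never need to know the real screen resolution.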

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places, even very remote ones, cell phones and even smart phones are a common sight.

    I'll never forget when I was in India in 2011: I was up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder riding atop the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban, as were quite a few of the locals in this small, out of the way and not so touristy village. So we're slowly trundling along in the forest and he's lazily using his stick to guide the elephant and… 10 minutes in, he pulls out his cell phone from his sash and starts texting. In the middle of texting, a huge pig jumps out from the side of the trail and he takes a picture of it running across our path in the jungle! So yeah, mobile technology is very pervasive and it's reached into even very buried and unexpected parts of this world.

    Apps are still King

    Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there's something that you need on your mobile device, your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to support 2 or 3 completely separate platforms.

    There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration issues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development.

    There's also PhoneGap/Cordova, which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized app container that can run on most mobile device platforms using a WebView interface. This allows for using HTML technology, but it also still requires all the setup, configuration of APIs, security keys, and the certification, submission and deployment process just like native applications – you actually lose many of the benefits that Web based apps bring. The big selling point of Cordova is that you get to use HTML and have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place.

    Apps can be a big pain to create and manage, especially when we are talking about specialized or vertical business applications that aren't geared at the mainstream market and that don't fit the 'store' model. If you're building a small intra department application you don't want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit both from the development and deployment perspective.
    Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native: from write-once run-anywhere, to remote maintenance, a single point of management, and having full control over the application as opposed to having the app store overlords censor you. In a lot of ways Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies, for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today's Mobile Web unfortunately looks a little different…

    Where's the Love for the Mobile Web?

    Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and we still don't really have a solid mobile Web platform.

    I know what you're thinking: "But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces". I agree to a point – it's actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features, and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I've been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile first UIs, and that's been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good.

    Not just about the UI

    But it's not just about the UI – it's also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together, to integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this.

    Slow Standards Adoption

    HTML standards implementation and ratification have been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that's missing some of the infrastructure pieces to make it whole on mobile devices. There's lots of potential, but it's lacking that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications.

    Some of it is the fragmentation of browsers and the slow evolution of the mobile specific HTML APIs. A host of mobile standards exist, but many of them are in the early review stage and have been stuck there for long periods of time, moving at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on building against changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud.
    It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn't seem to be happening quickly enough.

    Mobile Device Integration just isn't good enough

    Current standards are not far reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking and the contacts system are benefits that are unique to mobile applications and could be widely used, but are mostly (with the exception of GPS) inaccessible for Web based applications today.

    Unfortunately, trying to do most of this today with only a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova's app centric model with its own custom API for accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example, there's no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only implemented partially by mobile devices. There are alternatives and workarounds for some of these interfaces by using browser specific code, but that's mighty ugly and something that I thought we were trying to leave behind with newer browser standards. It's not quite working out that way.

    It's utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, the Messaging API, and the Contacts Manager API have only minimal or no traction at all today. Keep in mind we've had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that's a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago.

    It's extremely frustrating to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often can be show stoppers. The remaining 10% have to do with platform integration, browser differences and working around the limitations that browsers and 'pinned' applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL moniker interface that sort of works on Android, works badly with iOS (only if the address is already in the contact list) and not at all on Windows Phone. There's no reason this shouldn't work universally using the same interface – after all, all phones have supported SMS since before the year 2000!

    But, it doesn't have to be this way

    Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers. It handles security concerns via prompts to avoid unwanted access, which is a model that would work for most other device APIs in a similar fashion: one time approval and occasional re-approval if code changes or caches expire. Simple and only slightly intrusive.
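    From the developer's side, the whole prompt-gated model amounts to a couple of callbacks (a minimal sketch – the handler bodies are illustrative):

        // The browser prompts the user for permission on first use, then
        // returns coordinates asynchronously. No vendor-specific code needed.
        navigator.geolocation.getCurrentPosition(
            function (pos) {
                console.log("lat " + pos.coords.latitude +
                            ", lng " + pos.coords.longitude);
            },
            function (err) {
                // err.code === err.PERMISSION_DENIED if the user declines
                console.log("geolocation failed: " + err.message);
            },
            { enableHighAccuracy: true, timeout: 10000 }
        );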
    It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there.

    Inadequate Web Application Linking and Activation

    Another important piece of the puzzle missing is the integration of HTML based Web applications. Today, HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content there's a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently and permanent linking is not so critical for it. But applications have different needs. Applications need to be started up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it's pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features.

    Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications, and 'installing' them onto a phone in an easily accessible, prominent position. The experience of discovering a mobile Web 'app' and making it sticky is by no means as easy or satisfying. Today the way you'd go about this is:

    - Open the browser
    - Search for a Web site in the browser with your search engine of choice
    - Hope that you find the right site
    - Hope that you actually find a site that works for your mobile device
    - Click on the link and run the app in a fully chrome'd browser instance (read: tiny surface area)
    - Pin the app to the home screen (with all the limitations outlined above)
    - Hope you pointed at the right URL when you pinned

    Even for you and me as developers, there are a few steps in there that are painful and annoying, but think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app, and hopefully from the right location? You've probably lost more than half of your audience at that point. This experience sucks.

    For developers too this process is painful, since app developers can't control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with annoying pop-ups that try to get people to create shortcuts via fancy animations that are both annoying and add overhead to each and every application that implements this sort of thing differently.

    And that's not the end of it – getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple's non-standard meta tags are prominent, and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires you to capture an actual screen, or rather a partial screen, for a shortcut in the tile manager. Who had that brilliant idea, I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right – icons use PNGs, pinning is easy, and pinned applications properly behave like standalone apps and retain the browser's active page state and content.
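    For the curious, those proprietary-but-ubiquitous tags look roughly like this (icon sizes and paths are illustrative):

        <!-- Apple's non-standard tags, the de facto pinning interface today -->
        <link rel="apple-touch-icon" sizes="152x152" href="/touch-icon-152.png">
        <meta name="apple-mobile-web-app-capable" content="yes">
        <meta name="apple-mobile-web-app-status-bar-style" content="black">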
    Each of the platforms has a different way to specify icons (WP doesn't allow you to use an icon image at all), and the most widely used interface in use today is a bunch of Apple specific meta tags that other browsers choose to support. The question is: why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one?

    Then there's iOS and the crazy way it treats home screen linked URLs, using a hybrid format that is neither as capable as a Web app running in Safari nor a WebView hosted application. Moving off the Web 'app' link when switching to another app actually causes the browser to 'blank out' the Web application's preview in the Task View (see screenshot on the right). Then, when the 'app' is reactivated, it ends up completely restarting the browser with the original link. This is crazy behavior that you can't easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like, say, Google Maps) that's not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate – to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route.

    App linking and management is a very basic problem – something that we have essentially solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to have even a remotely decent linking/activation experience across browsers.

    Where's the Money?

    It's not surprising that device home screen integration and Mobile Web support in general are in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow that it presents. On top of that, platform specific vendor lock-in of both end users and developers, who have invested in hardware, apps and consumables, is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, and so again it's no surprise that the mobile Web is on a struggling path. But – that may be changing. More and more we're seeing operations shifting to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward.

    Do we need a Mobile Web App Store?

    As much as I dislike moderated experiences in today's massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry, and then also handle installation in the form of providing the home screen linking, plus maybe an initial security configuration that determines what features the app is allowed to access.
    I'm not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity – a mobile Web app store from a no name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course this would also be an optional search path in addition to the standard open Web search mechanisms to find and access content today.

    Security

    Security is a big deal, and one of the perceived reasons why so many IT professionals appear to be willing to go back to the walled garden of deployed apps is that apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware, illegal and misleading content. It doesn't always work out that way, and all the major vendors have had issues with security and the review process at some time or another.

    Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run entirely in the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier – security for any device interaction can be granted the same way for mobile applications through a Web browser as it is for native applications, either via explicit policies loaded from the Web, or via prompting as GeoLocation does today. Security is important, but it's certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn't be a reason for Web apps not to be an equal player in mobile applications.

    Apps are winning, but haven't we been here before?

    So now we're finding ourselves back in an era of installed apps, rather than Web based and managed apps. Only it's even worse today than with desktop applications, in that the apps are going through a gatekeeper that charges a toll and censors what you can and can't do in your apps. Frankly it's a mystery to me why anybody would buy into this model and why it's lasted this long when we've already been through this process. It's crazy…

    It's really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, but we're basically held back by what seems little more than bureaucracy, partisan bickering and the self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer's 98+% market share holding back the Web from improvements for many years – now it's the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way to making mobile applications much more usable and seriously viable alternatives to native apps. But as it is, mobile Web apps have a severe disadvantage in placement and operation.

    There are a few bright spots in all of this. Mozilla's FirefoxOS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content.
    It supports both packaged and certified package modes (that can be put into the app store), and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web apps are treated as first class citizens in FirefoxOS and run using the same mechanism as installed apps. Unfortunately FirefoxOS is getting a slow start, with minimal device support and a specific focus on the low end market. We can hope that this approach will change and catch on with other vendors, but that's also an uphill battle given the conflict of interest with the platform lock-in that it represents.

    Recent versions of Android also seem to be working reasonably well with mobile application integration onto the desktop and activation out of the box. Although it still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons go to the desktop on pinning, activation is full screen and WebView based, and application persistence is reliable, as the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support.

    What's also interesting to me is that Microsoft hasn't picked up on the obvious need for a solid Web app platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on their adversaries' terms instead of taking a new tack. Providing a kick ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive that Microsoft could do to improve its miserable position in the mobile device market.

    Where are we at with Mobile Web?

    It sure sounds like I'm really down on the Mobile Web, right? I've built a number of mobile apps in the last year, and while the overall result and response has been very positive to what we were able to accomplish in terms of UI, getting that final 10% that required device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you're not integrating with device features and you don't care deeply about a smooth integration with the mobile desktop, mobile Web development is fraught with frustration.

    So, yes, I'm frustrated! But it's not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn't be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens.

    So, what's your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion.
    Resources: Rick Strahl on DotNet Rocks talking about Mobile Web

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in HTML5, Mobile.

    Read the article

  • URL Routing in ASP.NET 4.0

    In the .NET Framework 3.5 SP1, Microsoft introduced ASP.NET Routing, which decouples the URL of a resource from the physical file on the web server. With ASP.NET Routing you, the developer, define routing rules that map route patterns to a class that generates the content. For example, you might indicate that the URL Categories/CategoryName maps to a class that takes the CategoryName and generates HTML that lists that category's products in a grid. With such a mapping, users could view products for the Beverages category by visiting www.yoursite.com/Categories/Beverages.

    In .NET 3.5 SP1, ASP.NET Routing was primarily designed for ASP.NET MVC applications, although as discussed in Using ASP.NET Routing Without ASP.NET MVC it is possible to implement ASP.NET Routing in a Web Forms application as well. However, implementing ASP.NET Routing in a Web Forms application involves a bit of seemingly excessive legwork. In a Web Forms scenario we typically want to map a routing pattern to an actual ASP.NET page. To do so we need to create a route handler class that is invoked when the routing URL is requested and, in a sense, dispatches the request to the appropriate ASP.NET page. For instance, mapping a route to a physical file - such as mapping Categories/CategoryName to ShowProductsByCategory.aspx - requires three steps: (1) Define the mapping in Global.asax, which maps a route pattern to a route handler class; (2) Create the route handler class, which is responsible for parsing the URL, storing any route parameters into some location that is accessible to the target page (such as HttpContext.Items), and returning an instance of the target page or HTTP Handler that handles the requested route; and (3) writing code in the target page to grab the route parameters and use them in rendering its content. Given how much effort it took to just read the preceding sentence (let alone write it), you can imagine that implementing ASP.NET Routing in a Web Forms application is not necessarily the most straightforward task.

    The good news is that ASP.NET 4.0 has greatly simplified ASP.NET Routing for Web Forms applications by adding a number of classes and helper methods that can be used to encapsulate the aforementioned complexity. With ASP.NET 4.0 it's easier to define the routing rules and there's no need to create a custom route handling class. This article details these enhancements. Read on to learn more!
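    To make the improvement concrete, here is a hedged sketch of the ASP.NET 4.0 approach (route and page names are illustrative): the route is registered once in Global.asax via MapPageRoute, and the page reads its parameters from RouteData - no custom route handler class involved.

        // Global.asax.cs - register the route at application startup.
        using System;
        using System.Web.Routing;

        public class Global : System.Web.HttpApplication
        {
            void Application_Start(object sender, EventArgs e)
            {
                // Maps Categories/{categoryName} straight to a physical page.
                RouteTable.Routes.MapPageRoute(
                    "ProductsByCategory",             // route name
                    "Categories/{categoryName}",      // URL pattern
                    "~/ShowProductsByCategory.aspx"); // target page
            }
        }

        // In ShowProductsByCategory.aspx.cs the parameter is one property away:
        // string category = (string)RouteData.Values["categoryName"];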

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11; Autoinstall is one of them. If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch. Well, almost, as the concepts are similar, and it's not all that difficult. Just new.

    I wanted to have an AI server that I could use for demo purposes, on the train if need be. That answers the question of hardware requirements: portable. But let's start at the beginning.

    First, you need an OS image, of course. In the new world of Solaris 11, it is now called a repository. The original can be downloaded from the Solaris 11 page at Oracle. What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat. MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple:

        # zfs create -o mountpoint=/export/repo rpool/ai/repo
        # zfs create rpool/ai/repo/sol11
        # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt
        # rsync -aP /mnt/repo /export/repo/sol11
        # umount /mnt
        # pkgrepo rebuild -s /export/repo/sol11/repo
        # zfs snapshot rpool/ai/repo/sol11@fcs
        # pkgrepo info -s /export/repo/sol11/repo
        PUBLISHER PACKAGES STATUS UPDATED
        solaris   4292     online 2012-03-12T20:47:15.378639Z

    That's all there is to it. Let's make a snapshot, just to be on the safe side. You never know when one will come in handy. To use this repository, you could just add it as a file-based publisher:

        # pkg set-publisher -g file:///export/repo/sol11/repo solaris

    In case I want to access this repository through a (virtual) network, I'll now quickly activate the repository service:

        # svccfg -s application/pkg/server \
            setprop pkg/inst_root=/export/repo/sol11/repo
        # svccfg -s application/pkg/server setprop pkg/readonly=true
        # svcadm refresh application/pkg/server
        # svcadm enable application/pkg/server

    That's all you need - now point your browser to http://localhost/ to view your beautiful repository server. Step 1 is done. All of this, by the way, is nicely documented in the README file that's contained in the repository image.

    Of course, we already have updates to the original release. You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index. You can simply add these to your existing repository or create separate repositories for each SRU. The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3. With ZFS, you can also get both: a full repository with all updates and at the same time incremental ones up to each of the updates:

        # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt
        # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*'
        # umount /mnt
        # pkgrepo rebuild -s /export/repo/sol11/repo
        # zfs snapshot rpool/ai/repo/sol11@sru4
        # zfs set snapdir=visible rpool/ai/repo/sol11
        # svcadm restart svc:/application/pkg/server:default

    The normal repository is now updated to SRU4. Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs. If you like, you can also create another repository service for each update, running on a separate port.

    But now let's continue with the AI server. Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first: # zfs create -o mountpoint=/export/install rpool/ai/install # zfs create -o mountpoint=/zones rpool/zones # zonecfg -z ai-server zonecfg:ai-server> create create: Using system default template 'SYSdefault' zonecfg:ai-server> set zonepath=/zones/ai-server zonecfg:ai-server> add dataset zonecfg:ai-server:dataset> set name=rpool/ai/install zonecfg:ai-server:dataset> set alias=install zonecfg:ai-server:dataset> end zonecfg:ai-server> commit zonecfg:ai-server> exit # zoneadm -z ai-server install # zoneadm -z ai-server boot ; zlogin -C ai-server Give it a hostname and IP address at first boot, and there's the Zone.  As publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now: # zlogin ai-server root@ai-server:~# pkg install install/installadm root@ai-server:~# installadm create-service -n x86-fcs -a i386 \ -s pkg://solaris/install-image/solaris-auto-install@5.11,5.11-0.175.0.0.0.2.1482 \ -d /export/install/fcs -i 192.168.2.20 -c 3 With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC server comes with several SMF services; we at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation service.  It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.)  The boot environment for clients is created in /export/install/fcs and the DHCP server is configured for 3 IP addresses starting at 192.168.2.20.  This configuration is stored in a very human-readable form in /etc/inet/dhcpd4.conf.  An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option. Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server; the true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients. The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  
Hence, profiles are very similar to the old sysid.cfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one.  Then modify that to our liking and give it back to the install server to use: root@ai-server:~# mkdir -p /export/install/configs/manifests root@ai-server:~# cd /export/install/configs/manifests root@ai-server:~# installadm export -n x86-fcs -m orig_default \ -o orig_default.xml root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml root@ai-server:~# vi s11-fcs.small.local.xml root@ai-server:~# more s11-fcs.small.local.xml <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="S11 Small fcs local"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="IPS"> <destination> <image> <!-- Specify locales to install --> <facet set="false">facet.locale.*</facet> <facet set="true">facet.locale.de</facet> <facet set="true">facet.locale.de_DE</facet> <facet set="true">facet.locale.en</facet> <facet set="true">facet.locale.en_US</facet> </image> </destination> <source> <publisher name="solaris"> <origin name="http://192.168.2.12/"/> </publisher> </source> <!-- By default the latest build available, in the specified IPS repository, is installed. If another build is required, the build number has to be appended to the 'entire' package in the following form: <name>pkg:/entire@0.5.11-0.175#</name> --> <software_data action="install"> <name>pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0</name> <name>pkg:/group/system/solaris-small-server</name> </software_data> </software> </ai_instance> </auto_install> root@ai-server:~# installadm create-manifest -n x86-fcs -d \ -f ./s11-fcs.small.local.xml root@ai-server:~# installadm list -m -n x86-fcs Manifest Status Criteria -------- ------ -------- S11 Small fcs local Default None orig_default Inactive None The major points in this new manifest are: Install "solaris-small-server" Install a few locales less than the default.  I'm not that fluent in French or Japanese... Use my own package service as publisher, running on IP address 192.168.2.12 Install the initial release of Solaris 11: pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0 Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  
The modular approach makes it easy to configure numerous clients later on: root@ai-server:~# mkdir -p /export/install/configs/profiles root@ai-server:~# cd /export/install/configs/profiles root@ai-server:~# sysconfig create-profile -o default.xml root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml root@ai-server:~# cp default.xml user.xml root@ai-server:~# vi general.xml mars.xml user.xml root@ai-server:~# more general.xml mars.xml user.xml :::::::::::::: general.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="Europe/Berlin"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="vt100"/> </property_group> </instance> </service> <service version="1" type="service" name="network/physical"> <instance enabled="true" name="default"> <property_group type="application" name="netcfg"> <propval type="astring" name="active_ncp" value="DefaultFixed"/> </property_group> </instance> </service> <service version="1" type="service" name="system/name-service/switch"> <property_group type="application" name="config"> <propval type="astring" name="default" value="files"/> <propval type="astring" name="host" value="files dns"/> <propval type="astring" name="printer" value="user files"/> </property_group> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="system/name-service/cache"> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="network/dns/client"> <property_group type="application" name="config"> <property type="net_address" name="nameserver"> <net_address_list> <value_node value="192.168.2.1"/> </net_address_list> </property> </property_group> <instance enabled="true" name="default"/> </service> </service_bundle> :::::::::::::: mars.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="network/install"> <instance enabled="true" name="default"> <property_group type="application" name="install_ipv4_interface"> <propval type="astring" name="address_type" value="static"/> <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/> <propval type="astring" name="name" value="net0/v4"/> <propval type="net_address_v4" name="default_route" value="192.168.2.1"/> </property_group> <property_group type="application" name="install_ipv6_interface"> <propval type="astring" name="stateful" value="yes"/> <propval type="astring" name="stateless" value="yes"/> <propval type="astring" 
name="address_type" value="addrconf"/> <propval type="astring" name="name" value="net0/v6"/> </property_group> </instance> </service> <service version="1" type="service" name="system/identity"> <instance enabled="true" name="node"> <property_group type="application" name="config"> <propval type="astring" name="nodename" value="mars"/> </property_group> </instance> </service> </service_bundle> :::::::::::::: user.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="role"/> </property_group> <property_group type="application" name="user_account"> <propval type="astring" name="login" value="stefan"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="normal"/> <propval type="astring" name="description" value="Stefan Hinker"/> <propval type="count" name="uid" value="12345"/> <propval type="count" name="gid" value="10"/> <propval type="astring" name="shell" value="/usr/bin/bash"/> <propval type="astring" name="roles" value="root"/> <propval type="astring" name="profiles" value="System Administrator"/> <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/> </property_group> </instance> </service> </service_bundle> root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \ -c ipv4=192.168.2.100 root@ai-server:~# installadm list -p Service Name Profile ------------ ------- x86-fcs general.xml mars.xml user.xml root@ai-server:~# installadm list -n x86-fcs -p Profile Criteria ------- -------- general.xml None mars.xml ipv4 = 192.168.2.100 user.xml None Here's the idea behind these files: "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same. "user.xml" only contains user definitions.  That is, a root password and a primary user.Both of these profiles will be valid for all clients (for now). "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step: root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs root@ai-server:~# vi /etc/inet/dhcpd4.conf root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf host 080027AA3DB1 { hardware ethernet 08:00:27:AA:3D:B1; fixed-address 192.168.2.100; filename "01080027AA3DB1"; } This completes the client preparations.  I manually added the IP-Address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP-replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead. root@ai-server:~# cd /etc/netboot root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org root@ai-server:~# vi menu.lst.01080027AA3DB1 root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org 1,2c1,2 < default=1 < timeout=10 --- > default=0 > timeout=30 root@ai-server:~# more menu.lst.01080027AA3DB1 default=1 timeout=10 min_mem64=0 title Oracle Solaris 11 11/11 Text Installer and command line kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555 module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive title Oracle Solaris 11 11/11 Automated Install kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive Now just boot the client off the network using PXE boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?

    Read the article

  • Podcast Show Notes: The Red Room Interview – Part 1

    - by Bob Rhubart
      The latest OTN Arch2Arch podcast is Part 1 of a three-part series featuring a discussion of a broad range of SOA issues with three members of the small army of contributors to The Red Room Blog, now part of the OJam.biz site, the Australia-New Zealand outpost of the global Oracle community. The panelists for this program are: Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware LinkedIn | Twitter | Blog Richard Ward - SOA Channel Development Manager at Oracle LinkedIn | Blog Mervin Chiang - Consulting Principal at Leonardo Consulting LinkedIn | Twitter | Blog (You can also follow the Red Room itself on Twitter: @OracleRedRoom.) The genesis of this interview goes back to 2009, and the original Red Room blog, on which Sean, Richard, Mervin, and other Red Roomers published a 10-part series of posts that, taken together, form a kind of SOA best-practices guide, presented in an irreverent style that is rare in a lot of technical writing. It was on the basis of their expertise and irreverence that I wanted to get a few of the Red Room bloggers on an Arch2Arch podcast.  Easier said than done. Trying to schedule a group interview with very busy people on the other side of the world (they’re actually 15 hours in the future, relative to my location) is not a simple process. The conversations about getting some of the Red Room people on the program began in the summer of 2009. The interview finally happened at 5:30 PM EDT on Tuesday March 30, 2010, which for the panelists, located in Australia, was 8:30 AM on Wednesday March 31, 2010. I was waiting for dinner, and Sean, Richard, and Mervin were waiting for breakfast. But the call went off without a hitch, and the panelists carried on a great discussion of SOA issues. Listen to Part 1 Many thanks to Gareth Llewellyn for his help in putting this together. SOA Best Practices Here’s a complete list of the posts in the original 10-part Red Room series: SOA is Dead. Long Live SOA by Sean Boiling Are you doing SOP’s instead of SOA? by Saul Cunningham All The President's SOA by Sean Boiling SOA – Pay Now or Pay Dearly by Richard Ward SOA where are the skills? by Richard Ward Project Management Pitfalls within SOA by Anton Gouws Viewing SOA as a project instead of an architecture by Saul Cunningham Kiss and Tell by Sean Boiling Failure to implement and adhere to SOA Governance by Mervin Chiang Ten Out Of Ten by Sean Boiling Part 2 of the Red Room Interview will be available next week, followed by Part 3, so stay tuned: RSS Change in the Wind Beginning with next week’s program, the OTN Arch2Arch Podcast will be rechristened as the OTN ArchBeat Podcast, to better align with this blog. The transformation will be painless – you won’t feel a thing.   del.icio.us Tags: otn,oracle,Archbeat,Arch2Arch,soa,service oriented architecture,podcast Technorati Tags: otn,oracle,Archbeat,Arch2Arch,soa,service oriented architecture,podcast

    Read the article

  • Perform Unit Conversions with the Windows 7 Calculator

    - by Matthew Guay
    Want to easily convert area, volume, temperature, and many other units?  With the Calculator in Windows 7, it’s easy to convert most any unit into another. The New Calculator in Windows 7 Calculator received a visual overhaul in Windows 7, but at first glance it doesn’t seem to have any new functionality.  Here’s Windows 7’s Calculator on the left, with Vista’s calculator on the right.   But, looks can be deceiving.  Windows 7’s calculator has lots of exciting new features.  Let’s try them out.  Simply type Calculator in the start menu search. To uncover the new features, click the View menu.  Here you can select many different modes, including the Unit Conversion mode which we will look at. When you select the Unit Conversion mode, the Calculator will expand with a form on the left side. This conversions pane has 3 drop-down menus.  From the top one, select the type of unit you want to convert. In the next two menus, select which values you wish to convert to and from.  For instance, here we selected Temperature in the first menu, Degrees Fahrenheit in the second menu, and Degrees Celsius in the third menu. Enter the value you wish to convert in the From box, and the conversion will automatically appear in the bottom box. The Calculator contains dozens of conversion values, including more uncommon ones.  So if you’ve ever wanted to know how many US gallons are in a UK gallon, or how many knots a supersonic jet travels in an hour, this is a great tool for you!   Conclusion Windows 7 is filled with little changes that give you an all-around better experience in Windows to help you work more efficiently and productively.  With the new features in the Calculator, you just might feel a little smarter, too!
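    Under the hood this is ordinary arithmetic; for the Fahrenheit-to-Celsius example above, the standard formula (shown here as a C# sketch, not the Calculator's actual code) is:

        // Celsius = (Fahrenheit - 32) * 5/9, e.g. 98.6 F -> 37.0 C
        static double FahrenheitToCelsius(double fahrenheit)
        {
            return (fahrenheit - 32.0) * 5.0 / 9.0;
        }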

    Read the article

  • Don’t Forget! In-Memory Databases are Hot

    - by andrewbrust
    If you’re left scratching your head over SAP’s intention to acquire Sybase for almost $6 billion, you’re not alone.  Despite Sybase’s 1990s reign as the supreme database standard in certain sectors (including Wall Street), the company’s flagship product has certainly fallen from grace.  Why would SAP pay a greater than 50% premium over Sybase’s closing price on the day of the announcement just to acquire a relational database which is firmly stuck in maintenance mode? Well there’s more to Sybase than the relational database product.  Take, for example, its mobile application platform.  It hit Gartner’s “Leaders’ Quadrant” in January of last year, and SAP needs a good mobile play.  Beyond the platform itself, Sybase has a slew of mobile services; click this link to look them over. There’s a second major asset that Sybase has though, and I wonder if it figured prominently into SAP’s bid: Sybase IQ.  Sybase IQ is a columnar database.  Columnar databases place values from a given database column contiguously, unlike conventional relational databases, which store all of a row’s data in close proximity.  Storing column values together works well in aggregation reporting scenarios, because the figures to be aggregated can be scanned in one efficient step.  It also makes for high rates of compression because values from a single column tend to be close to each other in magnitude and may contain long sequences of repeating values.  Highly compressible databases use much less disk storage and can be largely or wholly loaded into memory, resulting in lightning-fast query performance.  For an ERP company like SAP, with its own legacy BI platform (SAP BW) and the entire range of Business Objects and Crystal Reports BI products (which it acquired in 2007) query performance is extremely important. And it’s a competitive necessity too.  QlikTech has built an entire company on a columnar, in-memory BI product (QlikView).  So too has startup company Vertica.  IBM’s TM1 product has been doing in-memory OLAP for years.  And guess who else has the in-memory religion?  Microsoft does, in the form of its new PowerPivot product.  I expect the technology in PowerPivot to become strategic to the full-blown SQL Server Analysis Services product and the entire Microsoft BI stack.  I sure don’t blame SAP for jumping on the in-memory bandwagon, if indeed the Sybase acquisition is, at least in part, motivated by that. It will be interesting to watch and see what SAP does with Sybase’s product line-up (assuming the acquisition closes), including the core database, the mobile platform, IQ, and even tools like PowerBuilder.  It is also fascinating to watch columnar’s encroachment on relational.  Perhaps this acquisition will be columnar’s tipping point and people will no longer see it as a fad.  Are you listening Larry Ellison?
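    A toy sketch (not Sybase IQ's actual implementation) of why columnar layout helps aggregation: summing a column stored contiguously is a single cache-friendly scan, while a row store strides across entire records:

        struct OrderRow { public int Id; public string Customer; public decimal Amount; }

        // Row store: the fields of each record sit together, so an aggregate
        // over one column still drags every full row through the cache.
        static decimal SumRowStore(OrderRow[] rows)
        {
            decimal total = 0;
            foreach (var row in rows) total += row.Amount;
            return total;
        }

        // Column store: one column's values are contiguous, so the same
        // aggregate is a linear scan over a compact array. Long runs of
        // similar values also compress well, which is why columnar data
        // can often be held largely in memory.
        static decimal SumColumnStore(decimal[] amounts)
        {
            decimal total = 0;
            foreach (var amount in amounts) total += amount;
            return total;
        }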

    Read the article

  • BI Publisher at Collaborate 2010

    - by mike.donohue
    Noelle and I are heading to Collaborate 2010 next week. There are over two dozen sessions on BI Publisher including a Hands On Lab (see below). Very excited to see what our customers and partners will be presenting and how they are using BI Publisher to get better reports and reduce costs. My only regret is that many sessions are scheduled at the same time so I won't get to see all of them. Noelle and I will be presenting the following: Monday, April 19 2:30 pm - 3:30 pm Introduction to Oracle Business Intelligence Publisher Session: 227 Location: Reef F By: Mike 2:30 pm - 3:30 pm The Reporting Platform for Applications: Oracle Business Intelligence Publisher Session: 73170 Location: South Seas Ballroom J By: Noelle 3:45 pm - 4:45 pm Oracle Business Intelligence Publisher Hands On Lab (1) Session: 217 Location: Palm D By: Noelle and Mike Tuesday, April 20 8:00 am - 9:00 am Oracle Business Intelligence Publisher Best Practices Session: 218 Location: Palm D By: Noelle and Mike We will also be at the BI Technology demo pod in the exhibit hall so please stop by and say hello. All BI Publisher related Sessions Sunday, April 18 2:00 pm - 2:50 pm Customizing your Invoices in a Flash! 3:00 pm - 3:50 pm BI Publisher SIG Meeting - Part 1 4:00 pm - 4:50 pm BI Publisher SIG Meeting - Part 2 Monday, April 19 8:00 am - 9:00 am XML Publisher and FSG for Beginners 2:30 pm - 3:30 pm Introduction to Oracle Business Intelligence Publisher 2:30 pm - 3:30 pm The Reporting Platform for Applications: Oracle Business Intelligence Publisher 2:30 pm - 3:30 pm Bay Ballroom A What it Takes to Make Your Business Intelligence Implementation a Success 2:30 pm - 3:30 pm XML Publisher-More Than Just Form Letters 3:45 pm - 4:45 pm JD Edwards EnterpriseOne Reporting and Batch Discussions presented by Technology SIG 3:45 pm - 4:45 pm Hands On Lab: Oracle Business Intelligence Publisher (1) Tuesday, April 20 8:00 am - 9:00 am Oracle Business Intelligence Publisher Best Practices 8:00 am - 9:00 am Creating XML Publisher Documents with PeopleCode 10:30 am - 11:30 am Moving to BI Publisher, Now What? Automated Fax and Email from Oracle EBS 2:00 pm - 3:00 pm Smart Reporting in Oracle Financials Release 12.1 2:00 pm - 3:00 pm Custom Check Printing Framework using XML Publisher 2:00 pm - 3:00 pm BI Publisher and Oracle BI for JD Edwards Wednesday, April 21 8:00 am - 9:00 am XML Publisher Tips for PeopleTools 10:30 am - 11:30 am JD Edwards World - Technical Upgrade Considerations 10:30 am - 11:30 am Data Visualization Best Practices: Know how to design and improve your BI & EPM reports, dashboards, and queries 10:30 am - 11:30 am Oracle BIEE End-to-End 1:00 pm - 2:00 pm Empower JD Edwards Users with Oracle BI Publisher for Ad Hoc Reporting 1:00 pm - 2:00 pm BIP and JD Edwards World - Good Stuff! 2:15 pm - 3:15 pm Proven Strategies for Increasing ROI with PeopleSoft HCM 4:00 pm - 5:00 pm Using Oracle BI Delivers to Send Reports to JD Edwards Users Thursday, April 22 9:45 am - 10:45 am PeopleSoft Recruiting Enhancements You Can Use 9:45 am - 10:45 am Reducing Cost with Oracle's BI Publisher Note (1): the Hands On Lab was not showing in the joint scheduler as of this posting, but it is definitely ON.

    Read the article

  • To 'seal' or to 'wrap': that is the question ...

    - by Simon Thorpe
    If you follow this blog you will already have a good idea of what Oracle Information Rights Management (IRM) does. By encrypting documents, Oracle IRM secures and tracks all copies of those documents, everywhere they are shared, stored and used, inside and outside your firewall. Unlike earlier encryption products, authorized end users can transparently use IRM-encrypted documents within standard desktop applications such as Microsoft Office, Adobe Reader, Internet Explorer, etc. without first having to manually decrypt the documents. Oracle refers to this encryption process as 'sealing', and it is thanks to the freely available Oracle IRM Desktop that end users can transparently open 'sealed' documents within desktop applications without needing to know they are encrypted and without being able to save them out in unencrypted form. So Oracle IRM provides an amazing, unprecedented capability to secure and track every copy of your most sensitive information - even enabling end user access to be revoked long after the documents have been copied to home computers or burnt to CD/DVDs. But what doesn't it do? The main limitation of Oracle IRM (and IRM products in general) is format and platform support. Oracle IRM supports by far the broadest range of desktop applications and the deepest range of application versions, compared to other IRM vendors. This is important because you don't want to exclude sensitive business processes from being 'sealed' just because either the file format is not supported or users cannot upgrade to the latest version of Microsoft Office or Adobe Reader. But even the Oracle IRM Desktop can only open 'sealed' documents on Windows and does not, for example, currently support CAD (although this is coming in a future release). IRM products from other vendors are much more restrictive. To address this limitation Oracle has just made available the Oracle IRM Wrapper, an all-format, any-platform encryption/decryption utility. It uses the same core Oracle IRM web services and classification-based rights model to manually encrypt and decrypt files of any format on any Java-capable operating system. The encryption envelope is the same, and it uses the same role- and classification-based rights as 'sealing', but before you can use 'wrapped' files you must manually decrypt them. Essentially it is old-school manual encryption/decryption using the modern classification-based rights model of Oracle IRM. So if you want to share sensitive CAD documents, ZIP archives, media files, etc. with a partner, and you already have Oracle IRM, it's time to get 'wrapping'! Please note that the Oracle IRM Wrapper is made available as a free sample application (with full source code) and is not formally supported by Oracle. However it is informally supported by its author, Martin Lambert, who also created the widely-used Oracle IRM Hot Folder automated sealing application.

    Read the article

  • SQLAuthority News – Book Review – Beginning T-SQL 2008 by Kathi Kellenberger

    - by pinaldave
    Beginning T-SQL 2008 by Kathi Kellenberger Amazon Link Detail Review: Beginning T-SQL 2008 is one of the best books on the market if you are just beginning to work with Microsoft SQL Server, or have a little bit of experience and need to learn more quickly. Each chapter of the book introduces a new subject, and builds upon topics covered in previous chapters.  The author of the book, Kathi Kellenberger, understands that you need to form a solid foundation of knowledge before moving on to new topics, and sets up each subject nicely.  Because the chapters move in an orderly progression, you continue to use skills you learned earlier. One of the best features of Beginning T-SQL 2008 is that each chapter has multiple examples and exercises.  Many books introduce a topic and then never go back to it.  This book gives enough examples that you will be familiar with the subject when you come across it in real life.  The exercises at the end of the chapter mean that you will be using the skills you learned – and there is no better way to cement a subject in your brain. The book also includes discussions of the common errors that programmers will come across, how to avoid them, and how to fix them if they happen.  Ms. Kellenberger understands that not only do mistakes happen, but they are bound to happen if you aren’t trained properly.  Mistakes are part of the learning process! The book begins by discussing relational theory, so that programmers will understand the way T-SQL works from the ground up.  It also walks readers through writing accurate queries, combining set-based and procedural processing, embedding logic in stored functions, and so much more. Overall, the main goal of Beginning T-SQL 2008 is to introduce novices to SQL programming, and quickly familiarize them with the basics of running the program.  The book is written with the idea that readers will not know any of the technical terms or vocabulary.  However, if you are a little more familiar with SQL and looking to become better, you will still find this book very helpful. Rating: 4.5+ Stars Summary: I cannot recommend Beginning T-SQL 2008 highly enough.  If you are going to buy any beginner’s guide to Transact-SQL, this is the one you should spend your money on.  You can save yourself a lot of time and effort later by using this very affordable manual to learn the basics, which will allow you to become an expert much faster. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

    Read the article

  • Best of "The Moth" 2010

    - by Daniel Moth
    It is that time again (as in 2004, 2005, 2006, 2007, 2008, 2009) to look back at my blog for the past year and identify areas of interest that seem to be more prominent than others. After doing so, representative posts follow in my top 5 list (in random order). 1. This was the year where, for the first time since 2004, I had to move my blog engine (blogger.com –> dasBlog), host provider (zen –> godaddy), and web server technology and OS (apache on Linux –> IIS on Windows Server). My goal was not to break any permalinks or the look and feel of this website. A series of posts covered how I achieved that goal, culminating in a tool for others to use if they wanted to do the same: Tool to convert blogger.com content to dasBlog. Going forward I aim to be sharing more small code utilities like that one… 2. At work I am known for being fairly responsive on email, and more importantly never dropping email balls on the floor. This is due to my email processing system, which I shared here: Processing Email in Outlook. I will be sharing more tips with regards to making the best of the Office products. 3. There is no doubt in my mind that this is the year people will remember as the one where Microsoft finally fights back in the mobile space. Even though the new platform means my Windows Mobile book sales will dwindle :-), I am ecstatic about Windows Phone 7 both as a consumer and as a developer. On the release day, to get you started I shared the top 10 Windows Phone 7 developer resources. I will be sharing my tips from my experience in writing code for and consuming this new platform… 4. For my HPC developer friends using Visual Studio, I shared Slides and code for MPI Cluster Debugger and also gave you all the links you need for getting started with Dryad and DryadLINQ from MSR. Expect more from me on cluster development in the coming year… 5. Still in the HPC space, but actually also in game and even mainstream development, the big disruption and opportunity comes in the form of GPGPU and, on the Microsoft platform, (currently) DirectCompute. Expect more from me on GPGPU development in the coming year… Subscribe via the link on the left to stay tuned for 2011… I wish you a very Happy New Year (with whatever definition of happiness works for you)! Comments about this post welcome at the original blog.

    Read the article

  • SQLAuthority News – Social Media Series – Twitter and Myself

    - by pinaldave
    Pinal Dave on Twitter! Frequent readers of my blog might know that I am trying to get more involved in all social media sites, both professionally and personally.  Readers might also know that I have often struggled with finding the purpose of some social media sites – Twitter especially.  One of the great uses of social media is to stay connected and updated with followers.  Twitter’s 140 character limit means that Twitter is a great place to get quick updates from the world, but not a lot of deep information.  In fact, I have the feeling that Twitter’s form might actually limit its usefulness – especially for complex subjects like SQL Server. However, the #sqlhelp hash tag has for sure overcome that belief. You can instantly talk about SQL and get help with your SQL problems on Twitter. I believe in keeping up with the changing times, and it didn’t feel right to give up on Twitter.  So I have determined a good way to use Twitter and set rules for myself.  The problem I was facing was that if I followed everyone who interested me and let anyone follow me, I was completely overwhelmed by the amount of information Twitter could give me every day.  It didn’t seem like 140 characters should be able to take up so much of my time, but it took hours to sort through all the updates to find things that were of interest to me and to SQL Server. First, I was forced to unfollow anyone who made too many updates every day.  This was not an easy decision, but just for my own sake I had to limit the amount of information I could take in every day.  I still let anyone follow me who wants to, because I didn’t want to limit my readership, and I hope that they do not feel the way I did – that there are too many updates! Next, I made sure that the information I put on Twitter is useful and to the point.  I try to announce new blog posts on Twitter at least once a day, and I also try to find five posts from other people every day that are worth re-Tweeting.  This forces me to stay active in the community.  But it is not all business on Twitter.  It is also a place for me to post updates about my family and home life, for anyone who is interested. In simple words, I talk about everything and anything on Twitter. If you’d like to follow me, my Twitter handle is www.twitter.com/pinaldave.  It is a good place to start if you’d like to keep updated with my blog and find out who I follow and who my influences are.  Twitter is perfect for getting little “tastes” of things you’re interested in.  If you are interested in my blog, SQL Server, or both, I hope that my Twitter updates will be interesting and helpful. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • The embarrassingly obvious about SQL Server CE

    - by Edward Boyle
    I have been working with SQL servers in one form or another for almost two decades now. But I am new to SQL Server Compact Edition. In the past weeks I have been working with SQL Server CE a lot. The SQL, not a problem, but the engine itself is very new to me. One of the issues I ran into was a simple SQL statement taking excessive amounts of time; by excessive, I mean over one second. I wrote a little code to time the method. Sometimes it took under one second, other times as long as three seconds. –But it was a simple update statement! As embarrassing as it is, why it was slow eluded me. I posted my issue to MSDN and I got a reply from ErikEJ (MS MVP) who runs the blog “Everything SQL Server Compact”. I know little to nothing about SQL Server Compact. This guy is completely obsessed with (that is, very well versed in) CE. If you spend any time in MSDN forums, it seems that this guy single-handedly has the answer for every CE question that comes up. Anyway, he said: “Opening a connection to a SQL Server Compact database file is a costly operation, keep one connection open per thread (incl. your UI thread) in your app, the one on the UI thread should live for the duration of your app.” It hit me: all databases have some connection overhead, and SQL Server CE is not a database engine running as a service, drinking Jolt Cola, waiting for someone to talk to him so he can spring into action and show off his quarter-mile sprint capabilities. Imagine if you had to start the SQL Server process every time you needed to make a database connection. Essentially, that is what you are doing with SQL Server CE. Having worked a lot with enterprise-level SQL Servers, I had to come around to the mental image that my open connection to SQL Server CE is basically starting a service, my own private service, and by closing the connection, I am shutting down my little private service. After making the changes in my code, I lost any reservations I had about using CE. At present, my Data Access Layer class has a constructor; in that constructor I open my connection. I also have OpenConnection and CloseConnection methods, and I have implemented IDisposable to clean up any connections in Dispose(). I am still finalizing how this assembly will function. – That’s beside the point. All I’m trying to say is: “Opening a connection to a SQL Server Compact database file is a costly operation”
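    A minimal sketch of Erik's advice (the database file name, table, and query are illustrative; SqlCeConnection comes from the System.Data.SqlServerCe namespace):

        using System;
        using System.Data.SqlServerCe;

        // Keep one SqlCeConnection open per thread for the lifetime of the app,
        // since opening a connection effectively starts your "private service".
        public sealed class DataAccessLayer : IDisposable
        {
            private readonly SqlCeConnection _conn;

            public DataAccessLayer()
            {
                _conn = new SqlCeConnection(@"Data Source=|DataDirectory|\app.sdf");
                _conn.Open(); // pay the engine startup cost exactly once
            }

            public int TouchItem(int id)
            {
                using (var cmd = new SqlCeCommand(
                    "UPDATE Items SET Hits = Hits + 1 WHERE Id = @id", _conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    return cmd.ExecuteNonQuery(); // no per-call connection overhead
                }
            }

            public void Dispose() { _conn.Dispose(); } // shut the engine down once
        }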

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Many organizations absorb, take over or merge with other organizations. In these cases, one of the most difficult parts of the process is the merging or changing of the IT systems that the employees use to do their work, process payments, and even get paid. Normally this means that the two companies have disparate systems, and several approaches can be used to bring the two organizations' technology together. An organization may choose to retain both systems, and manage them separately. The advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly “sunset” or stop using one organization’s system, and cut over to the other system immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import it into the other. Employees at the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet. Implementation: A distributed computing paradigm can be a good strategic solution for most of these strategies. Retaining both systems is made simpler by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application. There is no need to set up a VPN or any other connections than just to the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure for the same reason. Extracting data to Azure has the same limitations as an on-premise system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment, you pay for the data transfer, so a mixed migration strategy is not recommended. However, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well for a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the services not only into and out of the Azure platform, but internally as well. This is a form of the Hybrid Application use-case documented here. References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
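    As a hedged illustration of the "series of services" idea (the contract and names are invented for this sketch; it is plain WCF, which a Windows Azure role can host and expose through an endpoint):

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // One service contract reachable by both organizations' systems: the
        // acquiring company's apps and the acquired company's apps call the
        // same endpoint instead of merging their internal systems first.
        [ServiceContract]
        public interface IEmployeeDirectory
        {
            [OperationContract]
            EmployeeDto GetEmployee(string employeeId);
        }

        [DataContract]
        public class EmployeeDto
        {
            [DataMember] public string Id { get; set; }
            [DataMember] public string Name { get; set; }
        }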

    Read the article

  • LLBLGen Pro feature highlights: model views

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) To be able to work with large(r) models, it's key that you can view subsets of these models so you can have a better, more focused look at them - for example, because you want to display how a subset of entities relate to one another, rather than as a flat list of entities. LLBLGen Pro offers this in the form of Model Views. Model Views are views on parts of the entity model of a project, and the subsets are displayed in a graphical way. Additionally, one can add documentation to a Model View. As Model Views display parts of the model in a graphical way, they're easier to explain to people who aren't familiar with entity models, e.g. the stakeholders you're interviewing for your project. The documentation can then be used to communicate specifics of the elements on the model view to the developers who have to write the actual code. Below I've included an example. It's a model view on a subset of the entities of AdventureWorks. It displays several entities, their relationships (both relational and inheritance relationships) and also some specifics gathered from the interview with the stakeholder. As the information is inside the actual project the developer will work with, the information doesn't have to be converted to/from e.g. Word documents or other intermediate formats; it's the same project. This makes sure there are fewer errors / misunderstandings. (Of course you can hide the docked documentation pane or dock it to another corner.) The Model View can contain entities which are placed in different groups. This makes it ideal to group entities together for close examination even though they're stored in different groups. The Model View is a first-class citizen of the code generator. This means you can write templates which consume Model Views and generate code accordingly. E.g. you can write a template which generates a service per Model View and exposes the entities in the Model View as a single entity graph, fetched through a method. (This template isn't included in the LLBLGen Pro package, but it's easy to write it up yourself with the built-in template editor.) Viewing an entity model in different ways is key to fully understanding the entity model, and Model Views help with that.

    Read the article

  • Using Rich Text Editor (WYSIWYG) in ASP.NET MVC

    - by imran_ku07
       Introduction:          In the ASP.NET MVC forum I found some questions regarding a sample HTML rich text box editor (also known as a WYSIWYG editor). So I decided to create a sample ASP.NET MVC web application which uses a rich text box editor. There are a lot of HTML editors available, but for this sample application I decided to use the cross-browser WYSIWYG editor from openwebware. In this article I will discuss what changes are needed to make this editor work with ASP.NET MVC. I have also attached the sample application for download at http://www.speedfile.org/155076. Also note that I will only show the important features, not discuss every feature in detail.   Description:          So let's start by creating a sample ASP.NET MVC application. You need to add the following script files,         jquery-1.3.2.min.js        jquery_form.js        wysiwyg.js        wysiwyg-settings.js        wysiwyg-popup.js          Just put these files inside the Scripts folder. Also put wysiwyg.css in your Content folder and add the following folders to your project        addons        popups          Also create an empty folder Uploads to store the uploaded images. Next open wysiwyg.js and set your configuration                  // Images Directory        this.ImagesDir = "/addons/imagelibrary/images/";                // Popups Directory        this.PopupsDir = "/popups/";                // CSS Directory File        this.CSSFile = "/Content/wysiwyg.css";              Next create a simple View TextEditor.aspx inside the Views/Home folder and add the following HTML.        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage" %>            <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">        <html >            <head runat="server">                <title>TextEditor</title>                <script src="../../Scripts/wysiwyg.js" type="text/javascript"></script>                <script src="../../Scripts/wysiwyg-settings.js" type="text/javascript"></script>                <script type="text/javascript">                            WYSIWYG.attach('text', full);                            </script>            </head>            <body>                <% using (Html.BeginForm()){ %>                    <textarea id="text" name="test2" style="width:850px;height:200px;">                    </textarea>                    <input type="submit" value="submit" />                <%} %>            </body>        </html>                  Here I have just added a textarea control and a submit button inside a form. 
Note that the id of the textarea and the first parameter of the WYSIWYG.attach function are the same. Next comes HomeController.cs        using System;        using System.Collections.Generic;        using System.Linq;        using System.Web;        using System.Web.Mvc;        using System.IO;        namespace HtmlTextEditor.Controllers        {            [HandleError]            public class HomeController : Controller            {                public ActionResult Index()                {                    ViewData["Message"] = "Welcome to ASP.NET MVC!";                    return View();                }                    public ActionResult About()                {                                return View();                }                        public ActionResult TextEditor()                {                    return View();                }                [AcceptVerbs(HttpVerbs.Post)]                [ValidateInput(false)]                public ActionResult TextEditor(string test2)                {                    Session["html"] = test2;                            return RedirectToAction("Index");                }                        public ActionResult UploadImage()                {                    if (Request.Files[0].FileName != "")                    {                        Request.Files[0].SaveAs(Server.MapPath("~/Uploads/" + Path.GetFileName(Request.Files[0].FileName)));                        return Content(Url.Content("~/Uploads/" + Path.GetFileName(Request.Files[0].FileName)));                    }                    return Content("a");                }            }        }          Simple code: it just saves the posted HTML into Session. Here the parameter of the TextEditor action is test2, which is the same as the name of the textarea control in the TextEditor.aspx view. Also note that ValidateInput is false, so it's up to you to defend against XSS. There is also an action method which simply saves the uploaded file inside the Uploads folder.          I am uploading the file using the jQuery Form Plugin. Here is the code, found in insert_image.html inside the addons folder,        function ChangeImage() {            var myform=document.getElementById("formUpload");                    $(myform).ajaxSubmit({success: function(responseText){                insertImage(responseText);                        window.close();                }            });        }          And here is the Index view, which simply renders the HTML of the editor that was saved in Session        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>        <asp:Content ID="indexTitle" ContentPlaceHolderID="TitleContent" runat="server">            Home Page        </asp:Content>        <asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">            <h2><%= Html.Encode(ViewData["Message"]) %></h2>            <p>                To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.            </p>            <%if (Session["html"] != null){                  Response.Write(Session["html"].ToString());            } %>                    </asp:Content>   Summary:          Hopefully you will enjoy this article. Just download the code and see the effect. From a security point of view, you must handle XSS attacks yourself. I have uploaded the sample application at http://www.speedfile.org/155076
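    One hedged addition to the sample: because ValidateInput is false, the posted HTML should be sanitized before it goes into Session and is later re-rendered. The whitelist below is deliberately naive and only illustrative - a vetted HTML sanitizer library is the safer choice:

        using System.Text.RegularExpressions;

        // NOT a complete XSS defense - strips only script blocks and inline
        // event handlers as a first line of protection.
        public static class HtmlSanitizer
        {
            public static string Sanitize(string html)
            {
                if (string.IsNullOrEmpty(html)) return html;
                // Remove <script> elements and their content.
                html = Regex.Replace(html, @"<script[^>]*>.*?</script>", string.Empty,
                                     RegexOptions.IgnoreCase | RegexOptions.Singleline);
                // Remove inline event handlers such as onclick="...".
                html = Regex.Replace(html, @"\son\w+\s*=\s*(""[^""]*""|'[^']*')", string.Empty,
                                     RegexOptions.IgnoreCase);
                return html;
            }
        }
        // In the POST action: Session["html"] = HtmlSanitizer.Sanitize(test2);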

    Read the article

  • .NET Reflector Pro Coming…

    The very best software is almost always originally the creation of a single person. Readers of our 'Geek of the Week' will know of a few of them.  Even behemoths such as MS Word or Excel started out with one programmer.  There comes a time with any software that it starts to grow up, and has to move from this form of close parenting to being developed by a team.  This has happened several times within Red-Gate: SQL Refactor, SQL Compare, and SQL Dependency Tracker, not to mention SQL Backup, were all originally the work of a lone coder, who subsequently handed over the development to a structured team of programmers, test engineers and usability designers. Because we loved .NET Reflector when Lutz Roeder wrote and nurtured it, and, like many other .NET developers, used it as a development tool ourselves, .NET Reflector's progress from being the apple of Lutz's eye to being a Red-Gate team-based development seemed natural.  Lutz, after all, eventually felt he couldn't afford the time to develop it to the extent it deserved. Why, then, did we want to take on .NET Reflector?  Different people may give you different answers, but for us in the .NET team, it just seemed a natural progression. We're always very surprised when anyone suggests that we want to change the nature of the tool since it seems right just as it is. .NET Reflector will stay very much the tool we all use and appreciate, although the new version will support .NET 4, and will have many improvements in the accuracy of its decompiling. Whilst we've made a lot of improvements to Reflector, the radical addition, which we hope you'll want to try out as well, is '.NET Reflector Pro'. This is an extension to .NET Reflector that allows the debugging of decompiled code using the Visual Studio debugger. It is an add-in, but we'll be charging for it, mainly because we prefer to live indoors with a warm meal, rather than outside in tents, particularly when the winter's been as cold as this one has. We're hoping (we're even pretty confident!) that you'll share our excitement about .NET Reflector Pro. .NET Reflector Pro integrates .NET Reflector into Visual Studio, allowing you to seamlessly debug into third-party code and assemblies, even if you don't have the source code for them. You can now treat decompiled assemblies much like your own code: you can step through them and use all the debugging techniques that you would use on your own code. Try the beta now.

    Read the article

  • Silverlight Cream for March 10, 2010 - 2 -- #811

    - by Dave Campbell
    In this Issue: AfricanGeek, Phil Middlemiss, Damon Payne, David Anson, Jesse Liberty, Jeremy Likness, Jobi Joy(-2-), Fredrik Normén, Bobby Diaz, and Mike Taulty(-2-). Shoutouts: Shawn Wildermuth blogged that they posted My "What's New in Silverlight 3" Video from Øredev Last Fall Shawn Wildermuth also has a post up for his loyal followers: Where to See Me At MIX10 Jonas Follesø has presentation materials up as well: MVVM presentation from NDC2009 on Vimeo Adam Kinney updated his Favorite Tool and Library Downloads for Silverlight From SilverlightCream.com: Styling Silverlight ListBox with Blend 3 In his latest Video Tutorial, AfricanGeek is animating the ListBox control by way of Expression Blend 3. Animating the Silverlight Opacity Mask Phil Middlemiss has written a Behavior that lets you turn a FrameworkElement into an opacity mask for its parent container... check out his tutorial and grab the code. AddRange for ObservableCollection in Silverlight 3 Damon Payne has a post up discussing the problem with large amounts of data in an ObservableCollection, and how using AddRange is a performance booster. Easily rotate the axis labels of a Silverlight/WPF Toolkit chart David Anson blogged a solution to rotating the axis labels of a Silverlight and WPF chart. Persisting the Configuration (Updated) Jesse Liberty has a good discussion on the continuation of his HyperVideo Platform, talking about everything he needs from the database in the form of configuration information... including the relationships. Animations and View Models: IAnimationDelegate Check out Jeremy Likness' IAnimationDelegate that lets your ViewModel fire and respond to animations without having to know all about them. Button Style - Silverlight Jobi Joy converted a WPF control template into Silverlight... and you'll want to download the XAML he's got for this :) A Simple Accordion banner using ListBox Jobi Joy also has an Image Accordion created in Expression Blend... and it's a 'drop this XAML in your User Control' kinda thing... again, go grab the XAML :) WCF RIA Services Silverlight Business Application – Using ASP.NET SiteMap for Navigation Fredrik Normén has a code-laden post up on RIA Services and the ASP.NET SiteMap. He is using the Silverlight Business app template that comes with WCF RIA Services. A Simple, Selectable Silverlight TextBlock (sort of)... Bobby Diaz shares with us his solution for a Text control that can be copied from in the same manner 'normal' web controls can be. He also includes a link to another post on the same topic. Silverlight 4 Beta Networking. Part 11 - WCF and TCP Mike Taulty has another pair of video tutorials up in his Networking series. This one is on WCF over TCP Silverlight 4 Beta Networking. Part 12 - WCF and Polling HTTP Mike Taulty's 12th networking video tutorial is on WCF with HTTP polling duplex. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • Comix is an Awesome Comics Archive Viewer for Linux

    - by Asian Angel
    Do you have a terrific collection of comics in electronic form but need a great app to view them with? If you have a Linux system then we have the perfect app for you…Comix, the open source comic reading powerhouse. For our example we installed Comix on our Ubuntu 10.10 system. Just go to the Ubuntu Software Center and conduct a quick search. When you go to install Comix in the Ubuntu Software Center, make sure to scroll all the way to the bottom and select Unarchiver for .rar files. The listing appears as a “non-free version” for some reason, but displays as free once selected. Odd, but nothing to worry about in the end… Once Comix is installed you can find it in the Graphics section of the Ubuntu Menu. Comix also comes with a nice set of options to let you customize the app to best suit those important comic reading needs. Here is a comprehensive list of the features this little comic reading powerhouse packs into one easy to use package: fullscreen mode, double page mode, fit-to-screen mode, zooming and scrolling, rotation and mirroring, magnification lens, changeable image scaling quality, image enhancement, can read right-to-left to fit manga, etc., caching for faster page flipping, bookmarks support, customizable GUI, archive comments support, archive converter, thumbnail browser, standards compliant, available in multiple languages (English, Swedish, Simplified Chinese, Spanish, Brazilian Portuguese, & German), reads “JPEG, PNG, TIFF, GIF, BMP, ICO, XPM, & XBM” image formats, reads “ZIP & tar archives natively, RAR archives through the unrar program”, runs on Linux, FreeBSD, NetBSD, and virtually any other UNIX-like OS, and more! Have fun reading those comics on your favorite Linux system! Interested in learning more about Comix? Then be certain to drop by the homepage! Comix Homepage

    Read the article

  • MVC Architecture

    Model-View-Controller (MVC) is an architectural design pattern first written about and implemented by Trygve Reenskaug in 1978, during the year he spent working with Xerox PARC on a Smalltalk application. According to Trygve, “The essential purpose of MVC is to bridge the gap between the human user's mental model and the digital model that exists in the computer. The ideal MVC solution supports the user illusion of seeing and manipulating the domain information directly. The structure is useful if the user needs to see the same model element simultaneously in different contexts and/or from different viewpoints.” (Trygve Reenskaug on MVC) The MVC pattern is composed of three core components: the Model, the View, and the Controller. The Model encapsulates the core application data and functionality. The primary goal of the Model is to maintain its independence from the View and Controller components, which together form the user interface of the application. The View retrieves data from the Model and displays it to the user; it represents the output of the application. Traditionally the View has read-only access to the Model, because it should not change the Model's data. The Controller receives and translates input into requests on the Model or View components, and is responsible for invoking the methods on the Model that can change its state. The primary benefit of using MVC as an architectural pattern, compared to other patterns, is flexibility. That flexibility comes from the distinct separation of concerns it establishes between the three components. Because of this separation, interaction between the components is limited to interfaces rather than concrete classes, which allows each component to be hot-swapped when the needs of the application, or its availability requirements, change. MVC can easily be applied to C# and the .NET Framework. In fact, Microsoft created an MVC project template that puts the standard MVC structure in place before any coding begins: it creates folders for the three key components and adds default Model, View and Controller classes to the project. Personally I think MVC is a great pattern for web applications, because they can be viewed from a myriad of devices: standard web browsers, text-only web browsers, mobile phones, smart phones, iPads and iPhones, just to get started. With accessibility needs potentially increasing, hot-swappable components are a perfect fit: the core functionality of the application is retained while the View component is swapped out based on the calling device, so that the display is targeted to that specific device.
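
    To make that separation concrete, here is a minimal C# sketch of the three components. The Product/console names are hypothetical, invented purely for illustration; the point is that the View hides behind an interface, so a device-specific implementation could be swapped in without touching the Model or the Controller.

    using System;

    // Model: encapsulates core application data and behavior; knows
    // nothing about the View or the Controller.
    public class ProductModel
    {
        public string Name { get; private set; }
        public decimal Price { get; private set; }

        public ProductModel(string name, decimal price)
        {
            Name = name;
            Price = price;
        }

        public void ApplyDiscount(decimal percent)
        {
            Price -= Price * percent / 100m;
        }
    }

    // The View is programmed against an interface so implementations
    // can be hot-swapped per device without changing the other parts.
    public interface IProductView
    {
        void Render(ProductModel model); // read-only access to the Model
    }

    // A plain-text view; a mobile- or browser-specific view would
    // implement the same interface instead.
    public class ConsoleProductView : IProductView
    {
        public void Render(ProductModel model)
        {
            Console.WriteLine("{0}: {1:C}", model.Name, model.Price);
        }
    }

    // Controller: translates user input into requests on the Model,
    // then asks the View to display the result.
    public class ProductController
    {
        private readonly ProductModel _model;
        private readonly IProductView _view;

        public ProductController(ProductModel model, IProductView view)
        {
            _model = model;
            _view = view;
        }

        public void Discount(decimal percent)
        {
            _model.ApplyDiscount(percent); // only the Controller mutates the Model
            _view.Render(_model);          // the View reads the Model, never writes it
        }
    }

    public static class Program
    {
        public static void Main()
        {
            var controller = new ProductController(
                new ProductModel("Widget", 10.00m),
                new ConsoleProductView());
            controller.Discount(25m); // prints "Widget: $7.50" under a US locale
        }
    }

    Swapping ConsoleProductView for, say, a mobile-formatted view is a one-line change in Main, which is exactly the hot-swap flexibility the pattern is praised for.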

    Read the article

  • Enterprise 2.0 Conference: Building Social Business

    - by kellsey.ruppel
    The way we work is changing rapidly, offering an enormous competitive advantage to those who embrace the new tools that enable contextual, agile and simplified information exchange and collaboration to distributed workforces and networks of partners and customers. As many of you are aware, Enterprise 2.0 is the term for the technologies and business practices that liberate the workforce from the constraints of legacy communication and productivity tools like email. It provides business managers with access to the right information at the right time through a web of inter-connected applications, services and devices. Enterprise 2.0 makes accessible the collective intelligence of many, translating to a huge competitive advantage in the form of increased innovation, productivity and agility. The Enterprise 2.0 Conference takes a strategic perspective, emphasizing the bigger picture implications of the technology and the exploration of what is at stake for organizations trying to change not only tools, but also culture and process. Beyond discussion of the "why", there will also be in-depth opportunities for learning the "how" that will help you bring Enterprise 2.0 to your business. You won't want to miss this opportunity to learn and hear from leading experts in the fields of technology for business, collaboration, culture change and collective intelligence. Oracle is a proud Gold sponsor of the Enterprise 2.0 Conference, taking place this week in Boston. Come and learn about Oracle at the following panel sessions and Market Leaders Theater Sessions.
    Tuesday, June 19, 2012 at 1:30 p.m. | Market Theater Presentation: Into the Activity Stream, and Beyond! Introducing Oracle Social Network | Oracle Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter
    Tuesday, June 19, 2012 at 2:30 p.m. | Panel Session: Innovation versus Integration | Oracle Panel Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter
    Wednesday, June 20, 2012 at 1:30 p.m. | Business Leadership Roundtable | Oracle Panel Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter
    Wednesday, June 20, 2012 at 3:00 p.m. | Market Theater Presentation: Into the Activity Stream, and Beyond! Introducing Oracle Social Network | Oracle Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter
    Thursday, June 21, 2012 at 8:30 a.m. | Panel Session: Collecting and Processing Big Data: Architecting Systems that Scale | Oracle Panel Speaker: Ashok Joshi, Senior Director, Berkeley DB Development
    Thursday, June 21, 2012 at 11:00 a.m. | Panel Session: The Future of Big Data: What's Next | Oracle Panel Speaker: Ashok Joshi, Senior Director, Berkeley DB Development
    Be sure to stop by and visit Oracle booth #501 to see live demonstrations of Oracle Social Network and Oracle WebCenter!

    Read the article

  • SnagIt Live Writer Plug-in updated

    - by Rick Strahl
    I've updated my free SnagIt Live Writer plug-in again, as there have been a few issues with the new release of SnagIt 11. It appears that TechSmith has trimmed the COM object and removed a bunch of redundant functionality, which has broken the older plug-in. I also updated the layout and added SnagIt's latest icons to the form. Finally, I've moved the source code to GitHub for easier browsing and downloading for anybody interested, and easier updating for me. This plug-in is not new - I created it a number of years back - but I use the hell out of it, both for my blogging and for a few internal apps that use the MetaWebLog API to update online content. The plug-in makes it super easy to add captured image content directly into a post and upload it to the server. What does it do? Once installed, the plug-in shows up in the list of plug-ins. When you click it, it launches a SnagIt capture dialog. Typically you set the capture settings once, and then save your settings. After that, a single click or ENTER press gets you off capturing. If you choose the Show in SnagIt preview window option, the image you capture is displayed in the preview editor, where you can mark up images - one of SnagIt's great strengths IMHO. The image editor has a bunch of really nice effects for framing, marking up and highlighting images that is really sweet. Here's a capture from a previous image in the SnagIt editor where I applied the saw tooth cutout effect. Images are saved to disk and can optionally be deleted immediately, since Live Writer creates copies of original images in its own folders before uploading the files. No need to keep the originals around, typically. The plug-in works with SnagIt versions 7 and later. It's a simple thing of course - nothing magic here, but incredibly useful, at least to me. If you're using Live Writer and you own a copy of SnagIt, do yourself a favor and grab this and install it as a plug-in.
    Resources:
    SnagIt Windows Live Writer Plug-in Installer
    Source Code on GitHub
    Buy SnagIt
    © Rick Strahl, West Wind Technologies, 2005-2012
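
    For anyone curious how a plug-in like this hooks into Live Writer, below is a minimal sketch of an image-inserting content source written against the Windows Live Writer plug-in API (WindowsLive.Writer.Api). The GUID and names are placeholders, and the file picker merely stands in for the spot where the real plug-in drives SnagIt's COM automation; see the GitHub source above for the actual implementation.

    using System.Windows.Forms;
    using WindowsLive.Writer.Api; // reference WindowsLive.Writer.Api.dll from the Live Writer SDK

    // Registers the plug-in with Live Writer; GUID and names are placeholders.
    [WriterPlugin("A7FCA2A6-0000-0000-0000-000000000001", "Image Capture Example",
        Description = "Inserts a captured image into the post.")]
    [InsertableContentSource("Image Capture Example")]
    public class ImageCapturePlugin : ContentSource
    {
        // Called when the user picks the plug-in from the Insert list.
        public override DialogResult CreateContent(IWin32Window dialogOwner, ref string content)
        {
            // The real plug-in invokes SnagIt's capture automation here and
            // gets back the path of the captured image; for a self-contained
            // sketch we just let the user pick an existing image file.
            using (var dlg = new OpenFileDialog { Filter = "Images|*.png;*.jpg;*.gif" })
            {
                if (dlg.ShowDialog(dialogOwner) != DialogResult.OK)
                    return DialogResult.Cancel;

                // Emitting an <img> tag referencing a local file is enough:
                // Live Writer copies the image into the post and uploads it
                // to the blog when the post is published.
                content = string.Format("<img src=\"file:///{0}\" />",
                                        dlg.FileName.Replace('\\', '/'));
                return DialogResult.OK;
            }
        }
    }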

    Read the article

  • Getting Started With nServiceBus on VAN Mar 31

    - by van
    Topic: nServiceBus is a mature and powerful open source framework that enables you to design robust, scalable, message-based, service-oriented architectures. The latest improvements in the configuration API enable developers to quickly get started and build a simple working system that uses the messaging infrastructure. The goal of this session is to give you a jump start with the framework, introduce basic concepts such as message handlers, Sagas, Pub/Sub and the Generic Host, and also create a working demo application that uses publish/subscribe messaging. The content of the session is aimed at developers who are interested in learning how to get started with nServiceBus in order to design and build distributed systems. Bio: Bernard Kowalski is currently a Software Developer at Microdesk, one of Autodesk's leading partners in providing a variety of Geospatial and Computer-Aided Design solutions. Bernard has experience developing .NET framework-based applications utilizing Windows Forms, Windows Services, ASP.NET MVC, and Web services. In a recent project, Bernard architected and implemented a distributed system based on SOA principles, using an open source implementation of an Enterprise Service Bus. Bernard develops software with Agile patterns and practices, using Domain Driven Design combined with TDD (Test Driven Development). He is familiar with all of the following APIs: Autodesk Vault/Product Stream API, AutoCAD ActiveX/VBA/.NET API, AutoCAD Mechanical API, Autodesk Inventor API, Autodesk MapGuide Enterprise. Prior to joining Microdesk, Bernard worked as a researcher and teacher at the University of Science and Technology in Krakow, Poland, where he was awarded a PhD in Computer Methods in Materials Science. He also participated in research projects where he developed applications for analysis of hot compression test results using advanced optimization techniques, and developed Finite Element Method-based programs for thermal and stress analysis using C++ and FORTRAN. Bernard is a member of the Domain Driven Design and ALT.NET user groups in NYC. Virtual ALT.NET (VAN) is the online gathering place of the ALT.NET community. Through conversations, presentations, pair programming and dojos, we strive to improve, explore, and challenge the way we create software. Using net conferencing technology such as Skype and LiveMeeting, we hold regular meetings, open to anyone, usually taking the form of a presentation or an Open Space Technology-style conversation. Please see the Calendar (http://www.virtualaltnet.com/Home/Calendar) to find a VAN group that meets at a time convenient to you, and feel welcome to join a meeting. Past sessions can be found on the Recording page. To stay informed about VAN activities, you can subscribe to the Virtual ALT.NET Google Group and follow the Virtual ALT.NET blog. Times below are Central Standard Time. Start Time: Wed, Mar 31, 2010 8:00 PM UTC/GMT -5 hours. End Time: Wed, Mar 31, 2010 10:00 PM UTC/GMT -5 hours. Attendee URL: http://www.virtualaltnet.com/van Zach Young http://www.virtualaltnet.com
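
    As a taste of the message-handler and pub/sub concepts the session covers, here is a minimal sketch in the style of the NServiceBus 2.x API that was current at the time. The message and handler names are hypothetical, and endpoint configuration is omitted.

    using NServiceBus;

    // Messages are plain classes marked with the IMessage marker
    // interface (required in NServiceBus 2.x) so the bus can transport them.
    public class PlaceOrder : IMessage
    {
        public string OrderId { get; set; }
    }

    // An event published when the order has been accepted; every
    // subscribed endpoint receives it (pub/sub).
    public class OrderAccepted : IMessage
    {
        public string OrderId { get; set; }
    }

    // NServiceBus discovers this class automatically and calls Handle()
    // whenever a PlaceOrder message arrives on the endpoint's input queue.
    public class PlaceOrderHandler : IHandleMessages<PlaceOrder>
    {
        // Injected by the framework so the handler can send or publish.
        public IBus Bus { get; set; }

        public void Handle(PlaceOrder message)
        {
            // ... validate and persist the order here ...

            // Publish an event to all subscribers instead of replying
            // to just the sender - the publish/subscribe part of the demo.
            Bus.Publish(new OrderAccepted { OrderId = message.OrderId });
        }
    }

    Because the handler depends only on the message contract, the same pattern scales from the simple demo built in the session to much larger distributed systems.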

    Read the article

< Previous Page | 607 608 609 610 611 612 613 614 615 616 617 618  | Next Page >