Search Results

Search found 700 results on 28 pages for 'subscription'.


  • Oracle Unveils Industry’s Broadest Cloud Strategy

    - by kellsey.ruppel
    Oracle Unveils Industry's Broadest Cloud Strategy
    Adds Social Cloud and Showcases Early Customers
    Redwood Shores, Calif. – June 6, 2012

    "Almost seven years of relentless engineering and innovation plus key strategic acquisitions. An investment of billions. We are now announcing the most comprehensive Cloud on the planet Earth," said Oracle CEO Larry Ellison. "Most cloud vendors only have niche assets. They don't have platforms to extend. Oracle is the only vendor that offers a complete suite of modern, socially-enabled applications, all based on a standards-based platform."

    News Facts
    • In a major strategy update today, Larry Ellison announced the industry's broadest and most advanced Cloud strategy and introduced Oracle Cloud Social Services, a broad Enterprise Social Platform offering.
    • Oracle Cloud delivers a broad set of industry-standards-based, integrated services that provide customers with subscription-based access to Oracle Platform Services, Application Services, and Social Services, all completely managed, hosted, and supported by Oracle.
    • Offering a wide range of business applications and platform services, Oracle Cloud is the only cloud that enables customers to avoid the data and business process fragmentation that occurs when using multiple, siloed public clouds.
    • Oracle Cloud is powered by leading enterprise-grade infrastructure, including Oracle Exadata and Oracle Exalogic, providing customers and partners with a high-performance, reliable, and secure infrastructure for running critical business applications.
    • Oracle Cloud enables easy self-service for both business users and developers. Business users can order, configure, extend, and monitor their applications. Developers and administrators can easily develop, deploy, monitor, and manage their applications.
    • As part of the event, Oracle also showcased several early Oracle Cloud customers and partners, including system integrators and independent software vendors.

    Oracle Cloud Platform Services
    Built on a common, complete, standards-based, and enterprise-grade set of infrastructure components, Oracle Cloud Platform Services enable customers to speed time to market and lower costs by quickly building, deploying, and managing bespoke applications. Oracle Cloud Platform Services will include:
    • Database Services to manage data and build database applications with the Oracle Database.
    • Java Services to develop, deploy, and manage Java applications with Oracle WebLogic.
    • Developer Services to allow application developers to collaboratively build applications.
    • Web Services to build Web applications rapidly using PHP, Ruby, and Python.
    • Mobile Services to allow developers to build cross-platform native and HTML5 mobile applications for leading smartphones and tablets.
    • Documents Services to allow project teams to collaborate and share documents through online workspaces and portals.
    • Sites Services to allow business users to develop and maintain visually engaging .com sites.
    • Analytics Services to allow business users to quickly build and share analytic dashboards and reports through the Cloud.

    Oracle Cloud Application Services
    Oracle Cloud Application Services provides customers access to the industry's broadest range of enterprise applications available in the cloud today, with built-in business intelligence, social, and mobile capabilities. Easy to set up, configure, extend, use, and administer, Oracle Cloud Application Services will include:
    • ERP Services: A complete set of Financial Accounting, Project Management, Procurement, Sourcing, and Governance, Risk & Compliance solutions.
    • HCM Services: A complete Human Capital Management solution including Global HR, Workforce Lifecycle Management, Compensation, Benefits, Payroll, and other solutions.
    • Talent Management Services: A complete Talent Management solution including Recruiting, Sourcing, Performance Management, and Learning.
    • Sales and Marketing Services: A complete Sales and Marketing solution including Sales Planning, Territory Management, Leads & Opportunity Management, and Forecasting.
    • Customer Experience Services: A complete Customer Service solution including Web Self-Service, Contact Centers, Knowledge Management, Chat, and E-mail Management.

    Oracle Cloud Social Services
    Oracle Cloud Social Services provides the broadest and most complete enterprise social platform available in the cloud today. With Oracle Cloud Social Services, enterprises can engage with their customers on a range of social media properties in a comprehensive and meaningful fashion, including social marketing, commerce, service, and listening. The platform also provides enterprises with a rich social networking solution for their employees to collaborate effectively inside the enterprise. Oracle's integrated social platform will include:
    • Oracle Social Network to enable secure enterprise collaboration and purposeful social networking for business.
    • Oracle Social Data Services to aggregate data from social networks and enterprise data sources to enrich business applications.
    • Oracle Social Marketing and Engagement Services to enable marketers to centrally create, publish, moderate, manage, measure, and report on their social marketing campaigns.
    • Oracle Social Intelligence Services to enable marketers to analyze social media interactions and to enable customer service and sales teams to engage with customers and prospects effectively.

    Supporting Resources
    • Oracle Cloud – learn more
    • cloud.oracle.com – sign up now
    • Webcast – watch the replay

    About Oracle
    Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ:ORCL), visit www.oracle.com.

    Trademarks
    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Read the article

  • Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics

    - by fip
    WebLogic JMS Topics typically run in a WLS cluster, and so do the SOA composites that receive these Topic messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. Because a JMS Topic by nature distributes a copy of each message to all of its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages across the SOA composites: How do we ensure that the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic? And how do we make sure the messages are evenly distributed (load balanced) across SOA cluster members? Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, and the composite so that JMS Topic messages are evenly distributed to the same composite running on different SOA cluster nodes without causing duplication.

    1. The typical configuration
    In this typical configuration, we achieve load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic along with sharable subscriptions. You can refer to the documentation for an explanation of PDTs, and this blog posting does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics. Our job is to apply this configuration in the context of SOA JMS Adapters. Doing so involves the following steps:
    Step A. Configure the JMS Topic to be UDD and PDT, at the WebLogic cluster that houses the JMS Topic.
    Step B. Configure the JCA Connection Factory with the proper ServerProperties, at the SOA cluster.
    Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file).
    Here are more details of each step:
    Step A. Configure the JMS Topic to be UDD and PDT. You do this at the WebLogic cluster that houses the JMS Topic. You can follow the instructions in the Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then on the same administration screen you can set "Distribution Type" to "Uniform" and the Forwarding Policy to "Partitioned", which make the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively.
    Step B. Configure the ServerProperties of the JCA Connection Factory. You do this step at the SOA cluster. This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory a certain type of "client". When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field as a semicolon-separated list:
    ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false
    You can refer to Chapter 8.4.10, "Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS", of the Adapter User Guide for the meaning of these properties. Please note: except for ClientID, the other properties (ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE, and TopicMessageDistributionAll=false) are all default settings for the JmsAdapter's connection factory, so you do NOT have to specify them explicitly. All you need to do is specify the ClientID.
    The ClientID is different from the subscriber ID that we discuss in the later steps. To keep it simple, just remember that you need to specify the ClientID and make it unique per connection factory. (The original post includes a screenshot of this example setting.)
    Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file). In the following example, the value 'MySubscriberID-1' is given as the value of the 'DurableSubscriber' property:
    <adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
      <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/>
      <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message">
        <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
          <property name="DurableSubscriber" value="MySubscriberID-1"/>
          <property name="PayloadType" value="TextMessage"/>
          <property name="UseMessageListener" value="false"/>
          <property name="DestinationName" value="jms/MyTestUDDTopic"/>
        </activation-spec>
      </endpoint-activation>
    </adapter-config>
    You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the Composite project.

    2. The "atypical" configurations
    For some systems, there may be restrictions that do not allow the aforementioned "typical" configuration to be applied. For example, some deployments may be required to configure the JMS Topic as a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We discuss those scenarios here:
    Configuration A: The JMS Topic is NOT a PDT. In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out the "replicated" messages (see the sketch after the references below).
    Configuration B: ClientIDPolicy=RESTRICTED. In this case, you need separate connection factories for different composites. More accurately, you need a separate connection factory for each different *.jca file of the JmsAdapter.
    References:
    • Managing Durable Subscriptions
    • WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions
    • JMS Troubleshooting: Configuring JMS Message Logging
    • Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API
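    For Configuration A, here is a minimal sketch of how that selector might be added to the activation spec shown in Step C. The MessageSelector activation-spec property is the standard way to attach a JMS selector to the JmsAdapter, but verify the exact property name against your adapter version; the other property values are carried over from the example above:
    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <!-- Filter out messages that were forwarded (replicated) from other topic members -->
      <property name="MessageSelector" value="NOT JMS_WL_DDForwarded"/>
      <property name="DurableSubscriber" value="MySubscriberID-1"/>
      <property name="PayloadType" value="TextMessage"/>
      <property name="DestinationName" value="jms/MyTestUDDTopic"/>
    </activation-spec>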

    Read the article

  • [Windows 8] Application bar popup button

    - by Benjamin Roux
    Here is a small control to create an application bar button which will display content in a popup when the button is clicked. Visually it gives this (screenshot in the original post). So how do you create this? First you have to use the AppBarPopupButton control below.

    namespace Indeed.Controls
    {
        public class AppBarPopupButton : Button
        {
            public FrameworkElement PopupContent
            {
                get { return (FrameworkElement)GetValue(PopupContentProperty); }
                set { SetValue(PopupContentProperty, value); }
            }

            public static readonly DependencyProperty PopupContentProperty =
                DependencyProperty.Register("PopupContent", typeof(FrameworkElement), typeof(AppBarPopupButton),
                    new PropertyMetadata(null, (o, e) => (o as AppBarPopupButton).CreatePopup()));

            private Popup popup;
            private SerialDisposable sizeChanged = new SerialDisposable();

            protected override void OnTapped(Windows.UI.Xaml.Input.TappedRoutedEventArgs e)
            {
                base.OnTapped(e);
                if (popup != null)
                {
                    var transform = this.TransformToVisual(Window.Current.Content);
                    var offset = transform.TransformPoint(default(Point));
                    sizeChanged.Disposable = PopupContent.ObserveSizeChanged()
                        .Do(_ => popup.VerticalOffset = offset.Y - (PopupContent.ActualHeight + 20))
                        .Subscribe();
                    popup.HorizontalOffset = offset.X + 24;
                    popup.DataContext = this.DataContext;
                    popup.IsOpen = true;
                }
            }

            private void CreatePopup()
            {
                popup = new Popup { IsLightDismissEnabled = true };
                popup.Closed += (o, e) => this.GetParentOfType<AppBar>().IsOpen = false;
                popup.ChildTransitions = new Windows.UI.Xaml.Media.Animation.TransitionCollection();
                popup.ChildTransitions.Add(new Windows.UI.Xaml.Media.Animation.PopupThemeTransition());
                var container = new Grid();
                container.Children.Add(PopupContent);
                popup.Child = container;
            }
        }
    }

    The ObserveSizeChanged method is just an extension method which observes the SizeChanged event (using Reactive Extensions – the Rx-Metro package on NuGet). If you're not familiar with Rx, you can replace this line (and the SerialDisposable stuff) with a simple subscription to the SizeChanged event (using +=), but don't forget to unsubscribe from it!
    public static IObservable<Unit> ObserveSizeChanged(this FrameworkElement element)
    {
        return Observable.FromEventPattern<SizeChangedEventHandler, SizeChangedEventArgs>(
            o => element.SizeChanged += o,
            o => element.SizeChanged -= o)
            .Select(_ => Unit.Default);
    }

    The GetParentOfType extension method just retrieves the first parent of the given type (it's a common extension method that every Windows 8 developer should have created!); a sketch of it is included at the end of this post. You can of course tweak the control (for example if you want to center the content on the button, or anything else) to fit your needs. How do you use this control? It's very simple: in an AppBar control, just add it and define the PopupContent property.

    <ic:AppBarPopupButton Style="{StaticResource RefreshAppBarButtonStyle}" HorizontalAlignment="Left">
      <ic:AppBarPopupButton.PopupContent>
        <Grid>
          [...]
        </Grid>
      </ic:AppBarPopupButton.PopupContent>
    </ic:AppBarPopupButton>

    When the button is clicked the popup is displayed. When the popup is closed, the app bar is closed too. I hope this will help you!
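    Since GetParentOfType is not shown in the post, here is a minimal sketch of what it might look like, assuming a standard visual-tree walk via VisualTreeHelper (the method name is the post's; the body is an illustration, not the author's original code):

    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Media;

    public static class DependencyObjectExtensions
    {
        public static T GetParentOfType<T>(this DependencyObject element) where T : DependencyObject
        {
            // Walk up the visual tree until a parent of the requested type is found.
            var parent = VisualTreeHelper.GetParent(element);
            while (parent != null && !(parent is T))
            {
                parent = VisualTreeHelper.GetParent(parent);
            }
            return parent as T;
        }
    }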

    Read the article

  • Oracle ATG Web Commerce 10 Implementation Developer Boot Camp - Reading (UK) - October 1-12, 2012

    - by Richard Lefebvre
    REGISTER NOW: Oracle ATG Web Commerce 10 Implementation Developer Boot Camp, Reading, UK, October 1-12, 2012! OPN invites you to join us for a 10-day implementation boot camp on Oracle ATG Web Commerce in Reading, UK from October 1-12, 2012. This 10-day boot camp is designed to provide partners with hands-on experience and technical training to successfully build and deploy Oracle ATG Web Commerce 10 applications. This particular boot camp is focused on helping partners develop the essential skills needed to implement every aspect of an ATG Commerce application from scratch (not CRS-based), with the specific goal of giving experienced Java/J2EE developers a path towards becoming functional, effective, and contributing members of an ATG implementation team. Built for new and experienced ATG developers alike, the collaborative nature of this program and its exercises has proven to be highly effective and extremely valuable in learning the best practices for implementing ATG solutions. Though not required, this boot camp provides a structured path to earning a Certified Oracle ATG Web Commerce 10 Specialization!
    What Is Covered: This boot camp is for Application Developers and Software Architects wanting to gain valuable insight into ATG application development best practices, as well as relevant and applicable implementation experience on projects modeled after four of the most common types of applications built on the ATG platform. The following learning objectives are all critical, and of equal priority, in enabling this role to succeed. This boot camp will help with:
    • Building a basic, functional, transaction-ready ATG Web Commerce 10 application.
    • Utilizing ATG's platform features such as scenarios, slots, targeters, user profiles, and segments to create a personalized user experience.
    • Building Nucleus components to support and/or extend application functionality.
    • Understanding the intricacies of ATG order checkout and fulfillment.
    • Specifying, designing, and implementing new commerce features in ATG 10.
    • Building a functional commerce application modeled after four of the most common types of applications built on the ATG platform, within an agile-based project team environment and under simulated real-world project conditions.
    Duration: The Oracle ATG Web Commerce 10 Implementation Developer Boot Camp is an instructor-led workshop spanning 10 days.
    Audience: Application Developers, Software Architects.
    Prerequisite Training and Environment Requirements:
    • Programming and markup experience with Java J2EE, JavaScript, XML, HTML, and CSS
    • Completion of the Oracle ATG Web Commerce 10 Implementation Specialist Development Guided Learning Path modules
    • Participants will be required to bring their own laptop that meets the minimum specifications: 64-bit PC and OS (e.g. Windows 7 64-bit), 4GB RAM or more, 40GB hard disk space. Laptops will require access to the Internet through Remote Desktop via Windows.
    Agenda Topics:
    Week 1 – Days 1 through 5: Build a Basic Commerce Application
    In week one of the boot camp training, we will apply knowledge learned from the ATG Web Commerce 10 Implementation Developer Guided Learning Path modules towards building a basic, transaction-ready commerce application. There will be little to no lecturing delivered in this boot camp, as developers will be fully engaged in ATG application development activities and best practices. Developers will work independently on the following lab assignments from days 1 through 5:
    Lab Assignments:
    1. Environment Setup
    2. Build a dynamic Home Page
    3. Site Authentication
    4. Build Customer Registration
    5. Display Top Level Categories
    6. Display Product Sub-Categories
    7. Display Product List Page
    8. Display Product Detail Page
    9. ATG Inventory
    10. Build "Add to Cart" Functionality
    11. Build Shopping Cart
    12. Build Checkout Page
    13. Build Checkout Review Page
    14. Create an Order and Build Order Confirmation Page
    15. Implement Slots and Targeters for Personalization
    16. Implement Pricing and Promotions
    17. Order Fulfillment
    Week 2 – Days 6 through 10: Team-based Case Project
    In the second week of the boot camp training, participants will be asked to join a project team that will select a case project for the team to implement. Teams will be able to choose from four of the most common application types developed and deployed on the ATG platform: hard goods with physical fulfillment, soft goods with electronic fulfillment, a service or subscription case example, and a course/event registration case example. Team projects will have approximately 160 hours of use cases/stories for each team to build (40 hours per developer). Each day's use cases/stories build upon the prior day's work, and therefore must be fully completed at the end of each day. Please note that this boot camp is intended to simulate real-world project conditions, and as such may require project teams to work beyond normal business hours. To promote further collaboration and group learning, each team will be asked to present their work and share the methodologies and solutions they've applied to their cases at the end of each day.
    Location: Oracle Reading CVC, TPC510 Room: Wraysbury, Reading, UK, 9:00 AM – 5:00 PM
    Registration Fee (10 Days): US $3,375
    Please click on the following link to REGISTER, or visit the Oracle ATG Web Commerce 10 Implementation Developer Boot Camp page for more information.
    Questions: Patrick Ty, Partner Enablement, Oracle Commerce. Phone: 310.343.7687 Mobile: 310.633.1013 Email: [email protected]

    Read the article

  • Backup Your Windows Home Server Off-Site with Asus Webstorage

    - by Mysticgeek
    Windows Home Server lets you back up machines on your network easily, but what about backing up the server data itself? Today we take a look at ASUS WebStorage for Windows Home Server, which provides you with secure off-site backup for WHS. To use the ASUS WebStorage service you'll need to sign up for a free account. It offers 1GB of free storage, and you can purchase an unlimited backup package for $39.99 for a one-year subscription. Note: they also offer online storage for individual PCs as well.
    Install ASUS WebStorage for WHS
    Browse to your shared folders on the server, open the Add-Ins folder, and copy over the WHSConnectorSetup2.2.4.088.msi file (link below), then close out of the folder. Now launch the Windows Home Server Console from one of the computers on your network, click Settings, then Add-ins. Under Available Add-ins click the Available tab and you'll see the ASUS WebStorage installer file we just copied over. Click the Install button. Installation kicks off, and when it's complete you'll need to close out of the console and reconnect.
    Using the ASUS WebStorage WHS Connector
    When you reconnect to the WHS Console, scroll over to the ASUS WebStorage icon and click on Settings. Now log into your ASUS account… then select the folders you want to back up to the WebStorage service. Select the radio button next to Enable to initialize the backup process… and the backup process begins. You can change which folders are backed up simply by disabling the backup process, unchecking the folder(s), then enabling the backup again.
    ASUS WebStorage Site
    After you have files backed up to the ASUS site, log into your account, and you're presented with an overview of the amount of storage you're using. It also shows what types of files are taking up certain amounts of space. You can browse through your backed-up files and folders. It allows you to share and sync backed-up data as well. Navigate to the file you want and you can easily download it by clicking on it, or share it out by clicking the share link below it. If you choose to share it, you're provided with a link to the file to send out to other users.
    Conclusion
    Users of Windows Home Server have been looking for an inexpensive cloud backup solution for quite some time. There are services such as JungleDisk, KeepVault, Wuala, etc. These services probably do a better job, but can start getting expensive once you start uploading GBs of data. Another disappointment with ASUS WebStorage is that you can only back up your WHS shares (from what we've been able to determine); it's an "all or nothing" type of thing. You cannot go in and select individual files and folders. The initial upload speeds can be a bit slow as well, although that might have something to do with the limited upload speed of the DSL connection we used to test it. Retrieving your data from the ASUS site is a breeze though, and all the data files are organized quite well. The WHS add-in is very easy to install and use. If you're looking for an off-site solution to back up your WHS data, you can test out ASUS WebStorage for free with a 1GB limit. This is good for testing the service, and it might be exactly what you're looking for. Other users may want a more advanced solution like KeepVault or CloudBerry, which is a front end for Amazon S3 storage.
    Download ASUS WebStorage WHS Addin
    Other WHS Offsite Backup Solutions: CloudBerry, JungleDisk, KeepVault, Wuala

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g – It had been a while since I'd tried something different, so here's what I did this week! Whenever our developers deployed processes to the BPEL runtime, or perhaps when a process got turned off due to connectivity issues, or maybe someone retired a process, I needed to know. So here's what I did.
    Step 1: Downloaded the Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job.
    Step 2: Cranked out two components using Oracle JDeveloper [within a new Web Project]:
    a) A simple Java class named FeedUpdater that extends org.quartz.Job. All this class does is connect to your BPEL runtime [via opmn:ormi] and fetch all events that occurred in the last "n" minutes. Events? If that doesn't ring a bell, they're right there on the BPEL Console: if you click on "Administration > Process Log", what you see are events. The API to retrieve the events is:
    //get the locator reference for the domain you are interested in.
    Locator l = ....
    //Predicate to retrieve events for last "n" minutes
    WhereCondition wc = new WhereCondition(...)
    //get all those events you needed.
    BPELProcessEvent[] events = l.listProcessEvents(wc);
    After you get all these events, write them out into an RSS feed XML structure and stream it into a file that resides either in your Apache htdocs, or wherever it can be accessed via HTTP. You can read all about RSS 2.0 here. At a high level, here is how it looks:
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <rss version="2.0">
      <channel>
        <title>Live Updates from the Development Environment</title>
        <link>http://soadev.myserver.com/feeds/</link>
        <description>Live Updates from the Development Environment</description>
        <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>
        <language>en-us</language>
        <ttl>1</ttl>
        <item>
          <guid>1290213724692</guid>
          <title>Process compiled</title>
          <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>
          <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>
          <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37 PST 2010</description>
        </item>
        ......
      </channel>
    </rss>
    For writing out XML content, read through the Oracle XML Parser APIs [search around for oracle.xml.parser.v2].
    b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple scheduler servlet that schedules the above "Job" class to be executed every "n" minutes. It is very straightforward. Here is the primary section of the code:
    try {
        Scheduler sched = StdSchedulerFactory.getDefaultScheduler();
        //get n and make a trigger that executes every "n" seconds
        Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);
        trigger.setName("feedTrigger" + System.currentTimeMillis());
        trigger.setGroup("feedGroup");
        JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);
        sched.scheduleJob(job, trigger);
    } catch(Exception ex) {
        ex.printStackTrace();
        throw new ServletException(ex.getMessage());
    }
    Look up the Quartz API and documentation; it will make this look much simpler. Now that both components were ready, I packaged the application into a war file and deployed it onto my application server. When the servlet initialized, the "n" second schedule was set/initialized.
    From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events within it, so that the feed file stays small and under control [a few KBs]. Next I opened up the feed XML in my browser – it requested a subscription – and here I was, watching new deployments/lifecycle events all popping up on my browser toolbar every 5 (actually n) minutes! Well, you could do it in a browser/reader of your choice, or perhaps read them like you read an email in your Thunderbird!
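    For completeness, here is a minimal sketch of what the FeedUpdater job might look like end to end. The org.quartz.Job contract is standard Quartz; the Locator/WhereCondition/BPELProcessEvent calls are the ones named above, but their package names, constructor arguments, and accessors are assumptions – check the 10g BPEL Client API javadoc for the exact signatures.

    import java.io.FileWriter;
    import org.quartz.Job;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    //BPEL Client API classes as referenced in the post; package names assumed.
    import com.oracle.bpel.client.Locator;
    import com.oracle.bpel.client.BPELProcessEvent;
    import com.oracle.bpel.client.util.WhereCondition;

    public class FeedUpdater implements Job {
        public void execute(JobExecutionContext context) throws JobExecutionException {
            try {
                //Connect to the BPEL domain (domain name/password are placeholders).
                Locator l = new Locator("default", "welcome1");
                //Predicate for events that occurred in the last "n" minutes (expression assumed).
                WhereCondition wc = new WhereCondition("...");
                BPELProcessEvent[] events = l.listProcessEvents(wc);

                //Build the RSS 2.0 structure shown above, one <item> per event.
                StringBuilder rss = new StringBuilder();
                rss.append("<?xml version='1.0' encoding='UTF-8'?>\n");
                rss.append("<rss version=\"2.0\">\n<channel>\n");
                rss.append("<title>Live Updates from the Development Environment</title>\n");
                for (BPELProcessEvent event : events) {
                    rss.append("<item>\n");
                    rss.append("<title>").append(event.toString()).append("</title>\n"); //use the real accessor here
                    rss.append("</item>\n");
                }
                rss.append("</channel>\n</rss>\n");

                //Stream the feed into a file served by Apache (path is an example).
                FileWriter out = new FileWriter("/var/www/html/feeds/soa.xml");
                out.write(rss.toString());
                out.close();
            } catch (Exception ex) {
                throw new JobExecutionException(ex);
            }
        }
    }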

    Read the article

  • In the Mobile and Tablet World, How Much is Too Much?

    - by andrewbrust
    The week of April 26th was a huge one in the world of mobile and tablet devices. There were so many individual developments, announcements, and solidifications of strategy, it's almost impossible to believe they occurred in the same month, let alone the same week. Things started with Apple and Gizmodo having a Law and Order moment over the latter's procurement of what appears to be the former's 4th gen iPhone prototype. We found out on the 26th that Gizmodo blogger Jason Chen's apartment was raided by police and, honestly, that was a bit much. But Apple didn't stop there. They also published Steve Jobs' critique of Adobe Flash and his explanation of Cupertino's embargo of Flash on iPhones, iPods, and iPads. If you ask me, this too was a bit much. Apple finished up the week by releasing the 3G version of its iPad product to the US market. I like (iLike?) my WiFi iPad. The idea of getting a version of it that required a second 3G service monthly subscription is, well, a bit much. Microsoft was in the news too. It killed a project it hadn't even acknowledged the existence of: the Courier tablet. That's a bit much too. If a tree falls in the woods, and Microsoft says they can't hear it anyway, could they really have chopped it down? Maybe Microsoft Research should have licensed some of Courier's technology from other parts of Microsoft. Then maybe they could have kept the product alive. Ask HTC: they're going to be licensing technology from Microsoft because Redmond insists that Google's Android operating system infringes on certain of their patents. And since HTC now builds a number of handsets on Android, instead of being beholden, as they once were, to Windows Mobile, that means they can keep making their products. Why does HTC have to pay the royalties, and not Google? Maybe Microsoft decided that going after GOOG would have been a bit much, even for them. The agreement came not a moment too soon: HTC released their "Droid Incredible" (that name's a bit much), an Android 2.1 handset with amazing hardware and HTC's own Sense UI, on April 30th (this past Friday). This phone is very well-reviewed. Maybe that's why Google basically decided to beg off introducing a version of its Nexus One phone (also manufactured by HTC) on the Verizon Wireless network. Google backing down? That's incredible, if not also a bit much. And that brings us to HP, which this week announced its acquisition of Palm and its webOS mobile phone touch-oriented operating system. HP also killed its own Slate initiative. Apparently HP realized that Windows 7, even with a proprietary HP touch UI added on top, is no match for the iPad. I'm guessing they think webOS might work a bit better. And I'm wondering if HP even wants to use webOS for phone handsets, beyond the Pre and Pixi. Using it just for slate devices would be a bit extreme, but maybe not too much. Honestly, this was not Microsoft's best week. It killed a project and a close partner did likewise. Then that same partner bought a competing OS product, while another partner released their new product that uses yet another competing OS platform. What did Microsoft actually produce this past week? An update to its Windows Phone 7 developer tools that actually works with the version of Visual Studio 2010 released on April 12th, and the version of Silverlight released three days later. That took three weeks to get synced up, and that's a bit much too. But at least it happened.
Windows Phone 7 is Microsoft’s best hope for a comeback in the SmartPhone market and to offer a credible touch-based tablet device.  This week, two of Microsoft’s slate initiatives died, and its only mobile phone victory was around its competitor’s operating system.  I hope the new platform gets Redmond out of the PC ghetto and into the classes of device that get people really excited today.  If it can’t, that would be a bit much; probably too much.  And, as the signs at the Lonestar Cafe in NYC used to say, too much ain’t enough.

    Read the article

  • Silverlight Cream for December 18, 2010 -- #1012

    - by Dave Campbell
    In this Issue: Mark Monster, Kevin Dockx, Jeremy Likness(-2-,-3-), Timmy Kokke, Den Delimarsky, Mike Snow, Samuel Jack(-2-), and Renuka Prasad(-2-). Above the Fold: Silverlight: "Trigger a Storyboard on ViewModel changes" Mark Monster WP7: "Microsoft Push Notification in Windows Phone 7" Renuka Prasad Shoutouts: SilverlightGal sent me the link to The Silverlight Dossier ... I think it's a pretty good start... additions I'd like to see are ways to submit to the various areas. Michael Crump put up a contest that runs from now to January 1st... Win a set of Infragistics Silverlight Controls with Data Visualization!... pretty cool, Michael! If you visit WynApse.com, you'll see I have a subscription to LearnVisualStudio.net... and now they have posted a batch of WP7 videos... 64 of them to be exact... wow!: New video series From SilverlightCream.com: Trigger a Storyboard on ViewModel changes Mark Monster has a great post up about triggering a Storyboard on ViewModel changes using the DataTrigger from Blend... cool stuff, and you can also do GoToStateAction or other actions or build yourowndang Trigger Action... fun awaits! ... sorry it took a while to post, Mark... been a tad overloaded here! Working with the Silverlight Rich Text Box control Kevin Dockx has had a post up for a while at SilverlightShow where he takes a good look at the RichText control and its various capabilities, including source so you can give it a dance yourself. Lessons Learned in Personal Web Page Part 3: Custom Panel and Listbox Jeremy Likness's part 3 of his Personal Web Page lessons learned covers the tres-cool 3D Panel he did... and he's got it all explained out... building from scratch via a custom panel and a Listbox control... A Silverlight MVVM Feed Reader from Scratch in 30 Minutes Jeremy Likness has a video tutorial showing building an MVVM/Silverlight feedreader in 30 minutes ... plus a couple mods that he noticed after the fact... beat that, HTML5 :) Jounce Part 8: Raising Property Changed In Jeremy Likness's latest post, he has number 8 in his series on his MVVM platform, Jounce. This time he's explaining the property changed notification, has a very cool way of doing it, and some interesting comments from readers. Dependency Injection, MVVM, Ninject and Silverlight Timmy Kokke has a great tutorial up with an associated demo project on Dependency Injection in MVVM and Silverlight. Some hidden features in the Windows Phone 7 emulator Den Delimarsky shows how to get some of the hidden features of your WP7 emulator, like the Call History, Call Settings, and Details about the numbers. Playing sound effects on Windows Phone 7 Mike Snow's latest tip is playing sound effects on your WP7 ... a little bit of XNA here and there, and badabing, badaboom, you've got sound! Day 3 of my "Build a Windows Phone 7 game in 3 days" Challenge Samuel Jack has a couple more posts up about his 'Build a WP7 game in 3 Days' challenge... first up is Day 3 from 8:50 to 22:30 ... wow... long day! ... but he's got something good going now... some good external links also Day 3.5 of my "Build a Windows Phone 7 game in 3 days" Challenge Samuel Jack's 3rd day ended with another half-day added on to put on some finishing touches... again, some good external links... and he finished with this: Say hello to Simon Squared, my 3.5 day old WP7 Game Microsoft Push Notification in Windows Phone 7 Renuka Prasad has a bunch of material up that I've not been aware of (how did that happen, people??) ...
    here's the first of a couple of his posts on Code Project ... a very nice tutorial on the Push Notification process... great diagrams and external links. Windows Phone 7 – Toast Notification Using Windows Azure Cloud Service Renuka Prasad has another WP7 post on CodeProject... this one on Toast Notification... and he's using Azure and WCF all rolled into it as well... great diagrams, descriptions and all the code. Stay in the 'Light!

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview
    Many web service developers use soapUI for various tests like smoke tests, unit tests, and load testing, because you can get a free edition that is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, then the free edition doesn't necessarily make it easy. This feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use soapUI free edition for dynamic payloads in a simplified example. Hopefully this will open the doors for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project etc. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services; you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI.
    Features in soapUI Free Edition Relating to this Topic
    The soapUI test tool provides a very feature-rich environment that can do many things, provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple of features:
    • Test Case Properties
    • Scripting with Groovy
    Basically, we will be using a property as a global variable and we will manipulate that property using a Groovy script.
    Setting Up Our Property
    Properties are available throughout soapUI, and here is a snippet from the soapUI website defining the locations:
    • Projects: for handling Project scope values, for example a subscription ID
    • TestSuites: for handling TestSuite scoped values, can be seen as "arguments" to a TestSuite
    • TestCases: for handling TestCase scoped values, can be seen as "arguments" to a TestCase
    • Properties TestStep: for providing local values/state within a TestCase
    • Local TestStep properties: several TestStep types maintain their own list of properties specific to their functionality: DataSource, DataSink, Run TestCase
    • MockServices: for handling MockService scoped values/arguments
    • MockResponses: for handling MockResponse scoped values
    • Global Properties: for handling Global properties, optionally from an external source
    For our example, we will be defining a custom property in a TestCase called SimpleAsyncPayload. The property can be created either in the Custom Properties tab located at the bottom of the Navigator panel when the TestCase is selected in the Navigator, or under the Properties label in the TestCase editor. [Screenshots in the original post: Navigator Panel; TestCase Editor]
    You will notice that I set a value of "0" for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;)
    Let's Get Groovy
    We will now add a new Groovy Script step to the TestCase called Manipulate Payload (TestCase Editor > Append Step > Groovy Script). Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor and add code to:
    1. Get the current value of the property we created called SimpleAsyncPayload.
    2. Convert the value of the property to an integer.
    3. Increment the value.
    4. Store the incremented value back into the TestCase property called SimpleAsyncPayload.
    The script should look something like the screenshot in the original post [Groovy Script Editor – Manipulate Payload]; a sketch of it follows at the end of this post. At this point we can test the script by simply running the TestCase (left-click on the green triangle in the upper left-hand corner of the TestCase editor). To verify that it ran correctly, we can look at the value of the SimpleAsyncPayload property, which should now be 1 [TestCase Editor – Run Results]. All that is left to complete the TestCase is to append another step of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select the SimpleAsyncBPELProcessBingd -> process operation (for any other information requested, simply use the defaults; if you are calling an asynchronous operation, do not add any assertions). We are now on familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), update the request with the necessary information (e.g., callback information for asynchronous operations). We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]). Your payload should now contain the inserted property reference [Test Request Editor – Inserted Property Value]. Just like before, we are now ready to run the TestCase. If everything goes as expected, we should see a response like the one shown in the original post [Message Viewer – Results of TestCase Run]. We are now set up to run a stress test where the payload changes with each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever can be done via soapUI scripting. Hopefully you have found this useful, and happy testing to you :)
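    Since the Manipulate Payload script itself only appears as a screenshot, here is a minimal sketch of what it might look like. The testRunner.testCase.getPropertyValue/setPropertyValue accessors are the standard soapUI Groovy-step API; the property name matches the one created above:

    // Read the current value of the TestCase property (assumed to hold an integer string).
    def current = testRunner.testCase.getPropertyValue("SimpleAsyncPayload")
    // Convert it to an integer and increment it.
    def next = current.toInteger() + 1
    // Store the incremented value back for the Test Request step to pick up.
    testRunner.testCase.setPropertyValue("SimpleAsyncPayload", next.toString())

    In the Test Request payload, the inserted property then appears as a property expansion such as ${#TestCase#SimpleAsyncPayload}, which soapUI resolves on each run.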

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-built for aspiring companies to create innovative internet-based applications and solutions. Whether you're a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind.
    Drivers
    • Cost avoidance
    • Time to market
    • Scalability
    Solution
    Here's a sketch of how a basic Software as a Service solution might be built out:
    Ingredients
    • Web Role – this hosts the core web application. Each web role will host an instance of the software, and as the user base grows, additional roles can be spun up to meet demand.
    • Access Control – this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well.
    • Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there's a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant's proprietary data to ensure isolation from other tenants (see the sketch after this list).
    • Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input, and provides finer-grained control over the system's scalability and throughput.
    • Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability, without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    • Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.
    Training & Examples
    These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    • Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    • SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    • Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    • Developing Applications for the Cloud, 2nd Edition (eBook) – This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud.
    • Fabrikam Shipping (SaaS reference application) – This is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation.
    See my Windows Azure Resource Guide for more guidance on how to get started, including more links to web portals, training kits, samples, and blogs related to Windows Azure.
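    To make the database-per-tenant ingredient concrete, here is a minimal sketch (all class names and connection strings are hypothetical illustrations, not part of the official training kit) of resolving a tenant's SQL Azure database before opening a connection:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    // Hypothetical resolver that maps each tenant to its own SQL Azure database,
    // keeping one tenant's operational data isolated from every other tenant's.
    public class TenantConnectionResolver
    {
        private readonly Dictionary<string, string> tenantDatabases =
            new Dictionary<string, string>
            {
                { "contoso",  "Server=tcp:myserver.database.windows.net;Database=saas_contoso;User ID=app;Password=..." },
                { "fabrikam", "Server=tcp:myserver.database.windows.net;Database=saas_fabrikam;User ID=app;Password=..." }
            };

        public SqlConnection OpenForTenant(string tenantId)
        {
            string connectionString;
            if (!tenantDatabases.TryGetValue(tenantId, out connectionString))
                throw new ArgumentException("Unknown tenant: " + tenantId);

            var connection = new SqlConnection(connectionString);
            connection.Open(); // each tenant gets its own database instance
            return connection;
        }
    }

    In a real system the tenant-to-database map would live in a shared catalog database rather than in code, but the isolation principle is the same.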

    Read the article

  • From Pocket to Instapaper

    - by Michael Freidgeim
    Some time ago I described the issues that I've had since a new version of Read It Later, named Pocket, was introduced. I waited with hope for a new upgrade, but I had a huge disappointment with the latest version of 16 June 2012. It didn't fix either of the two major problems that I have experienced since the new Pocket was introduced: 1. The iPad app still didn't show many of the saved links. 2. The ability to rename articles on iPad still wasn't restored. I posted a message to their forum. They did not show my comment on their forum (I would call it censorship, not moderation), but a few days ago I received an email recommending I "try logging out of the app on your iPad, and back in again." Their suggestion helped, but I don't understand why it is not posted as a recommendation on their support site. So I decided to try Instapaper on my iPad. Previously I had used it for Kindle. I never considered it on iPad before, because there was no free demo and I was very satisfied with RIL Free and then RIL Pro. Currently Instapaper costs $3, so the price is not an issue. I checked that it has most of the features that I use (e.g. renaming, folders) and I am quite happy with it now. Actually I am using Pocket (or RIL Free) for old bookmarks (I have 1000+ stored on my iPad) and for new bookmarks I am using Instapaper. Having solid experience with RIL/Pocket, I've created a list of suggestions for Marco Arment to implement:
    1. Some pages stored in Instapaper have essential sections of the text removed. E.g. in many blogs, comments are not stored in Instapaper. Some pages lost almost all of their important links (e.g. http://www.lib.rus.ec/a/32416 - sorry, in Russian). RIL/Pocket has two modes for storing offline: Web view and Article view. Web view includes all links/images of the original page and is very reliable. Article view is supposed to strip unrelated information, but often corrupts the content. I prefer to use offline Web view. Instapaper should also support an offline Web view, in case the stripped view removes important parts of the content.
    2. The black full-screen "Saving" overlay in iPad Safari is very annoying. After the user presses the bookmarklet, saving has some delay and then for a few seconds prevents reading the text. It would be better to show a message at the top of the page (as in Pocket). I am surprised that a full-screen popup was recently implemented as a desired feature.
    3. There are no comments allowed on http://blog.instapaper.com/. I would prefer to post some of these notes as comments on http://blog.instapaper.com/ rather than write them in my blog and then send a link to Marco. (I found a recommendation on how to add comment support on Tumblr at http://www.tumblr.com/help, but then realized that Marco was the lead developer of Tumblr.)
    4. Also there is no support forum. I understand that maintenance of a forum can be a hassle, but Stack Exchange forums could be referenced on the http://www.instapaper.com/main/support page, i.e. http://webapps.stackexchange.com/search?q=Instapaper or http://apple.stackexchange.com/search?q=Instapaper
    5. Tags are more convenient than folders, i.e. an ability for the same article to have more than one tag. Also, creating new folders is not supported offline, which is an annoying limitation.
    6. I would like to have a narrower list mode – in addition to the existing list modes, a subject-only or subject+site list would show more list items on a screen.
    7. The limit of 500 offline articles sounds quite big, but my RIL list exceeded 1000, so it could be an issue in the future.
    8. The Search button in the iPad version is visible, but doesn't work – it forces you to buy a Premium subscription. I think that's not correct. If the button in a paid version is visible and enabled, it should provide working functionality, e.g. search in article names only, and leave full-text search for the Premium subscription.
    9. Copy URL is an important operation and deserves to be at the first level of the Action menu, rather than in the Share sub-menu.
    I also have a comment regarding the post http://www.marco.org/2011/04/28/removed-instapaper-free. Marco Arment explained why he doesn't provide a free version of Instapaper. I believe that he is losing an essential part of his potential customers. When I was deciding which iPad application to choose, I selected RIL because I was able to play with the free version, and I liked it. I didn't have a chance to compare RIL and Instapaper on iPad, so I bought RIL Pro. For a user there is no point in paying even $3 if there is a similar free product that the user can try and see whether it is suitable for him/her.
    I've also played with Readability. It doesn't have folders or tags (which are very important for me), but it nicely supports full-text search.

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts… Three months ago I had an issue regarding Windows Azure where I was unable to log in to the management portal. At the time I contacted Azure support, the issue was soon resolved, and I thought no more about it. Until today, that is, when I received an email from Azure support providing a detailed analysis of the root cause, the fix, and moreover precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below. A few things were interesting to me:
    The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127, in case you can't be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches.
    This line: "Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification." I also found to be particularly interesting. I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play in the cloud by tying all their services into Azure Active Directory, and here we see a clear indication of that by making AAD the authentication mechanism for anyone using Windows Azure. I get the feeling that we're going to hear much, much more about AAD in the future; isn't it about time we could log on to Windows Azure SQL Database (formerly SQL Azure) without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn't it about time those things were combined? As I said, just some idle thoughts. Below is the transcript of the email if you are interested.
    @Jamiet This is regarding the support request <redacted> where in you were not able to login into the windows azure management portal with live id. We are providing you with the summary, root cause analysis and information about permanent fix:
    Incident Title: You were unable to access Windows Azure Portal after Microsoft Account to Azure Active Directory account Migration.
    Service Impacted: Management Portal
    Incident Start Date and Time: 8/24/2012 4:30:00 PM
    Date and Time Service was Restored: 10/17/2012 12:00:00 AM
    Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to login.
This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration. Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign-in the Management Portal after ~4:00 p.m. PST on August 24th, 2012.   We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total. Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service to handle logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domains.  This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD. Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains.   The remaining affected domains fell into two categories: Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident within 2 business days after the AAD team received an escalation from product support regarding those accounts. Windows Live Custom domains that were also provisioned in Office365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office365 which is a service that also uses AAD.   In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription. Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article

  • My own personal use of Oracle Linux

    - by wcoekaer
    It is always easier to explain something with examples... Many people still don't seem to understand some of the convenient things about using Oracle Linux, and since I personally (surprise!) use it at home, let me give you an idea. I have quite a few servers at home and I also have 2 hosted servers with a hosting provider. The servers at home I use mostly to play with random Linux-related things, or with Oracle VM, or just to try out various new Oracle products to learn more. I like the technology; it's like a hobby, really. To be able to have a good installation experience and use an officially certified Linux distribution, and not waste time trying to find the right libraries, I, of course, use Oracle Linux. Now, I can get a copy of Oracle Linux for free (even if I were not working for Oracle) and I can/could use it on as many servers at home (or at my company, if I worked elsewhere) for testing, development and production. I just go to http://edelivery.oracle.com/linux, download the version(s) I want, and off I go. I also have the right (and not because I am an employee) to take those images, put them on my own server and give them to someone else; in fact, I just recently set up my own mirror on my own hosted server. I don't have to remove oracle-logos, I don't have to rebuild the ISO images, I don't have to recompile anything; I can just put the whole binary distribution on my own server without a contract. I am perfectly free to do so. Of course the source code of all of this is there too: I have a copy of the UEK code at home, just cloned from https://oss.oracle.com/git/?p=linux-2.6-unbreakable.git. And as you can see, it gives you the entire changelog, check-ins, merges from Linus's tree: a complete overview of everything that got changed from kernel to kernel, from patch to patch, errata to errata. No obfuscation, no tarballs, no spending time with diff or reading bug reports to find out what changed (that seems silly to me). Some of my servers are on the external network and I need to be current with security errata, but guess what, no problem: my servers are hooked up to http://public-yum.oracle.com, which is open, free, and completely up to date, in a consistent, reliable way, with any errata, security or bugfix. So I have nothing to worry about. Again, not because I am an employee: anyone can do this. And with this, I also can, and have, set up my own mirror site that hosts these RPMs, both binary and source RPMs, because I am free to get them and distribute them. I am quite capable of supporting my servers on my own, so I don't need to rely on the support organization and I don't need to have a support subscription :-). So I don't need to pay. Neither would you, at least not with Oracle Linux. Another cool thing: the hosted servers came (unfortunately) with CentOS installed. While CentOS works just fine as is, I prefer to be current with my security errata (reliably) and I prefer to maintain one yum repository instead of two, so I converted them over to Oracle Linux as well (in place), and they happily receive and use the exact same RPMs. Since Oracle Linux is exactly the same as RHEL from a user/application point of view, including files like /etc/redhat-release and with no changes from .el. to .centos., I know I have nothing to worry about when installing one of the RHEL applications. So, OL everywhere makes my life a lot easier, and why not... Next! Since I run Oracle VM, I have tons of VMs on my machines; in some cases, on my big WOPR box, I have 15-20 VMs running.
    Well, no problem: OL is free and I don't have to worry about counting the number of VMs, whether it's 1, or 4, or more than 10... like some other alternatives started doing... And finally :) I like to try out new stuff, not 3-year-old stuff. So with UEK2 as part of OL6 (and 6.3 in particular) I can play with a 3.0.x-based kernel, and it just installs and runs perfectly cleanly with OL6: quite current stuff in an environment that I know works, with no need to toy around with an unsupported pre-alpha upstream distribution with libraries and versions that are not compatible with production software (I have nothing against Ubuntu or Fedora or openSUSE... they are just not what I can rely on or use for what I need, and I don't need a desktop). Pretty compelling, I say... And again, it doesn't matter that I work for Oracle; if I were working elsewhere, or not at all, all of the above would still apply. Student, teacher, developer, whatever. Contrast this with $349 per year for 2 sockets, one guest and self-support, just to get the software bits.
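    Hooking a server up to the public yum server, as described above, amounts to dropping one repository definition into /etc/yum.repos.d/. The sketch below assumes the OL6 repository layout and GPG key location that public-yum.oracle.com documented at the time (the site offered this as a downloadable public-yum-ol6.repo file); verify the paths against the site before relying on it:

        # /etc/yum.repos.d/public-yum-ol6.repo (illustrative sketch, not verbatim)
        [ol6_latest]
        name=Oracle Linux $releasever Latest ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

    With that in place, a plain "yum update" pulls any errata, security or bugfix packages straight from the public server.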

    Read the article

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    Photo by parl, 'Risk.' Under Creative Commons Attribution-NonCommercial-NoDerivs License. This has been a busy week for Microsoft, and for me as well.  On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX.  That evening I flew from New York to Seattle.  On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made.  Readers of this blog know I'm a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago.  On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.  But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts.  While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with iPod touch in the portable music player market. If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not.  HTML 5 and the Web are strategic, so here comes IE9, and it's a very good browser.  Try it and see.  Silverlight is strategic too, as are SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production.  Downloads of that product have exceeded Microsoft's projections by more than 50%, and the company is even citing analyst firms' figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that's a separate discussion.) Windows Phone is strategic too…I wasn't 100% positive of that before, but the Nokia agreement has made me confident.  Xbox as an entertainment appliance is also strategic.  Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good. But it's not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone's battery for their portable media needs, they're going to need a new platform.  They're going to feel abandoned.  Even if Zune lives, there have been other such cul-de-sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn't easy.  But Redmond won't be well-regarded by the victims of those decisions.  Instead, it gets black marks. What's the answer?
    I think it's a bit like the 1980s New York City "don't block the box" gridlock rules: don't enter an intersection unless you see a clear path through it.  If the light turns red and you're blocking the perpendicular traffic, that's your error in judgment.  You get fined and get points on your license, and you don't get to shrug it off as beyond your control.  Accountability is key.  The same goes for Microsoft.  If it decides to enter a market, it should see a reasonable path through to success in that market. Switching analogies, Microsoft shouldn't make investments haphazardly, and it certainly shouldn't ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns.  People won't continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments.  The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches.  It's true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too.  When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility.  That doesn't mean all risk is bad, but it does mean no product team's risk should be taken lightly. For mutual fund companies, it's the CEO's job to give his fund managers autonomy, but to make sure they're conforming to a standard of rational risk management, because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

    Read the article

  • Revisiting .NET, but what should I focus on?

    - by Wayne M
    After about a two-year hiatus, I'm brushing up on my .NET skills to find a .NET job (my previous two positions involved very little development, or development using legacy technologies, so apart from a few very minor apps I have not touched .NET in close to two years). I'm aware of things like ASP.NET MVC, and I have previously read up on things like NHibernate and DI/IoC, although I have yet to use them apart from very trivial "Hello World" type applications. I have a subscription to Rob Conery's Tekpub website and occasionally watch these videos when I have free time. My concern is this: I don't live in a very technical area. I would be surprised if any but the most tech-savvy companies have heard of, let alone use, ASP.NET MVC, NHibernate (or even LINQ/EF), or know about IoC. I would be willing to bet a large sum of money that 95% of the possible jobs I could obtain will use the following:
    - Visual SourceSafe, if any VCS at all
    - ASP.NET 2.0 WebForms (3.5 if lucky)
    - Raw ADO.NET on top of a very thin implementation of the Gateway pattern
    - Stored procedures in the database for most CRUD operations
    - Gratuitous use of code-behind, with a Service layer if I'm lucky
    If I were extremely lucky, I might find a shop that has heard of ORMs and either uses one or has written its own data abstraction. Also if I were lucky, the company would be using Model-View-Presenter. In light of this I'm not sure what I should focus on learning. Personally, I would prefer to be using the latest stuff - ASP.NET MVC, NHibernate, jQuery, WCF etc. Reality says I should go back to the basics, since it looks like most potential opportunities aren't going to be anywhere near the cutting edge, or anywhere close to it. And, as much as I would like to find a position and start to show the other developers the benefits, in my past experience this has usually resulted in my being fired for "not being a team player" and doing things the bad old way. So, I am curious how you would approach a situation like this. What should I focus on, in order to A) reacquaint myself with .NET, and B) prepare myself to obtain a .NET job that is more than likely going to use techniques that I and most other knowledgeable developers will scoff at?
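    For anyone who has been away from .NET for a while, here is a minimal sketch of the kind of data access the list above describes: raw ADO.NET in a thin gateway class calling a stored procedure. All names (connection string, procedure, parameter, table) are hypothetical, for illustration only:

        using System.Data;
        using System.Data.SqlClient;

        // A thin gateway in the style described above: raw ADO.NET, CRUD through
        // stored procedures, no ORM. All names here are hypothetical.
        public class CustomerGateway
        {
            private readonly string connectionString;

            public CustomerGateway(string connectionString)
            {
                this.connectionString = connectionString;
            }

            public DataTable GetById(int customerId)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("usp_Customer_GetById", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@CustomerId", customerId);

                    var table = new DataTable();
                    new SqlDataAdapter(command).Fill(table); // Fill opens and closes the connection itself
                    return table;
                }
            }
        }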

    Read the article

  • Single Sign On for a Web App

    - by Jeremy Goodell
    I have been trying to understand how this problem is solved for over a month now. I really need to come up with a general approach that works -- I'm basically the only resource who can do it. I have a theory, but I'm just not sure it's the easiest (or correct) approach and I haven't been able to find any information to support my ideas. Here's the scenario:
    1) You have a complex web application that offers secure content on a subscription basis.
    2) Users are required to log in to your application with a user name and password.
    3) You sell to large corporations, which already have a corporate authentication technology (for example, Active Directory).
    4) You would like to integrate with the corporate authentication mechanism to allow their users to log onto your Web App without having to enter their user name and password.
    Now, any solution you come up with will have to provide a mechanism for:
    - adding new users
    - removing users
    - changing user information
    - allowing users to log in
    Ideally, all these would happen "automagically" when the corporate customer made the corresponding changes to their own authentication. Now, I have a theory that the way to do this (at least for Active Directory) would be for me to write a client-side app that integrates with the customer's Active Directory to track the targeted changes, and then communicates those changes to my Web App. I think that if this communication were done via Web Services offered by my web app, then it would maintain an unhackable level of security, which would obviously be a requirement for these corporate customers. I've found some information about a Microsoft product called Active Directory Federation Services (ADFS) which may or may not be the right approach for me. It seems to be a bit bulky and to have some requirements that might not work for all customers. For other existing ID scenarios (like Athens and Shibboleth), I don't think a client application is necessary. It's probably just a matter of tying into the existing ID services. I would appreciate any advice anyone has on anything I've mentioned here. In particular, if you can tell me whether my theory is correct about providing a client-side app that communicates with server-side Web Services, or if I'm totally going in the wrong direction. Also, if you could point me at any web sites or articles that explain how to do this, I'd really appreciate it. My research has not turned up much so far. Finally, if you could let me know of any Web applications that currently offer this service (particularly as tied to a corporate Active Directory), I would be very grateful. I am wondering if other B2B Web apps, like salesforce.com or hoovers.com, offer a similar service for their corporate customers. I hate being in the dark and would greatly appreciate any light you can shed ... Jeremy
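    To make the "client-side app that tracks the targeted changes" idea concrete, here is a minimal sketch of how such an app could poll Active Directory for recently changed users with System.DirectoryServices before handing each result to the Web App's web services. This is an illustration only; the domain path, filter and attribute names are assumptions, not a finished design:

        using System;
        using System.DirectoryServices;

        // Polls Active Directory for user accounts changed since the last sync.
        // Domain path, filter and attribute names are illustrative assumptions;
        // a real sync would also need paging, deletion tracking and a secure
        // channel for forwarding the changes to the Web App's web services.
        public class DirectoryChangeTracker
        {
            public void FindChangedUsers(DateTime lastSync)
            {
                // AD generalized-time format used by the whenChanged attribute
                string since = lastSync.ToUniversalTime().ToString("yyyyMMddHHmmss.0Z");

                using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
                using (var searcher = new DirectorySearcher(root))
                {
                    searcher.Filter =
                        "(&(objectCategory=person)(objectClass=user)(whenChanged>=" + since + "))";
                    searcher.PropertiesToLoad.Add("sAMAccountName");
                    searcher.PropertiesToLoad.Add("mail");

                    foreach (SearchResult result in searcher.FindAll())
                    {
                        string account = (string)result.Properties["sAMAccountName"][0];
                        // Forward the change to the web app's web service here.
                        Console.WriteLine("Changed user: " + account);
                    }
                }
            }
        }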

    Read the article

  • Sql Server Compact 2005 on Visual Studio 2008

    - by Tim
    I'm working on a Windows Forms application that interacts with a Sql Compact database file created by SQL Server 2005. This application was originally developed in Visual Studio 2005 but was recently converted to a Visual Studio 2008 solution. With regard to Sql Compact, we made sure the references were all still set to the assemblies that handle the 2005 version of Sql Compact rather than Sql Compact 3.5. Having done this, the application still runs just as it should: it will still interact with the Compact database, perform synchronization operations, etc. However, I just discovered today that Visual Studio tools such as the DataSet Designer do not play well with a Sql Compact database file of an older version than 3.5. If I go to the New Connection... wizard, the only Sql Compact Data Source / Data Provider available is for Sql Compact 3.5. I assume that Visual Studio 2008 just doesn't include the data provider for the older version of Sql Compact by default. Is there a way to add the old version of Sql Compact to the list of "Data Sources" for the connection wizard? To see exactly what I'm referring to, click on the Tools menu of Visual Studio 2008 and click Connect to Database... In the window that comes up, click Change... next to the Data source setting. From this dialog there is no way I can select the earlier version of Sql Compact - only 3.5 is available. Maybe I need to add an assembly reference somewhere? Or copy some file(s) from my Visual Studio 2005 directory over to 2008? I would think there would have to be a way for Visual Studio 2008 to interact with a Sql Compact database from Sql Server 2005. To provide one more bit of detail, I discovered this problem when I went to my DataSet, right-clicked and tried to add a TableAdapter. The first screen that comes up says, "Choose Your Data Connection". If I leave it set to the Sql Compact connection that we've always used, I now get the following error when clicking the Next button:
    Failed to open a connection to the database. "The selected database was created with an earlier version of SQL Server Compact and needs to be upgraded to SQL Server Compact 3.5 before the connection can be opened or tested. Upgrade the database by creating a new data connection and completing the Add Connection dialog box." Check the connection and try again.
    The only problem here is that we still use Sql Server 2005, and if my understanding is correct, it does not produce subscription files that are compatible with Sql Compact 3.5. If I am wrong in this assumption, please correct me. Any help you can provide is greatly appreciated. Thank you.
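    Worth noting: the designer limitation described above shouldn't affect runtime code. As long as the project keeps referencing the 2005-era System.Data.SqlServerCe assembly, the older-format file can still be opened programmatically. A minimal sketch, with a hypothetical file path and table name:

        using System;
        using System.Data.SqlServerCe; // must resolve to the 2005-era (3.0/3.1) assembly, not 3.5

        class CompactSmokeTest
        {
            static void Main()
            {
                // Opens the older-format .sdf without an upgrade prompt, because the
                // older provider understands the older file format natively.
                using (var connection = new SqlCeConnection(@"Data Source=C:\data\Legacy.sdf"))
                using (var command = new SqlCeCommand("SELECT COUNT(*) FROM Customers", connection))
                {
                    connection.Open();
                    Console.WriteLine("Rows: " + command.ExecuteScalar());
                }
            }
        }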

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment that comprises 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows:
    Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg
    I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up).
    Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg
    Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg
    In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment. However, I'm having some issues with this: I cannot get it to work properly, even though it appears to be running. Here is a screenshot of the "WLI Process Instance Monitor" (with the AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg
    According to this screen the process seems to be running, as it shows up in the instance monitor. However, I don't see any output at either the AdminServer console or the mn1 console. In the single-server domain, output was visible from the JPD process "timeout" callback method, whose implementation is shown below:

        @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}")
        public void subscription(java.lang.String x0) {
            String toReturn = "";
            try {
                Context myCtx = new InitialContext();
                MBeanHome mbeanHome = (MBeanHome) myCtx.lookup("weblogic.management.home.localhome");
                toReturn = mbeanHome.getMBeanServer().getServerName();
                System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn);
            } catch (Exception e) {
                System.out.println("Exception!");
                e.printStackTrace();
            }
        }

        (...)

        @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout")
        public void myT_onTimeout(long time, java.io.Serializable data) {
            // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #//
            // input transform
            System.out.println("**** published at **** " + System.currentTimeMillis());
            publishControl.publish("aaaa");
            // parameter assignment
            // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #//
        }

    and here is the output visible at the "AdminServer" console in single-server domain testing:

        **** published at **** 1273238090713
        **** executed at **** 1273238132123 by: AdminServer
        **** published at **** 1273238152462
        **** executed at **** 1273238152562 by: AdminServer
        (...)

    What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.

    Read the article

  • Approach for authentication and storing user details.

    - by cappuccino
    Hey folks, I am using the Zend Framework, but my question is broadly about sessions / databases / auth (PHP MySQL). Currently this is my approach to authentication:
    1) The user signs in and the details are checked in the database - standard stuff, really.
    2) If the details are correct, only the user's unique ID is stored in the session, along with a security token (user unique ID + IP + browser info + salt). The session is written to the filesystem.
    I've been reading around, and many are saying that storing stuff in sessions is not a good idea, and that you should really only write a unique ID that refers back to the user's details, plus a security token to prevent session hijacking. So this is the approach I've taken; I used to write the user's details to the session, but I've moved that out. I wanted to know your opinions on this. I'm keeping sessions in the filesystem since I don't run on multiple servers, and since I'm only writing a tiny, tiny bit of data to sessions, I thought performance would be greater keeping sessions in the filesystem, to reduce load on the database. Once the session is written on authentication, it really is read-only from then on.
    3) The rest of the user's details (like subscription details, permissions, account info, etc.) are cached in the filesystem (this could easily be moved to memory if I wanted even more performance). So rather than keeping the user's details in the session, the user's details are cached in the file system. I'm using Zend_Cache, and the unique cache id is something like md5(/cache/auth/2892), where the number is the unique id of the user. I guess the benefit of this method is that once the user is logged in, there are essentially no database queries being run to get the user's details. I just wonder if this approach is better than keeping the whole lot in the session...
    4) As the user moves throughout the site, the only things that are checked are the ID in the session and the security token.
    So, overall: 1) is the filesystem more efficient than a database for this purpose? 2) have I taken enough security precautions? 3) is separating the user's details from the session into a cached file a pointless task?
    Thanks.
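    For what it's worth, the security token described in step 2 is essentially a keyed fingerprint of the session owner. Here is a minimal sketch of one way to compute it, written in C# for concreteness (the input set and salt handling are assumptions; the same shape is easy to reproduce in PHP with its hash() function):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Sketch of the session security token described above: a hash of
        // (user id + IP + browser info + server-side salt). On each request
        // the token is recomputed from the current request's values and
        // compared with the one stored at login; a mismatch suggests a
        // hijacked session.
        public static class SessionToken
        {
            public static string Compute(int userId, string ip, string userAgent, string salt)
            {
                string material = userId + "|" + ip + "|" + userAgent + "|" + salt;
                using (var sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(material));
                    return Convert.ToBase64String(hash);
                }
            }
        }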

    Read the article

  • php paypal ipn membership error

    - by aleksander haugas
    Hello. I integrated the membership subscription into the website, but I am failing to insert data into the database. Checking all of the functions, I generate a txt file with PHP containing the data, and that file is generated correctly. But I cannot verify any of the data, let alone insert it.

        <?php
        // read the post from the PayPal system and add 'cmd'
        $req = 'cmd=_notify-validate';
        foreach ($_POST as $key => $value) {
            $value = urlencode(stripslashes($value));
            $req .= "&$key=$value";
        }
        // Post back to the PayPal system to validate
        $header = "POST /cgi-bin/webscr HTTP/1.0\r\n";
        $header .= "Host: www.paypal.com\r\n"; // PayPal's servers expect a Host header; without it the response may never be VERIFIED
        $header .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
        $fp = fsockopen('ssl://www.paypal.com', 443, $errno, $errstr, 30);
        // Assign posted variables to local variables
        $item_name = $_POST['item_name'];
        $item_number = $_POST['item_number'];
        $payment_status = $_POST['payment_status'];
        $payment_amount = $_POST['mc_gross'];
        $payment_currency = $_POST['mc_currency'];
        $txn_id = $_POST['txn_id'];
        $receiver_email = $_POST['receiver_email'];
        $payer_email = $_POST['payer_email'];
        $user_id = $_POST['custom'];
        // Here I make the txt file and it works correctly,
        // but after this it does not work or make a txt file.
        // Insert and update data
        if (!$fp) {
            // HTTP ERROR
        } else {
            fputs($fp, $header . $req);
            while (!feof($fp)) {
                $res = fgets($fp, 1024);
                if (strcmp($res, "VERIFIED") == 0) {
                    // Check the status of the order
                    if ($payment_status == 'Completed') {
                        $txn_id_check = mysql_query("SELECT Transaction_ID FROM " . $db_table_prefix . "Users_Payments WHERE Transaction_ID='" . $txn_id . "'");
                        if (mysql_num_rows($txn_id_check) != 1) {
                            // If the payment was made to the specified owner and the owner matches
                            if ($receiver_email == $website_Payment_Email) {
                                // Account type according to the amount paid
                                if ($payment_amount == '10.00' && $payment_currency == 'EUR') {
                                    // add txn_id to database
                                    $log_query = mysql_query("INSERT INTO `" . $db_table_prefix . "Users_Payments` VALUES ('','" . $txn_id . "','" . $payer_email . "')");
                                    // update premium to 3
                                    $update_premium = mysql_query("UPDATE " . $db_table_prefix . "Users SET Group_ID='3' WHERE User_ID='" . $user_id . "'");
                                }
                            }
                        }
                    }
                } else if (strcmp($res, "INVALID") == 0) {
                    // log for manual investigation
                }
            }
            fclose($fp);
        }
        ?>

    I include this file with require_once(), together with the connection to the database. Any ideas? Thanks.

    Read the article

  • What is a good generic sibling control Javascript communication strategy?

    - by James
    I'm building a webpage that is composed of several controls, and I'm trying to come up with an effective, somewhat generic client-side sibling-control communication model. One of the controls is the menu control. Whenever an item is clicked in there, I want to expose a custom client-side event that other controls can subscribe to, so that I can achieve a loosely coupled sibling-control communication model. To that end I've created a simple Javascript event collection class (code below) that acts like a hub for control event registration and event subscription. This code certainly gets the job done, but my question is: is there a better, more elegant way to do this in terms of best practices or tools, or is this just a fool's errand?

        /// Event collection object - acts as the hub for control communication.
        function ClientEventCollection() {
            this.ClientEvents = {};
            this.RegisterEvent = _RegisterEvent;
            this.AttachToEvent = _AttachToEvent;
            this.FireEvent = _FireEvent;

            function _RegisterEvent(eventKey) {
                if (!this.ClientEvents[eventKey])
                    this.ClientEvents[eventKey] = [];
            }

            function _AttachToEvent(eventKey, handlerFunc) {
                if (this.ClientEvents[eventKey])
                    this.ClientEvents[eventKey][this.ClientEvents[eventKey].length] = handlerFunc;
            }

            function _FireEvent(eventKey, triggerId, contextData) {
                if (this.ClientEvents[eventKey]) {
                    for (var i = 0; i < this.ClientEvents[eventKey].length; i++) {
                        var fn = this.ClientEvents[eventKey][i];
                        if (fn)
                            fn(triggerId, contextData);
                    }
                }
            }
        }

        // load new collection instance.
        var myClientEvents = new ClientEventCollection();

        // register events specific to the control that owns it; this will be emitted by each respective control.
        myClientEvents.RegisterEvent("menu-item-clicked");

    Here is the part where the code above is consumed by source and subscriber controls.

        // menu control
        $(document).ready(function() {
            $(".menu > a").click(function(event) {
                //event.preventDefault();
                myClientEvents.FireEvent("menu-item-clicked", $(this).attr("id"), null);
            });
        });

        <div style="float: left;" class="menu">
            <a id="1" href="#">Menu Item1</a><br />
            <a id="2" href="#">Menu Item2</a><br />
            <a id="3" href="#">Menu Item3</a><br />
            <a id="4" href="#">Menu Item4</a><br />
        </div>

        // event subscriber control
        $(document).ready(function() {
            myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged);
            myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged2);
            myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged3);
        });

        function menuItemChanged(id, contextData) {
            alert('menuItemChanged ' + id);
        }
        function menuItemChanged2(id, contextData) {
            alert('menuItemChanged2 ' + id);
        }
        function menuItemChanged3(id, contextData) {
            alert('menuItemChanged3 ' + id);
        }

    Read the article

  • Publishing/subscribing multiple subsets of the same server collection

    - by matb33
    How does one go about publishing different subsets (or "views") of a single collection on the server as multiple collections on the client? Here is some pseudo-code to help illustrate my question:
    items collection on the server
    Assume that I have an items collection on the server with millions of records. Let's also assume that:
    - 50 records have the enabled property set to true, and
    - 100 records have the processed property set to true.
    All others are set to false.

        items:
        { "_id": "uniqueid1", "title": "item #1", "enabled": false, "processed": false },
        { "_id": "uniqueid2", "title": "item #2", "enabled": false, "processed": true },
        ...
        { "_id": "uniqueid458734958", "title": "item #458734958", "enabled": true, "processed": true }

    Server code
    Let's publish two "views" of the same server collection. One will send down a cursor with 50 records, and the other will send down a cursor with 100 records. There are over 458 million records in this fictitious server-side database, and the client does not need to know about all of those (in fact, sending them all down would probably take several hours in this example):

        var Items = new Meteor.Collection("items");

        Meteor.publish("enabled_items", function () {
            // Only 50 "Items" have enabled set to true
            return Items.find({enabled: true});
        });

        Meteor.publish("processed_items", function () {
            // Only 100 "Items" have processed set to true
            return Items.find({processed: true});
        });

    Client code
    In order to support the latency compensation technique, we are forced to declare a single collection Items on the client. It should become apparent where the flaw is: how does one differentiate between Items for enabled_items and Items for processed_items?

        var Items = new Meteor.Collection("items");

        Meteor.subscribe("enabled_items", function () {
            // This will output 50, fine
            console.log(Items.find().count());
        });

        Meteor.subscribe("processed_items", function () {
            // This will also output 50, since we have no choice but to use
            // the same "Items" collection.
            console.log(Items.find().count());
        });

    My current solution involves monkey-patching _publishCursor to allow the subscription name to be used instead of the collection name. But that won't do any latency compensation; every write has to round-trip to the server:

        // On the client:
        var EnabledItems = new Meteor.Collection("enabled_items");
        var ProcessedItems = new Meteor.Collection("processed_items");

    With the monkey-patch in place, this will work. But go into Offline mode and changes won't appear on the client right away -- we'll need to be connected to the server to see changes. What's the correct approach?

    Read the article

  • Interview with Lenz Grimmer about MySQL Connect

    - by Keith Larson
    Keith Larson: Thank you for allowing me to do this interview with you. I have been talking with a few different Oracle ACEs about the MySQL Connect Conference. I figured the MySQL community might be missing you as well. You have been very busy with Oracle Linux, but I know you still have an eye on the MySQL Community. How have things been?
    Lenz Grimmer: Thanks for including me in this series of interviews, I feel honored! I've read the other interviews, and really liked them. I still try to follow what's going on over in the MySQL community and it's good to see that many of the familiar faces are still around. Over the course of the 9 years that I was involved with MySQL, many colleagues and contacts turned into good friends and we still maintain close relationships. It's been almost 1.5 years since I moved into my new role here in the Linux team at Oracle, and I really enjoy working on a Linux distribution again (I worked for SUSE before I joined MySQL AB in 2002). I'm still learning a lot - Linux in the data center has greatly evolved in so many ways and there are a lot of new and exciting technologies to explore.
    Keith Larson: What were your thoughts when you heard that Oracle was going to deliver the MySQL Connect conference to the MySQL Community?
    Lenz Grimmer: I think it's a testament to the fact that Oracle deeply cares about MySQL, despite what many skeptics may say. What started as "MySQL Sunday" two years ago has now evolved into a full-blown sub-conference, with 80 sessions at one of the largest corporate IT events in the world. I find this quite telling; not many products at Oracle enjoy this level of exposure! So it certainly makes me feel proud to see how far MySQL has come.
    Keith Larson: Have you had a chance to look over the sessions? What are your thoughts on them?
    Lenz Grimmer: I did indeed look at the final schedule. The content committee did a great job with selecting these sessions. I'm glad to see that well-known and respected members of the MySQL community were involved in the content selection. The sessions cover a broad range of topics and technologies, covering both established topics and recent developments.
    Keith Larson: When you get a chance, what sessions do you plan on attending?
    Lenz Grimmer: I will actually be manning the Oracle booth in the exhibition area on one of these days, so I'm not sure if I'll have a lot of time for attending sessions. But if I do, I'd love to see the keynotes and catch some of the sessions that talk about recent developments and new features in MySQL, High Availability and Clustering. Quite a lot has happened and it's hard to keep up with this constant flow of new MySQL releases. In particular, the following sessions caught my attention:
    - MySQL Connect Keynote: The State of the Dolphin
    - Evaluating MySQL High-Availability Alternatives
    - CERN's MySQL "as a Service" Deployment with Oracle VM: Empowering Users
    - MySQL 5.6 Replication: Taking Scalability and High Availability to the Next Level
    - What's New in MySQL Server 5.6?
    - MySQL Security: Past and Present
    - MySQL at Twitter: Development and Deployment
    - MySQL Community BOF
    - MySQL Connect Keynote: MySQL Perspectives
    Keith Larson: So I will ask you, just like I have asked the others I have interviewed: any tips that you would give to people for handling the long hours at conferences?
    Lenz Grimmer: Wear comfortable shoes and make sure to drink a lot! Also prepare a plan of the sessions you would like to attend beforehand, and familiarize yourself with the venue, so you can get to the next talk in time without scrambling to find the location. The good thing about piggybacking on a large conference like Oracle OpenWorld is that you benefit from the whole infrastructure. For example, there is a nice schedule builder that helps you to keep track of your sessions of interest. Other than that, bring enough business cards and talk to people; build up your network among your peers and other MySQL professionals!
    Keith Larson: What features of the MySQL 5.6 release do you look forward to the most?
    Lenz Grimmer: There has been solid progress in so many areas like the InnoDB Storage Engine, the Optimizer, Replication and Performance Schema that it's hard for me to really highlight anything in particular. All in all, MySQL 5.6 sounds like a very promising release. I'm confident it will follow the tradition that Oracle already established with MySQL 5.5, which received a lot of praise even from very critical members of the MySQL community. If I had to name a single feature, I'm particularly and personally happy that the precise GIS functions have finally made it into a GA release - that was long overdue.
    Keith Larson: In your opinion, what is the best reason for someone to attend this event?
    Lenz Grimmer: This conference is an excellent opportunity to get in touch with the key people in the MySQL community and ecosystem and to get facts and information from the domain experts and developers that work on MySQL. The broad range of topics should attract people in a variety of roles and relations to MySQL, ranging from Developers and DBAs to CIOs considering MySQL as a viable solution for their requirements.
    Keith Larson: You will be attending MySQL Connect and have some Oracle Linux demos; do you see a growing demand for MySQL on Oracle Linux?
    Lenz Grimmer: Yes! Oracle Linux is our recommended Linux distribution and we have a good relationship with the MySQL engineering group. They use Oracle Linux as a base Linux platform for development and QA, so we make sure that MySQL and Oracle Linux are well tested together. Setting up a MySQL server on Oracle Linux can be done very quickly, and many customers recognize the benefits of using them both in combination. Because Oracle Linux is available for free (including free bug fixes and errata), it's an ideal choice for running MySQL in your data center. You can run the same Linux distribution on both your development/staging systems and the production machines, and you decide which of these should be covered by a support subscription and at which level of support. This gives you flexibility and provides some really attractive cost-saving opportunities.
    Keith Larson: Since I am a Linux user and fan, what is on the horizon for Oracle Linux?
    Lenz Grimmer: We're working hard on broadening the ecosystem around Oracle Linux, building up partnerships with ISVs and IHVs to certify Oracle Linux as a fully supported platform for their products. We also continue to collaborate closely with the Linux kernel community on various projects, to make sure that Linux scales and performs well on large systems and meets the demands of today's data centers. These improvements and enhancements will then be rolled into the Unbreakable Enterprise Kernel, which is the key ingredient that sets Oracle Linux apart from other distributions. We also have a number of ongoing projects which are making good progress, and I'm sure you'll hear more about this at the upcoming OpenWorld conference :)
    Keith Larson: What is something that more people should be aware of when it comes to Oracle Linux and MySQL?
    Lenz Grimmer: Many people assume that Oracle Linux is just tuned for Oracle products, such as the Oracle Database or our Engineered Systems. While it's of course true that we do a lot of testing and optimization for these workloads, Oracle Linux is and will remain a general-purpose Linux distribution that is a very good foundation for setting up a LAMP stack, for example. We also provide MySQL RPM packages for Oracle Linux, so you can easily stay up to date if you need something newer than what's included in the stock distribution. One more thing that is really unique to Oracle Linux is Ksplice, which allows you to apply security patches to the running Linux kernel without having to reboot. This ensures that your MySQL database server keeps up and running and is not affected by any downtime.
    Keith Larson: What else would you like to add?
    Lenz Grimmer: Thanks again for getting in touch with me, I appreciated the opportunity. I'm looking forward to MySQL Connect and Oracle OpenWorld and to meeting you and many other people from the MySQL community that I haven't seen for quite some time!
    Keith Larson: Thank you, Lenz!

    Read the article

  • CodePlex Daily Summary for Thursday, April 08, 2010

    New Projects
    - BackUpAnyWhere: BackUpAnyWhere
    - CustomFormbyEndUser: In project development, we often run into situations where different users have different requirements for the same report, and sometimes a user even needs to generate a report from scratch; in the past this might have been implemented with third-party development tools. In SQL Server 2005, Reporting Services lets end users edit reports themselves without coding, as long as they understand the data structure. This example uses Adventur...
    - DbExecutor - linq based database executor: IEnumerable based database reader. (linq like primitive sql executor)
    - DeepZoomRenderingPack: A collection of libraries and a plug-in architecture that turns various files (like PDF, PS, etc.) into a "Visual" representation that the DeepZoom ...
    - DotNetNuke Russian Language packs: DNNRussianLP - DotNetNuke Russian Language pack.
    - F# Refactor: Designed to bring Code Refactoring capabilities to the F# Language in Visual Studio 2010.
    - Invocando WebService e Site HTTP dinamicamente com HTTPWebRequest C#: Dynamically invoking an HTTP site and web service with HTTPWebRequest, passing the SoapAction and the XML envelope. Written in C#. www.biztalkbrasil.c...
    - Jitbit WYSWYG BBCode Editor: "Jitbit WYSIWYG-BBCode" is a browser-based JavaScript-powered WYSIWYG BBCode editor
    - MRDS Services for Phidgets: MRDS (Microsoft Robotics Developer Studio) Services for Phidgets provides additional services for Phidgets sensors and controllers that are not inc...
    - MSBuild Addin: This tool is a simple addin for VisualStudio 2008 used in association with Microsoft MSBuild. It allows you to run MSBuild directly inside Visual S...
    - NISHIL-BizTalk Custom Eventlog Functiod: While testing our maps, when something fails we can't trace it because we don't know what the output of the functoids is. Normally in a single ma...
    - Northest GNSG: Supinfo B3C Paris Northest University project. Galego, Neveu, Simon, Geissmann.
    - Oily: Composite application project for oil parameters. It's developed in C#
    - Outlook.Utility: The MSDN article Outlook Customization for Integrating with Enterprise Applications at http://msdn.microsoft.com/en-us/library/Aa479345 has quite a...
    - Particle Plot Pivot: Scan select particle physics experiment web sites for plots and generate a Pivot display for easy browsing.
    - project tca: project tca - translating chat application.
    - Satisfyr: A new way of performing assertions on tests so that they remain agnostic to the underlying test framework, and leverage .NET built-in lambda syntax.
    - sejce2008: jce se course wiki and projects links
    - SGB Controls: SGB Controls is a set of standard .net controls that include a number of enhancements to make life easier for the developer. These controls incl...
    - Syringe: Syringe is a lightweight service container and dependency injection library designed for use with ASP.NET MVC2. Supported features: Dependency inj...
    - topicbox: topicbox
    - Ur-Index: Ur-Index makes it a lot easier to create onomastic indexes for books in pdf format.
    - VietGeeks ZohoDocApis: A .NET Zoho Document APIs library to help developers integrate Zoho Docs easily with their websites
    - Webometrics Dashboard: Webometrics Dashboard
    - webpress: It is a WebBased CMS and Blog platform.
    - WPF Ink Canvas Toolbar: WPF Ink Canvas Toolbar makes it easy for WPF developers to use pen input in TabletPC or UMPC applications. The WPF InkCanvas control has drawing, e...
    - WS-TMS: WS GISG HTT TMS
    New Releases
    - BatterySaver: Version 1.0: Fixed battery increase/decrease events not firing; fixed memory corruption error; added working set trimming (used very sparingly); fixed poorly rende...
    - Chargify.NET: Chargify.NET 0.65: Added in Transactions, Subscription Re-activation, and finally XML documentation (which has been missing in the previous releases).
    - DbExecutor - linq based database executor: DbExecutor ver.1.0.0.1: rename
    - DotNetNuke Russian Language packs: Russian Language Pack for DotNetNuke 04.09.02: Russian Language Pack for DotNetNuke 04.09.02
    - Encrypted Notes: Encrypted Notes 1.6.3: This is the latest version of Encrypted Notes (1.6.3), with general improvements. It has an installer that will create a directory 'CPascoe' in My ...
    - Invocando WebService e Site HTTP dinamicamente com HTTPWebRequest C#: Código projeto CallSiteHTTP: Code written in C# .NET 2.0 - VS2005. Contains: complete solution (code and executable), configuration XML - Config.xml ...
    - Jitbit WYSWYG BBCode Editor: Main package: Contains the JS file, CSS file and a sample.
    - Live Writer Picasa Plugin: Live Writer Picasa Plugin 1.1.0: Changelog: communication with Picasa Web Albums is done directly via HTTP now (v1.0.0 used Google's GData .NET Libraries); the plugin can search fo...
    - MRDS Services for Phidgets: Phidgets for RDS 2008 R3: First Beta Release. This ZIP file contains a web page called Readme_CodePlex.html that explains how to install the RDS Phidgets services for RDS 200...
    - MSBuild Addin: MsBuildAddin-v1.0.0: Initial version
    - MSBuild Addin: MsBuildAddin-v1.0.0-src.zip: Initial version
    - Outlook.Utility: Outlook.Utility v1: I have used most of the code in previous projects and it seems to be quite stable. Of course the point of open sourcing this is so this project is use...
    - Scrum Dashboard: Scrum Dashboard v3 Alpha 1: Scrum Dashboard v3 is targeting .NET 4, TFS 2010 and the brand new Scrum for Team System v3 process templates. Most of the code has been rewritten ...
    - SharePoint Labs: SPLab4004A-FRA-Level100: This SharePoint Lab will teach you the 4th best practice you should apply when writing code with the SharePoint API. Lab La...
    - SharePoint Labs: SPLab5012A-FRA-Level100: This SharePoint Lab will teach you how to provision a new welcome page (how to change and rename the default.aspx page) on ...
    - Shweet: SharePoint 2010 Team Messaging built with Pex: Shweet Source Code: Although the latest version of Pex and Moles used with this project is not available, we thought it would be useful to provide a download to the source.
    - Syringe: Syringe 1.0: Features: dependency injection on properties of services in container; dependency injection on constructors of services in container; ASP.Net Mvc ...
    - Text to HTML: 0.4.1.0: Version changes: optimized the export code, reducing its size. Changed the export icon. Added a Select menu t...
    - VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 23: Build 23 Fix: Executing "Blame" through the Solution Explorer on a file opens TortoiseMerge rather than TortoiseBlame. Build 22 (beta) New: Visua...
    - WPF Ink Canvas Toolbar: WPF Ink Canvas Toolbar 1.0: First release - included custom colour selection
    Most Popular Projects
    - Rawr
    - WBFS Manager
    - Microsoft SQL Server Product Samples: Database
    - ASP.NET Ajax Library
    - Silverlight Toolkit
    - AJAX Control Toolkit
    - Windows Presentation Foundation (WPF)
    - ASP.NET
    - Microsoft SQL Server Community & Samples
    - Facebook Developer Toolkit
    Most Active Projects
    - Graffiti CMS
    - nopCommerce. Open Source online shop e-commerce solution.
    - Rawr
    - Shweet: SharePoint 2010 Team Messaging built with Pex
    - patterns & practices – Enterprise Library
    - Acadsys
    - AutoPoco
    - Ionics Isapi Rewrite Filter
    - Ncqrs Framework - The CQRS framework for .NET
    - Farseer Physics Engine

    Read the article

  • Mocking the Unmockable: Using Microsoft Moles with Gallio

    - by Thomas Weller
    The usual open-source mocking frameworks (like e.g. Moq or Rhino.Mocks) can mock only interfaces and virtual methods. In contrast to that, Microsoft's Moles framework can 'mock' virtually anything, in that it uses runtime instrumentation to inject callbacks into the MSIL bodies of the moled methods. Therefore, it is possible to detour any .NET method, including non-virtual/static methods in sealed types. This can be extremely helpful when dealing e.g. with code that calls into the .NET framework, some third-party or legacy stuff etc… Some useful collected resources (links to the website, documentation material and some videos) can be found in my toolbox on Delicious under this link: http://delicious.com/thomasweller/toolbox+moles
    A Gallio extension for Moles
    Originally, Moles is a part of Microsoft's Pex framework and thus integrates best with Visual Studio Unit Tests (MSTest). However, the Moles sample download contains some additional assemblies to also support other unit test frameworks. They provide a Moled attribute to ease the usage of mole types with the respective framework (there are extensions for NUnit, xUnit.net and MbUnit v2 included with the samples). As there is no such extension for the Gallio platform, I wrote the few required lines myself – the resulting Gallio.Moles.dll is included with the sample download. With this little assembly in place, it is possible to use Moles with Gallio like that:

        [Test, Moled]
        public void SomeTest()
        {
            ...

    What you can do with it
    Moles can be very helpful if you need to 'mock' something other than a virtual or interface-implementing method. This might be the case when dealing with some third-party component, legacy code, or if you want to 'mock' the .NET framework itself. Generally, you need to announce each moled type that you want to use in a test with the MoledType attribute on assembly level. For example:

        [assembly: MoledType(typeof(System.IO.File))]

    Below are some typical use cases for Moles. For a more detailed overview (incl. naming conventions and instructions on how to create the required moles assemblies), please refer to the reference material above.
    Detouring the .NET framework
    Imagine that you want to test a method similar to the one below, which internally calls some framework method:

        public void ReadFileContent(string fileName)
        {
            this.FileContent = System.IO.File.ReadAllText(fileName);
        }

    Using a mole, you would replace the call to the File.ReadAllText(string) method with a runtime delegate like so:

        [Test, Moled]
        [Description("This 'mocks' the System.IO.File class with a custom delegate.")]
        public void ReadFileContentWithMoles()
        {
            // arrange ('mock' the FileSystem with a delegate)
            System.IO.Moles.MFile.ReadAllTextString = (fname => fname == FileName ? FileContent : "WrongFileName");

            // act
            var testTarget = new TestTarget.TestTarget();
            testTarget.ReadFileContent(FileName);

            // assert
            Assert.AreEqual(FileContent, testTarget.FileContent);
        }

    Detouring static methods and/or classes
    A static method like the one below…

        public static string StaticMethod(int x, int y)
        {
            return string.Format("{0}{1}", x, y);
        }

    … can be 'mocked' with the following:

        [Test, Moled]
        public void StaticMethodWithMoles()
        {
            MStaticClass.StaticMethodInt32Int32 = ((x, y) => "uups");

            var result = StaticClass.StaticMethod(1, 2);

            Assert.AreEqual("uups", result);
        }

    Detouring constructors
    You can do this delegate thing even with a class' constructor. The syntax for this is not all too intuitive, because you have to set up the internal state of the mole, but generally it works like a charm. For example, to replace this c'tor…

        public class ClassWithCtor
        {
            public int Value { get; private set; }

            public ClassWithCtor(int someValue)
            {
                this.Value = someValue;
            }
        }

    … you would do the following:

        [Test, Moled]
        public void ConstructorTestWithMoles()
        {
            MClassWithCtor.ConstructorInt32 =
                ((@class, @value) => new MClassWithCtor(@class) { ValueGet = () => 99 });

            var classWithCtor = new ClassWithCtor(3);

            Assert.AreEqual(99, classWithCtor.Value);
        }

    Detouring abstract base classes
    You can also use this approach to 'mock' abstract base classes of a class that you call in your test. Assume that you have something like that:

        public abstract class AbstractBaseClass
        {
            public virtual string SaySomething()
            {
                return "Hello from base.";
            }
        }

        public class ChildClass : AbstractBaseClass
        {
            public override string SaySomething()
            {
                return string.Format(
                    "Hello from child. Base says: '{0}'",
                    base.SaySomething());
            }
        }

    Then you would set up the child's underlying base class like this:

        [Test, Moled]
        public void AbstractBaseClassTestWithMoles()
        {
            ChildClass child = new ChildClass();
            new MAbstractBaseClass(child)
                {
                    SaySomething = () => "Leave me alone!"
                }
                .InstanceBehavior = MoleBehaviors.Fallthrough;

            var hello = child.SaySomething();

            Assert.AreEqual("Hello from child. Base says: 'Leave me alone!'", hello);
        }

    Setting the mole's behavior to a value of MoleBehaviors.Fallthrough causes the 'original' method to be called if a respective delegate is not provided explicitly – here it causes the ChildClass' override of the SaySomething() method to be called. There are some more possible scenarios where the Moles framework could be of much help (e.g. it's also possible to detour interface implementations like IEnumerable<T> and such…). One other possibility that comes to my mind (because I'm currently dealing with it) is to replace calls from repository classes to the ADO.NET Entity Framework O/R mapper with delegates, to isolate the repository classes from the underlying database, which otherwise would not be possible…
    Usage
    Since Moles relies on runtime instrumentation, mole types must be run under the Pex profiler. This works from inside Visual Studio only if you write your tests with MSTest (Visual Studio Unit Test). While other unit test frameworks generally can be used with Moles, they require the respective tests to be run via the command line, executed through the moles.runner.exe tool. A typical test execution would be similar to this:

        moles.runner.exe <mytests.dll> /runner:<myframework.console.exe> /args:/<myargs>

    So, the moled tests can be run through tools like NCover or a scripting tool like MSBuild (which makes them easy to run in a Continuous Integration environment), but they are somewhat unhandy to run in the usual TDD workflow (which I described in some detail here). To make this a bit more fluent, I wrote a ReSharper live template to generate the respective command line for the test (it is also included in the sample download – moled_cmd.xml). This is just a quick-and-dirty 'solution'; maybe it makes sense to write an extra Gallio adapter plugin (similar to the many others that are already provided) and include it with the Gallio download package, if there's sufficient demand for it. As of now, the only way to run tests with the Moles framework from within Visual Studio is by using them with MSTest. From the command line, anything with a managed console runner can be used (provided that the appropriate extension is in place)… A typical Gallio/Moles command line (as generated by the mentioned R# template) looks like that:

        "%ProgramFiles%\Microsoft Moles\bin\moles.runner.exe" /runner:"%ProgramFiles%\Gallio\bin\Gallio.Echo.exe" "Gallio.Moles.Demo.dll" /args:/r:IsolatedAppDomain /args:/filter:"ExactType:TestFixture and Member:ReadFileContentWithMoles"

    Note: When using the command line with Echo (Gallio's console runner), be sure to always include the IsolatedAppDomain option; otherwise the tests won't use the instrumentation callbacks!
    License issues
    As I already said, the free mocking frameworks can mock only interfaces and virtual methods. If you want to mock other things, you need the Typemock Isolator tool for that, which comes with license costs (although these 'costs' are ridiculously low compared to the value that such a tool can bring to a software project, spending money often is a considerable gateway hurdle in real life...). The Moles framework also is not totally free, but comes with the same license conditions as the (closely related) Pex framework: it is free for academic/non-commercial use only; using it in a 'real' software project requires an MSDN subscription (from VS2010 Pro on).
    The demo solution
    The sample solution (VS 2008) can be downloaded from here. It contains the Gallio.Moles.dll, which provides the Moled attribute described here, the above-mentioned R# template (moled_cmd.xml) and a test fixture containing the use case scenarios described above. To run it, you need the Gallio framework (download) and Microsoft Moles (download) installed in the default locations. Happy testing…

    Read the article

< Previous Page | 22 23 24 25 26 27 28  | Next Page >