Search Results

Search found 424 results on 17 pages for 'keith thompson'.


  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely-used algorithm whose time complexity is worse than that of another known algorithm, but which is the better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be of the form: there are algorithms A and B with O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the universe. Example highlights from the answers: The Simplex algorithm -- worst-case exponential time -- vs. known polynomial-time algorithms for convex optimization problems. A naive median-of-medians algorithm -- worst-case O(N**2) -- vs. a known O(N) algorithm. Backtracking regex engines -- worst-case exponential -- vs. O(N) Thompson NFA-based engines. All these examples exploit worst-case vs. average-case scenarios. Are there examples that do not rely on the difference between the worst case and the average case? Related: The Rise of ``Worse is Better''. (For the purpose of this question, the "Worse is Better" phrase is used in a narrower sense -- namely, algorithmic time complexity -- than in the article.) Python's design philosophy: The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing these large collections (in other words, "large" is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example: it is asymptotically the fastest (as of 2008), yet it is inferior in practice to asymptotically worse algorithms. From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)." Any others?
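
    One classic small-scale illustration of the constant-factor effect (a sketch added for reference, not taken from the answers above): O(N**2) insertion sort routinely beats O(N log N) sorts on very small inputs, which is why common std::sort implementations switch to it below a threshold. The cutoff of 16 here is an arbitrary assumption, not a measured optimum:

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // O(N^2) worst case, but a tiny constant factor and great cache behavior.
        void insertion_sort(std::vector<int>& a) {
            for (std::size_t i = 1; i < a.size(); ++i) {
                int key = a[i];
                std::size_t j = i;
                while (j > 0 && a[j - 1] > key) {
                    a[j] = a[j - 1];
                    --j;
                }
                a[j] = key;
            }
        }

        // Hybrid: the asymptotically "worse" algorithm wins below a small cutoff.
        void hybrid_sort(std::vector<int>& a) {
            const std::size_t kCutoff = 16;  // assumed threshold, for illustration only
            if (a.size() <= kCutoff)
                insertion_sort(a);
            else
                std::sort(a.begin(), a.end());  // typically introsort, O(N log N)
        }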

    Read the article

  • Why isn't my WPF Datagrid showing data?

    - by Edward Tanguay
    This walkthrough says you can create a WPF datagrid in one line but doesn't give a full example. So I created an example using a generic list and connected it to the WPF datagrid, but it doesn't show any data. What do I need to change on the code below to get it to show data in the datagrid? ANSWER: This code works now. XAML:

        <Window x:Class="TestDatagrid345.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:toolkit="http://schemas.microsoft.com/wpf/2008/toolkit"
            xmlns:local="clr-namespace:TestDatagrid345"
            Title="Window1" Height="300" Width="300" Loaded="Window_Loaded">
            <StackPanel>
                <toolkit:DataGrid ItemsSource="{Binding}"/>
            </StackPanel>
        </Window>

    Code behind:

        using System.Collections.Generic;
        using System.Windows;

        namespace TestDatagrid345
        {
            public partial class Window1 : Window
            {
                private List<Customer> _customers = new List<Customer>();
                public List<Customer> Customers { get { return _customers; } }

                public Window1()
                {
                    InitializeComponent();
                }

                private void Window_Loaded(object sender, RoutedEventArgs e)
                {
                    DataContext = Customers;
                    Customers.Add(new Customer { FirstName = "Tom", LastName = "Jones" });
                    Customers.Add(new Customer { FirstName = "Joe", LastName = "Thompson" });
                    Customers.Add(new Customer { FirstName = "Jill", LastName = "Smith" });
                }
            }
        }
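
    Worth noting why this version works: WPF resolves the {Binding} on ItemsSource when the DataGrid first renders, which happens after Window_Loaded returns, so the three rows added in the handler are present by the time the grid draws. A plain List<Customer> raises no collection-change notifications, however, so rows added or removed later would not appear; ObservableCollection<Customer> is the usual choice when the grid must update dynamically.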

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Data Mining

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers

    We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments -- all things that can be anticipated and hence preconceived and implemented inside an application. Read on for more information!

    TM Forum Management World, Nice, France - 18 May 2010

    News Facts
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced its data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.

    Product Details
    Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using:
    - More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity.
    - Embedded OLAP cubes for extremely fast dimensional analysis of business information.
    - Embedded data mining models for sophisticated trending and predictive analysis.
    - Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.
    With Oracle Communications Data Model, CSPs can jump-start the implementation of a communications data warehouse in line with communications-industry standards, including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.

    Supporting Quotes
    "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.
    "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC.
    "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore cost-effective and flexible, solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum.

    Supporting Resources
    Oracle Communications | Oracle Communications Data Model Data Sheet | Oracle Communications Data Model Podcast | Oracle Data Warehousing | Oracle Communications on YouTube | Oracle Communications on Delicious | Oracle Communications on Facebook | Oracle Communications on Twitter | Oracle Communications on LinkedIn | Oracle Database on Twitter | The Data Warehouse Insider Blog

    Read the article

  • 10 Best Programming Podcasts, 2010 Edition

    - by mbcrump
    This list is in no particular order -- just the 10 best programming podcasts that I have found so far.

    Stack Overflow Podcast - Jeff Atwood (of codinghorror.com) and Joel Spolsky (of joelonsoftware.com) discuss the development of their new programming community, StackOverflow.com. [This podcast hasn't been updated in a while, but it's always great to hear more from Jeff Atwood.]
    Hanselminutes - Hanselminutes is a weekly audio talk show with noted web developer and technologist Scott Hanselman, hosted by Carl Franklin. Scott discusses utilities and tools, gives practical how-to advice, and discusses ASP.NET or Windows issues and workarounds. [This podcast has recently started covering random topics like diabetes, plane travel and geek relationship tips. I am not sure whether Scott is trying to move to a more mainstream audience.]
    Herding Code - A weekly discussion featuring K. Scott Allen (odetocode.com), Kevin Dente, Scott Koon (lazycoder.com), and Jon Galloway. [Great all-around podcast that I would recommend to all.]
    Deep Fried Bytes - Deep Fried Bytes is an audio talk show with a Southern flavor hosted by technologists and developers Keith Elder and Chris Woodruff. The show discusses a wide range of topics including application development, operating systems and technology in general. Anything is fair game if it plugs into the wall or takes a battery. [This is one that just keeps getting better.]
    Dot Net Rocks - .NET Rocks! is an Internet audio talk show for Microsoft .NET developers. [One of the first, and usually very high-quality content.]
    Connected Show - A podcast covering new Microsoft technology for the developer community. The show is hosted by Dmitry Lyalin and Peter Laudati. [This and Polymorphic are two of my favorite podcasts -- Dmitry is a great host and I would recommend this to all.]
    Polymorphic Podcast - Object-oriented development, architecture and best practices in .NET. [Craig is an ASP.NET MVP and a great presenter. His podcast is great, and it could only be better if he recorded it more often.]
    ASP.NET Podcast - Wallace B. (Wally) McClure presents interviews and short technical talks on .NET technologies. [Has great information on ASP.NET, of course, as well as iPhone dev.]
    Ruby on Rails Podcast - News and interviews about the Ruby language and the Rails web framework. [Even though I am not a Ruby programmer, I've found this podcast very interesting.]
    Software Engineering Radio - A podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Every ten days, a new episode is published that covers all topics of software engineering. Episodes are either tutorials on a specific topic or an interview with a well-known character from the software engineering world. All SE Radio episodes are original content -- we do not record conferences or talks given in other venues. Each episode comprises two speakers to ensure a lively listening experience. SE Radio is an independent and non-commercial organization. [Another excellent podcast -- I would recommend any programmer add this to his/her drive home.]

    If I have missed something, please feel free to email me and it might make the 2011 list. =)

    Read the article

  • How does the countdown get synchronised with jQuery using the "jquery.countdown.js" plugin?

    - by ricky roy
    I am unable to get the correct answer back from jQuery. I am using jquery.countdown.js (reference site: http://keith-wood.name/countdown.html). Here is my code:

        [WebMethod]
        public static String GetTime()
        {
            DateTime dt = Convert.ToDateTime("April 9, 2010 22:38:10");
            return dt.ToString("dddd, dd MMMM yyyy HH:mm:ss");
        }

    HTML file:

        <script type="text/javascript" src="Scripts/jquery-1.3.2.js"></script>
        <script type="text/javascript" src="Scripts/jquery.countdown.js"></script>
        <script type="text/javascript">
        $(function() {
            var shortly = new Date('April 9, 2010 22:38:10');
            var newTime = new Date('April 9, 2010 22:38:10');
            $('#defaultCountdown').countdown({ until: shortly, onExpiry: liftOff, onTick: watchCountdown, serverSync: serverTime });
            $('#div1').countdown({ until: newTime });
        });

        function serverTime() {
            var time = null;
            $.ajax({
                type: "POST",
                // Page name (in which the method should be called) and method name
                url: "Default.aspx/GetTime",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                // Pass an empty parameter object to the server-side method
                data: "{}",
                async: false,
                success: function(msg) {
                    // Got the response from the server; render it on the client
                    time = new Date(msg.d);
                    alert(time);
                },
                error: function(msg) {
                    time = new Date();
                    alert('1');
                }
            });
            return time;
        }

        function watchCountdown() { }
        function liftOff() { }
        </script>

    Read the article

  • jQuery draggable + droppable: how to snap dropped element to dropped-on element

    - by 10goto10
    I have my screen divided into two DIVs. In the left DIV I have a few 50x50 pixel DIVs; in the right DIV I have an empty grid made of 80x80 LIs. The DIVs on the left are draggable, and once dropped on an LI, they should snap to the center of that LI. Sounds simple, right? I just don't know how to get this done. I tried manipulating the dropped DIV's top and left CSS properties to match those of the LI they're dropped into, but the left and top properties are relative to the left DIV. How can I best have the dropped element snap to the center of the element it's dropped into? That's gotta be simple, right? Edit: I'm using jQuery UI 1.7.2 with jQuery 1.3.2. Edit 2: For whoever else has this problem, this is how I fixed it: I used Keith's solution of removing the dragged element and placing it inside the dropped-on element in the drop callback of the droppable plugin:

        function gallerySnap(droppedOn, droppedElement) {
            $(droppedOn).html('<div class="drop_styles">' + $(droppedElement).html() + '</div>');
            $(droppedElement).remove();
        }

    I don't need the dropped element to be draggable again, but if you do, just bind draggable to it again. For me this method also solved the problem I had with positioning the dropped elements (which would be relative to the left DIV) and scrolling inside the second DIV. (Elements would remain fixed on the page; now they scroll along.) I did play with the snap options to make it look good while dragging, so thanks to karim79 for that suggestion. I probably won't win any Awesome Code prizes with this, so if you see room for improvement, please share!

    Read the article

  • Which language should I pick up: VB.Net or C#

    - by magius
    I'm looking to pick up either C# or VB.Net. I'd done a fair bit of VB6 programming in the past. I'm looking at getting the book Visual Basic .NET or C#, Which to Choose?, but I'm hoping that someone has read it, or used both languages, and can offer advice. Should I just RTFB? Edit: Anders Sandvig raised a valid question: I'm intending to develop ActiveX applications that will be served through IE. Edit: Given that the functionality is pretty close and my favored approach to learning is to "just build it" and solve problems by looking them up on the internet, I've decided that the choice of language will be based on how easy it is to learn. I looked around and found sites like C# Corner that support my approach. Personal note: I wish I could select Seb Nilsson's response as an accepted answer as well. Thanks, guys, for your input! Alright, then! I admit that, theoretically, this topic is subjective; but a quick tally of answers seems to skew votes heavily in C#'s favor. Anyway, I'm really after experiences like the ones Keith is alluding to. I'm hoping he'll return to this topic and drop us a few more gems.

    Read the article

  • Do programmers need a union?

    - by James A. Rosen
    In light of the acrid responses to the intellectual property clause discussed in my previous question, I have to ask: why don't we have a programmers' union? There are many issues we face as employees, and we have very little ability to organize and negotiate. Could we band together with the writers', directors', or musicians' guilds, or are our needs unique? Has anyone ever tried to start one? If so, why did it fail? (Or, alternatively, why have I never heard of it, despite its success?) Later: Keith has my idea basically right. I would also imagine the union being involved in many other topics, including:
    - legal liability for others' use/misuse of our work, especially unintended uses
    - evaluating the quality of computer science and software engineering higher education programs -- unlike many other engineering disciplines, we are not required to be certified on receiving our Bachelor's degrees
    - evangelism and outreach -- especially to elementary school students
    - certification -- not doing it, but working with companies like ISC(2) and others to make certifications meaningful and useful
    - continuing education -- similar to the previous item
    - conferences -- maintain a go-to list of organizers and other resources our members can use
    I would see it less as a traditional trade union, with little emphasis on:
    - pay -- we tend to command fairly good salaries
    - outsourcing and free trade -- most of us tend to be pretty free-market oriented
    - working conditions -- we're the only industry where Aeron chairs are considered anything like "standard"

    Read the article

  • How to make a small flash swf with ComboBox in Actionscript 3?

    - by Sint
    I have a pure ActionScript 3 project, using flash.* libraries, that compiles down to about 6k (using mxmlc). The program handles about 1k shapes, a few sprites, and a sockets connection; it works great (tastes less filling). Now, how would I add a ComboBox control without incurring excessive bloat? More specifically, I would like to keep the size under 100k. So far I have tried:
    - Adobe mx.controls ComboBox example - a simple mxml example compiles to 200+k, both on my main Linux box using mxmlc and in Windows using Flash Builder 4
    - Yahoo Astra - uses the mx libraries underneath (so is it as bloated as Adobe's?), plus it does not contain an exact ComboBox
    - Keith Peters' MinimalComps - seems small, but far from providing ComboBox functionality
    - SPAS (Swing Package for ActionScript) - compiles to 130k, but the alpha version of ComboBox does not let me adjust height
    - asuilib - compiles to 40k; unfortunately this ComboBox does not provide for scrolling items, so if the list does not fit on screen there is no way to scroll to an item
    Now my questions: Is there a way to lower the size of projects importing mx.controls? Maybe there is a way to fix the SPAS or asuilib ComboBoxes? Perhaps there are some other libraries which provide a ComboBox (or DropList)?

    Read the article

  • How to access "Custom" or non-System TFS workitem fields using PowerShell?

    - by DaBozUK
    When using PowerShell to extract information from TFS, I find that I can get at the standard fields but not "Custom" fields. I'm not sure "custom" is the correct term, but, for example, if I look at the Process Editor in VS2008 and edit the Work Item type, there are fields such as those listed below, with Name, Type and RefName:

        Title       String   System.Title
        State       String   System.State
        Rev         Integer  System.Rev
        Changed By  String   System.ChangedBy

    I can access these with Get-TfsItemHistory:

        Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R | Select -exp WorkItems | Format-Table Title, State, Rev, ChangedBy -Auto

    So far so good. However, there are also some other fields in the WorkItem type, which I'm calling "Custom" or non-System fields, e.g.:

        Activated By  String  Microsoft.VSTS.Common.ActivatedBy
        Resolved By   String  Microsoft.VSTS.Common.ResolvedBy

    And the following command does not retrieve the data, just spaces:

        Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R | Select -exp WorkItems | Format-Table ActivatedBy, ResolvedBy -Auto

    I've also tried the names in quotes, and the fully qualified refname, but no luck. How do you access these "non-System" fields? Thanks Boz

    UPDATE: From Keith's answer I can get the fields I need:

        Get-TfsItemHistory "$/Hermes/Main" -Version "D01/12/10~" -Recurse `
          | Select ChangeSetId, Comment -exp WorkItems `
          | Select ChangeSetId, Comment, @{n='WI-Id'; e={$_.Id}}, Title -exp Fields `
          | Where {$_.ReferenceName -eq 'Microsoft.VSTS.Common.ResolvedBy'} `
          | Format-Table ChangesetId, Comment, WI-Id, Title, @{n='Resolved By'; e={$_.Value}} -Auto

    Read the article

  • Newbie question: When to use extern "C" { //code } ?

    - by Russel
    Hello, maybe I'm not understanding the differences between C and C++, but when and why do we need to use extern "C" { ? Apparently it's a "linkage convention"? I read about it briefly and noticed that all the .h header files included with MSVS surround their code with it. What type of code exactly is "C code" and NOT "C++ code"? I thought C++ included all C code? I'm guessing that this is not the case, that C++ is different, and that standard features/functions exist in one or the other but not both (i.e., printf is C and cout is C++), but that C++ is backwards compatible through the extern "C" declaration. Is this correct? My next question depends on the answer to the first, but I'll ask it here anyway: since MSVS header files that are written in C are surrounded by extern "C" { ... }, when would you ever need to use this yourself in your own code? If your code is C code and you are trying to compile it with a C++ compiler, shouldn't it work without problems, because all the standard .h files you include will already have the extern "C" declarations in them for the C++ compiler? Do you have to use this when compiling in C++ but linking to already-built C libraries, or something? Please help clarify this for me... Thanks! --Keith
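
    For reference, the usual pattern looks like this (a generic sketch with a hypothetical library name, not an excerpt from the MSVS headers). C++ mangles function names to support overloading; C does not. Wrapping declarations in extern "C" tells the C++ compiler to use unmangled C linkage, so the linker can find the symbols in an already-built C library:

        /* mylib.h - a C header usable from both C and C++ (hypothetical name) */
        #ifndef MYLIB_H
        #define MYLIB_H

        #ifdef __cplusplus
        extern "C" {          /* seen only by C++ compilers */
        #endif

        int mylib_add(int a, int b);   /* declared with C linkage */

        #ifdef __cplusplus
        }
        #endif

        #endif /* MYLIB_H */

    This is why you rarely need to write it when consuming system headers (they already contain the guard), but you do need it when exposing your own C library to C++ callers, or when declaring C functions that no guarded header declares for you.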

    Read the article

  • Microsecond-resolution timestamps on Windows

    - by Nikhil
    How do I get microsecond-resolution timestamps on Windows? I am looking for something better than QueryPerformanceCounter and QueryPerformanceFrequency (these can only give you an elapsed time since boot, and are not necessarily accurate if they are called on different threads -- i.e., QueryPerformanceCounter may return different results on different CPUs. There are also some processors that adjust their frequency for power saving, which apparently isn't always reflected in their QueryPerformanceFrequency result.) There is this, http://msdn.microsoft.com/en-us/magazine/cc163996.aspx, but it does not seem to be solid. This looks great, but it's not available for download any more: http://www.ibm.com/developerworks/library/i-seconds/. This is another resource, http://www.lochan.org/2005/keith-cl/useful/win32time.html, but it requires a number of steps -- running a helper program plus some init stuff -- and I am not sure if it works on multiple CPUs. I also looked at the Wikipedia article on the subject, which is interesting but not that useful: http://en.wikipedia.org/wiki/Time_Stamp_Counter. If the answer is just "do this with BSD or Linux, it's a lot easier", that's fine, but I would like to confirm this and get some explanation as to why it is so hard in Windows and so easy in Linux and BSD. It's the same damn hardware...
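
    For comparison, the QueryPerformanceCounter approach the question is trying to improve on takes only a few lines (a minimal sketch; the multi-CPU and variable-frequency caveats above applied mainly to older hardware):

        #include <windows.h>
        #include <cstdint>
        #include <cstdio>

        // Microseconds elapsed since boot (a monotonic interval clock, not wall time).
        std::uint64_t micros_since_boot() {
            LARGE_INTEGER freq, now;
            QueryPerformanceFrequency(&freq);  // ticks per second
            QueryPerformanceCounter(&now);     // current tick count
            const std::uint64_t ticks = static_cast<std::uint64_t>(now.QuadPart);
            const std::uint64_t f = static_cast<std::uint64_t>(freq.QuadPart);
            // Split the division to avoid 64-bit overflow for large tick counts.
            return (ticks / f) * 1000000ULL + (ticks % f) * 1000000ULL / f;
        }

        int main() {
            std::uint64_t t0 = micros_since_boot();
            Sleep(10);  // stand-in for the work being timed
            std::uint64_t t1 = micros_since_boot();
            std::printf("elapsed: %llu us\n", static_cast<unsigned long long>(t1 - t0));
        }

    To get microsecond-granularity wall-clock timestamps rather than intervals, one common trick is to record GetSystemTimeAsFileTime once at startup and add the QueryPerformanceCounter delta to it.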

    Read the article

  • Unable to get a correct time when calling serverTime using jquery.countdown.js + ASP.NET

    - by user312891
    When I call the function below, I am unable to get a correct answer. Both shortly and newTime hold the same time; one comes from the client side, the other is synced with the server (see http://keith-wood.name/countdown.html). I am waiting for your response. Thanks.

        $(function() {
            var shortly = new Date('April 9, 2010 20:38:10');
            var newTime = new Date('April 9, 2010 20:38:10');
            $('#defaultCountdown').countdown({ until: shortly, onExpiry: liftOff, onTick: watchCountdown, serverSync: serverTime });
            $('#div1').countdown({ until: newTime });
        });

        function serverTime() {
            var time = null;
            $.ajax({
                type: "POST",
                // Page name (in which the method should be called) and method name
                url: "Default.aspx/GetTime",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                // Pass an empty parameter object to the server-side method
                data: "{}",
                async: false,
                success: function(msg) {
                    // Got the response from the server; render it on the client
                    time = new Date(msg.d);
                    alert(time);
                },
                error: function(msg) {
                    time = new Date();
                    alert('1');
                }
            });
            return time;
        }

    Read the article

  • Oracle Fusion Applications User Experience Design Patterns: Feeling the Love after Launch

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience

    In the first video by the Oracle Applications User Experience team on the Oracle Partner Network, Vice President Jeremy Ashley said that Oracle is looking to expand the ecosystem of support for Oracle's applications customers as they begin to assess their investment in, and adoption of, Oracle Fusion Applications. Oracle has made a massive investment to maintain the benefits of the Fusion Applications user experience. This summer, the Applications User Experience team released the Oracle Fusion Applications user experience design patterns. Design patterns help create consistent experiences across devices.

    The launch has been very well received:

    Angelo Santagata, Senior Principal Technologist and Fusion Middleware evangelist for Oracle, wrote this to the system integrator community: "The web site is the result of many years of Oracle R&D into user interface design for Fusion Applications and features a really cool web app which allows you to visualise the UI components in action."

    Grant Ronald, Director of Product Management, Application Development Framework (ADF), said: "It's a science I don't understand, but now I don't have to ... Now you can learn from the UX experience of Fusion Applications."

    Frank Nimphius, Senior Principal Product Manager, Oracle (ADF), wrote about the launch of the design patterns for the ADF Code Corner, and Jürgen Kress, Senior Manager EMEA Alliances & Channels for Fusion Middleware and Service Oriented Architecture (SOA), shared the news with his Partner Community.

    Oracle Twitter followers also helped spread the message about the design patterns launch:
    @bex - Brian Huff, founder and Chief Software Architect for Bezzotech, and Oracle ACE Director: "Nifty! The Oracle Fusion UX team just released new ADF design patterns."
    @maiko_rocha - Maiko Rocha, Oracle Consulting Solutions Architect and Oracle FMW engineer: "Haven't seen any other vendor offer such comprehensive UX Design Patterns catalog for free!"
    @zirous_chad - Chad Thompson, Senior Solutions Architect for Zirous, Inc. and ADF Developer: "Wow - @ultan and company did a great job with the Fusion UX Patterns"

    What is a user experience design pattern?
    A user experience design pattern is a re-usable, usability-tested functional blueprint for a particular user experience. Some examples are guided processes, shopping carts, and search and search results. Ultan O'Broin discusses the top design patterns every developer should know. The patterns that were just released are based on thousands of hours of end-user field studies, state-of-the-art user interface assessments, and usability testing. To be clear, these are functional design patterns, not the technical design patterns that developers may be used to working with. Because we know there is a gap, we are putting together some training that will help close that gap.

    Who should care?
    This is an offering targeted primarily at Application Development Framework (ADF) developers. If you are faced with the following questions regarding Fusion Applications, you will want to know and learn more:
    - How do I build something that looks like Fusion Applications?
    - How do I build a next-generation application?
    - How do I extend a Fusion Application and maintain the user experience?
    - I don't want to re-invent the wheel on the user interface, so where do I start?
    - I need to build something that will eventually co-exist with Fusion Applications. How do I do that?
    These questions are relevant to partners with an ADF competency, individual practitioners or small consultancies with an ADF specialization, and customers who are trying to shift their IT staff over to supporting Fusion Applications.

    Where can you find out more?

    Online: Our Fusion user experience design patterns maven is Ultan O'Broin. The Oracle Partner Network is helping our team bring this first e-seminar to you in order to go into more detail on what this means and how to take advantage of it:
    Webinar: Build a Better User Experience with Oracle: Oracle Fusion Applications Functional Design Patterns
    Sept 20, 2012, 10:30am-11:30am Pacific
    Dial-in: 877-664-9137 / Passcode 102546 (International: 706-634-9619, http://www.intercall.com/national/oracleuniversity/gdnam.html)
    Access the live event via web conference at http://ouweb.webex.com and enter session number 598036234.

    At a user group event: The Fusion User Experience Advocates (FXA) are also going to be getting some deep-dive training on this content and can share it with local user groups.

    At OpenWorld: [Photos: Ultan O'Broin and Chris Muir] If you will be at OpenWorld this year, our own Ultan O'Broin will be visiting the ADF demopod to say hello, thanks to Shay Shmeltzer, Senior Group Manager for ADF outbound communication, and at the OTN Lounge:
    - Oracle JDeveloper and Oracle ADF, Moscone South, Right - S-207: Monday 10-10:45, Tuesday 2:15-2:45, Wednesday 2:15-3:30
    - "ADF Meet and Greet", OTN Lounge, Wednesday 4:30
    And I cannot talk about OpenWorld and ADF without mentioning Chris Muir's ADF EMG event, the Year After the Year Of the ADF Developer, on Sunday, Sept 30 of OpenWorld. Chris has played host to Ultan and the Applications user experience message for his online community and is now a seasoned UX expert.

    Expect to see additional announcements about expanded training on similar topics in the future.

    Read the article

  • Learn About Oracle’s Strategy for a Simple, Modern User Experience at OpenWorld 2012

    - by Applications User Experience
    By Kathy Miedema, Oracle Applications User Experience

    If you're interested in what the best possible user experience looks like, you'll want to hear what Oracle's Applications User Experience team is planning for OpenWorld 2012, Sept. 30-Oct. 4 in San Francisco. This year, we will talk Fusion, Fusion, Fusion. We were among the first to show Oracle Fusion Applications in the last couple of years, and we'll be showing it again this year so you can see what Oracle is planning for the next generation of enterprise applications. Attend our sessions to learn more about the user experience strategy in which Oracle is investing.

    "Simplicity is the driving force behind the demos that we are unveiling now, which you can see at OpenWorld. We want to create opportunities for productivity and efficiency, and deliver enterprise data across devices to help you do your work in the way best suited to your job and needs," said Jeremy Ashley, Vice President, Oracle Applications User Experience.

    You can see the new look for Fusion Applications at a general session led by Ashley at 3:30 p.m. on Wednesday, Oct. 3. You'll also have the chance to learn more about tailoring in Oracle Fusion Applications, and to gain a new understanding of the investment in the user experience behind Fusion Applications, at our sessions (see session information below).

    [Photo: Inside the Oracle Applications User Experience team's on-site lab at Oracle OpenWorld 2011.]

    Head to the demogrounds to see new demos from the Applications User Experience team, including the new look for Fusion Applications and what we're building for mobile platforms. Take a spin on our eye tracker, a very cool tool that we use to research the usability of a particular design. Visit the Usable Apps OpenWorld page to find out where our demopods will be located. We are also recruiting participants for our on-site lab, in which we gather feedback on new user experience designs, and taking reservations for a charter bus that will bring you to Oracle headquarters for a lab tour Thursday, Oct. 4, or Friday, Oct. 5. Tours leave at 10 a.m. and 1:45 p.m. from the Moscone Center in San Francisco. You'll see more of our newest designs at the lab tour, and some of our research tools in action. Can't participate in a customer feedback session or take a lab tour this time around? Visit Usable Apps to participate or book a tour another time.

    For more information on any OpenWorld sessions, check the content catalog -- also available at www.oracle.com/openworld. For information on Applications User Experience (Apps UX) sessions and activities, go to the Usable Apps OpenWorld page.

    APPS UX OPENWORLD SESSIONS

    Oracle's Roadmap to a Simple, Modern User Experience
    Presenter: Jeremy Ashley, Vice President Applications User Experience, Oracle; with Debra Lilley, Fujitsu Consulting; Basheer Khan, Innowave; and Edward Roske, InterRel
    Session ID: CON9467 | Wednesday, Oct. 3, 3:30 - 4:30 p.m. | Moscone West - 3002/3004
    [Photo: Jeremy Ashley]

    Oracle Fusion Applications: Transforming Insight into Action
    Presenters: Killian Evers and Kristin Desmond, Oracle
    Session ID: CON8718 | Thursday, Oct. 4, 11:15 a.m. - 12:15 p.m. | Moscone West - 2008

    "FRIENDS OF UX" OPENWORLD SESSIONS

    Sessions by Oracle Usability Advisory Board (OUAB) members:

    Advances in Oracle Enterprise Governance, Risk, and Compliance Manager
    Presenters: Koen Delaure, KPMG Advisory NV, Oracle Usability Advisory Board member; Russell Stohr, Oracle
    Session ID: CON9389 | Tuesday, Oct. 2, 1:15 - 2:15 p.m. | Palace Hotel - Concert

    Optimize Oracle E-Business Suite Procure-to-Pay: Cut Inefficiencies/Fraud with Oracle GRC Apps
    Presenters: Koen Delaure, KPMG Advisory NV, and Solveig Wagner, Seadrill Management AS, both Oracle Usability Advisory Board members; and Swarnali Bag, Oracle
    Session ID: CON9401 | Monday, Oct. 1, 12:15 - 1:15 p.m. | InterContinental - Sutter

    Showcase of JD Edwards EnterpriseOne Mobility
    Presenters: Jon Wells, Westmoreland Coal Co., Oracle Usability Advisory Board member; Rob Mills and Liz Davson, Town of Oakville; Keith Sholes and Louise Farner, Oracle
    Session ID: CON9123 | Tuesday, Oct. 2, 1:15 - 2:15 p.m. | InterContinental - Grand Ballroom B

    Sessions by the Fusion User Experience Advocates (FXA):

    Usability and Features of Oracle Fusion Applications, Built upon Oracle Fusion Middleware
    Presenters: Debra Lilley, Fujitsu Consulting and Oracle Usability Advisory Board member; John King, King Training Resources
    Session ID: UGF10371 | Sunday, Sept. 30, 11 a.m. - 11:45 a.m. | Moscone West - 2010

    Ten Things to Love About Oracle Fusion Project Portfolio Management
    Presenter: Floyd Teter, EiS Technologies
    Session ID: CON6021 | Tuesday, Oct. 2, 10:15 - 11:15 a.m. | Moscone West - 2003

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX, written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage.

    At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store, you would choose between the few commercial RDBMS solutions or writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

    Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage; new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. They were designed to address massive amounts of data, millions of requests per second, and scale-out across multiple systems.

    The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web storefront didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins. They set about using relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations.

    The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage - a similar type of storage to Berkeley DB's, except that it is based on a hash table which must reside entirely in memory rather than a btree which can live in memory or on disk.

    Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - and then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across the nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not.

    So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases: when you boil them down to a single node, you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really.

    Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
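
    For readers who have never touched the library this post is about, this is roughly what "fast, reliable local key/value storage" looks like through the classic Berkeley DB C API (a minimal sketch; error handling mostly elided, and the file name is arbitrary):

        #include <db.h>      /* Berkeley DB C API */
        #include <cstdio>
        #include <cstring>

        int main() {
            DB *dbp = nullptr;
            if (db_create(&dbp, nullptr, 0) != 0) return 1;
            /* DB_BTREE keeps keys sorted; DB_HASH would give DBM-style hashing. */
            if (dbp->open(dbp, nullptr, "demo.db", nullptr, DB_BTREE, DB_CREATE, 0664) != 0)
                return 1;

            DBT key, val;
            std::memset(&key, 0, sizeof key);
            std::memset(&val, 0, sizeof val);
            key.data = (void *)"greeting"; key.size = sizeof "greeting";
            val.data = (void *)"hello";    val.size = sizeof "hello";
            dbp->put(dbp, nullptr, &key, &val, 0);            /* store */

            DBT out;
            std::memset(&out, 0, sizeof out);
            if (dbp->get(dbp, nullptr, &key, &out, 0) == 0)   /* fetch */
                std::printf("%s -> %s\n", (char *)key.data, (char *)out.data);

            dbp->close(dbp, 0);
            return 0;
        }

    Everything a NoSQL system layers on top -- partitioning, replication, a network API -- ultimately bottoms out in calls like these on each node.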

    Read the article

  • Big Data's Killer App...

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog, and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me, to some extent, to write this post.

    What is Big Data anyway?
    The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle, and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

    NoSQL and MapReduce
    Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently; others can be answered with Perl or Tcl, others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MR will be the one and only. Also, a language, while important, does not in itself deliver business value. So while cool, it is not a killer app...

    Vertical Behavioral Analytics
    This is the killer app! And you are now thinking: "What does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after business intelligence (analytics 1.0?) or data mining (analytics 2.0?); I'm after something more interesting that you can only do after collecting large volumes of specific data. That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples are emerging, however, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some verticals will have both retail and service industry analytics in place (your corner coffee store, for example). But the gist of it all is that analytics that can predict customer behavior for a specific, focused group of people in a specific industry is what makes Big Data interesting.

    Building a Vertical Behavioral Analysis System
    Well, that is going to be interesting. I have not seen much going on in that space, and if I had to offer some criticism of the Cloud Connect conference it would be the lack of concrete use cases on big data. The telco example, while a step into the vertical behavioral part, is not really on big data. It used a sample of data from the customer's data warehouse. One thing I do think - and this is where I think parts of the NoSQL stuff come from - is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, btw, does not mean "no data movement at all"; what it means is that you will do this in the data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move them into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100 TB to 500 TB of distilled data...

    Comments?
    As it is still early days with these systems, I'm very interested in seeing reactions and thoughts on some of these ideas...

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
    Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspx

    Didn't make it to TechEd this year? Don't worry! This month, we'll be releasing a new article series that highlights the best of the TechEd announcements and technical information for IT pros. Today's article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services that makes it more cost-effective to spin VMs up and down on demand on the Windows Azure cloud platform.

    NEW! VMs that are shut down from the Windows Azure Management Portal will no longer continue to accumulate compute charges while stopped! Before this enhancement was available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future. And this meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges.

    In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs.

    Which scenarios does the new enhancement best fit?
    Being able to easily shut down VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as:
    - On-demand dev/test/lab environments - Freely start and stop lab VMs so that they are only accumulating compute charges when being actively used.
    - "Bursting" load-balanced web applications - Provision a number of load-balanced VMs, but keep only the minimum number of VMs running to support "normal" loads. Easily start up the remaining VMs only when needed to support peak loads.
    - Disaster recovery - Start up "cold" VMs when needed to recover from disaster scenarios.
    BUT ... there is a consideration to keep in mind when using the Windows Azure Management Portal to shut down VMs: although performing a VM shutdown via the Windows Azure Management Portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from the fabric resources to which it was previously assigned. These fabric resources include compute resources, such as virtual CPU cores and memory, as well as network resources, such as IP addresses. This means that when the VM is later started after being shut down from the portal, the VM could be assigned a different IP address or placed on a different compute node within the fabric.

    In some cases, you may want to shut down VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state. Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity. Good news - you can still revert to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below. Let's walk through each approach for performing a VM shutdown action on Windows Azure so that we can understand the benefits and considerations of each...

    How many ways can I shut down a VM?
    In Windows Azure Infrastructure Services, there are three general ways to safely shut down VMs:
    1. Shutdown the VM via the Windows Azure Management Portal
    2. Shutdown the guest operating system inside the VM
    3. Stop the VM via Windows PowerShell using the Windows Azure PowerShell module
    Although each of these options performs a safe shutdown of the guest operating system and the VM itself, each option handles the VM shutdown end state differently.

    Shutdown VM via Windows Azure Management Portal
    When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shut down and "deallocated" from fabric resources.

    [Figure: Shutdown button on the Virtual Machines page in the Windows Azure Management Portal]

    When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped (Deallocated)" status, as shown in the figure below.

    [Figure: Virtual machine in a "Stopped (Deallocated)" status]

    "Deallocated" means that the VM configuration is no longer actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to accumulate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address (called a "Dynamic IP" or "DIP" in Windows Azure) the next time it is started.

    TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs. Virtual networks permit you to assign a specific IP address space for use with VMs that are assigned to that virtual network. As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using.

    What about consistency of external IP addresses? Great question! External IP addresses (called "Virtual IPs" or "VIPs" in Windows Azure) are associated with the cloud service in which one or more Windows Azure VMs are running. As long as at least one VM inside a cloud service remains in a "Running" state, the VIP assigned to the cloud service will be preserved. If all VMs inside a cloud service are in a "Stopped (Deallocated)" status, then the cloud service may receive a different VIP when the VMs are next restarted.

    TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service.

    Shutdown Guest Operating System inside the VM
    When performing a guest OS shutdown or restart (i.e., a shutdown or restart operation initiated from the guest OS running inside the VM), the VM configuration will not be deallocated from fabric resources. In the figure below, the VM has been shut down from within the guest OS and is shown with a "Stopped" VM status rather than the "Stopped (Deallocated)" VM status that was shown in the previous figure. Note that it may take a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than going through an Azure management endpoint.

    [Figure: Virtual machine in a "Stopped" status]

    VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.
    However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future.

    Stop VM via Windows PowerShell
    In the latest version of the Windows Azure PowerShell module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell:
    - When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped (Deallocated)" status, just like the end result when a VM shutdown operation is performed via the Windows Azure Management Portal.
    - When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is, the VM will be left in a "Stopped" status, just like the end result when performing a guest OS shutdown operation.
    So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs, on a case-by-case basis.

    TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell module. So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality.

    Want to Learn More about Windows Azure Infrastructure Services?
    To learn more about Windows Azure Infrastructure Services, be sure to check out these additional FREE resources:
    - Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!
    - Build some cool scenarios! Check out our list of over 20+ step-by-step lab guides based on key scenarios that IT pros are implementing on Windows Azure Infrastructure Services TODAY!
    Looking forward to seeing you in the cloud! - Keith

    Build Your Lab! Download Windows Server 2012. Don't have a lab? Build your lab in the cloud with Windows Azure Virtual Machines. Want to get certified? Join our Windows Server 2012 "Early Experts" study group.

    Read the article

  • CodePlex Daily Summary for Monday, March 15, 2010

    CodePlex Daily Summary for Monday, March 15, 2010

    New Projects

    AT Accounts: AT Accounts helps developers to integrate accounting functionality into their applications. It has both a WPF user interface and Silverlight.
    Child page list (for dnn4/5): A free module which can display a sub-page list for a selected tab. It is template based and supports options like Recursive/Child tab prefix/link...
    dashCommerce: dashCommerce is the leading ASP.NET e-commerce platform.
    Fire Utilities: My development utilities and base classes: New Zealand Bank Account Validator.
    FlyCatch (Bugtracking System): A simple web-based bugtracking system.
    fracback: Fractal feedback concepts, based on video feedback.
    ftc3650: Code for FTC 3650.
    Google AJAX Search Services for jQuery: This plug-in encapsulates part of the Google AJAX Search API to streamline the process of Google Search integration.
    Little Black Book DB: This is the database for the following projects: SQL Azure PHP Connection, SQL Azure Ruby Connection, SQL Azure Python Connection, SQL Azure .NE...
    MediaCommMVC: MediaCommMVC is a community platform focusing on photos, videos and discussions. It's based on ASP.NET MVC and uses (fluent) NHibernate, jQuery an...
    Miracle OS: The Miracle OS is an OS from Fox. We work on it, but it isn't ready. Do you want to help us? Please send a mail to victor@fox.fi.st
    Multiwfn: (1) Plotting various graphs (filled color/contour/relief map...); (2) generating cube files; (3) manipulating and analyzing wavefunctions. Supporting lots of proper...
    MySpace DataRelay: Data Relay is the foundation of MySpace's middle tier. At its heart, it is a messaging system for relaying information both between clients and ser...
    NinjaCMS: Ninja CMS is an ASP.NET based content management system which provides a designer-friendly, developer-friendly interface to work with. It's flexibl...
    open gaze and mouse analyzer: Ogama allows recording and analyzing eye- and mouse-tracking data from slideshow eyetracking experiments in parallel. It's developed in C#.NET and ...
    Özkasoft.Net | E-Commerce: Özkasoft's e-commerce project.
    ProfiCV: Profi CV.
    pyTarget: Implements a powerful iSCSI target in Python, easily usable under most popular systems. It also includes the following features: multi-target, mult...
    SharePoint Platform Extensions: SharePoint Platform Extensions by Espora.
    Sorting Algorithm Visualization: Displays Bead Sort, Binary Tree Sort, Bubble Sort, Bucket Sort, Cocktail Sort, Counting Sort, Gnome Sort, In Place ...
    Specify: A framework for creating executable specifications in .NET.
    Spell Corrector: A spell corrector that uses a Bayes algorithm and a BK (Burkhard-Keller) tree.
    SQL Azure Ruby Connection: This is a demo to show how to connect to SQL Azure with Ruby on Rails.
    uManage - AD Self-Service Portal: uManage is an Active Directory self-service portal as well as a help desk web application designed for use on intranet systems. It allows users to u...
    Winforms Rounded Group Box Control: Rounded Group Box - a grouping control with rounded corners, gradients, and drop shadow.
    Wizard Engine: Host-application-agnostic wizard engine platform that allows you to fluently define complex conditional flows and provides means for execution of ...
    WS-Transfer based File Upload: WS-Transfer based upload of large files in multiple parts.
    XAMLStylePad: XAMLStylePad is a simple-to-use styles and templates XAML editor. It is designed for comfortable coding in XAML with real-time preview result on aut...
    Your Twitt Engine: This is an application for all people whose monitor is watched at their workplace by an employer or boss. With this app...

    New Releases

    AmiBroker Plug-ins with C#. A non-official AmiBroker Plug-in SDK: AmiBroker Plug-in SDK v0.0.5: Removed dependency on .NET 4.0; now it works fine with .NET 2.0.
    BeerMath.net: 0.1: Version 0.1. Initial set of calculations supported: IBUs, Color, ABV/ABW.
    Child page list (for dnn4/5): Child Page List 2.6: Source code is also included in the module package.
    dashCommerce: dashCommerce Releases: You can download both Source and WebReady packages at http://www.dashcommerce.org. If you wish to submit patches, then use the Source Code tab her...
    ExcelDna: ExcelDna Version 0.23: ExcelDna Version 0.23, 2010/03/14 - Packing and other features. This release adds a number of features to ExcelDna: Add ExplicitExports attribute to ...
    Family Tree Analyzer: Version 1.0.7.1: Version 1.0.7.0: Update census form to show family totals; fix England and Wales Lost Cousins reports to be England OR Wales; problems with Gedcom in...
    Foursquare BlogEngine Widget: foursquare widget for BlogEngine.NET Version 0.2: To see the changes which have been made, visit http://philippkueng.ch/post/Foursquare-BlogEngineNET-Widget-Version-02.aspx For installation instruc...
    GLB Virtual Player Builder: 0.4.0 Official Archetypes Release: Updated for new archetypes. The builder still includes the old player formats, and you can still import your old players' builds. Please PM me an...
    Home Access Plus+: v3.1.4.0: Version 3.1.3.1 release change log: Added breadcrumbs to My Computer. File changes: ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdb ~/images/arro...
    Little Black Book DB: Little Black Book R1: This is the first release of the Little Black Book presentation I presented at Confoo. I decided to package the database along with the Windows Az...
    mite.net - .NET API for mite: Version 1.2.1: Added support for budget type; modified TimerMapper to return timers; fixed encoding issue in XML conversion.
    Multiwfn: multiwfn1.0: multiwfn1.0
    Multiwfn: multiwfn1.0_source: multiwfn1.0_source
    Multiwfn: multiwfn1.1: multiwfn1.1
    Multiwfn: multiwfn1.1_source: multiwfn1.1_source
    Multiwfn: multiwfn1.2: 1.2, 2010-FEB-9: Added support for 10f-type orbitals. Added support for unrestricted post-HF wavefunctions for calculating spin density. Added support for directly reading Gaussian 03/09 fch files so NBO orbitals can be viewed; see readme example 4.10. When plotting plane maps, a plane can now be defined by entering the coordinates of three points, and the plane's origin and translation vec...
    Multiwfn: multiwfn1.2_source: Includes all the files needed for compilation in CVF 6.5.
    PowerShell Community Extensions: 2.0 Beta 2: Release notes: This is a pretty close to final release. We have eliminated all of the names that ran afoul of the module loading mechanism, which me...
    pyTarget: pyTarget.binary-for-windows-x86.rar: pyTarget.binary-for-windows-x86.rar
    pyTarget: pyTarget.src.tar.bz2: pyTarget.src.tar.bz2
    RedBulb for XNA Framework: RedBulbConsole (Console, Menu and TrackHUD Sample): http://bayimg.com/image/jalhmaacd.jpg
    Scrum Sprint Monitor: 1.0.0.45262 (.NET 4.0 RC): Tested against TFS 2010 RC. For the .NET 3.5 SP1 platform, use the .NET 3.5 SP1 download. What is new in this release? Major performance increase ...
    sELedit: sELedit v1.1: Removed: Clone and Delete button. Added: context menu to item list; Clone and Delete button on the context menu; Export / Import Item ...
    Sorting Algorithm Visualization: Beta 1: Sorting Algorithm Visualization.
    Specify: Version 1.0: Version 1.0.
    Spell Corrector: Spell Corrector 0.1: A basic version that supports basic functionality.
    Spell Corrector: Spell Corrector 0.1 Source Code: Source code of version 0.1.
    Spiral Architecture Driven Development (SADD): SADD v.0.9: Pre-final release with the NEW materials, now all in English! The final release is coming soon, after the guest column for the SADD publication in MS Ar...
    Spiral Architecture Driven Development (SADD) for Russian: SADD v.0.9: Pre-final release with the NEW materials, now all in English! The final release is coming soon, after the guest column for the SADD publication in MS Ar...
    SQL Azure Ruby Connection: Little Black Book Ruby R1: This is the Ruby demo that I demonstrated at Confoo. Special thanks to Tony Thompson for putting this demo together. To check out Tony's portfolio ...
    The Scrum Factory: The Scrum Factory Server - V1a: This is the newest version of the server. Some minor bugs from version v1 were fixed, and some slight changes were made to some database views.
    twNowplaying: twNowplaying 1.0.0.4: Please note that the user has to press the Twitter logo to log in the first time the application is started.
    uManage - AD Self-Service Portal: uManage - v1.0 (.NET 4.0 RC): Initial release of uManage. NOTE: designed for ASP.NET and .NET 4.0 RC ONLY! This is the initial release of uManage and covers the first phase of ...
    Virtu: Virtu 0.8: Source requirements: .NET Framework 3.5 with Service Pack 1; Visual Studio 2008 with Service Pack 1, or Visual C# 2008 Express Edition with Service Pa...
    Visual Studio DSite: Speech Synthesizer (Text to Speech) in Visual C++: A very simple text-to-speech program written in Visual C++ 2008.
    White Tiger: 0.0.4.0: Now you can disable the file security checks; WinForms applications created to manage tables.
    Winforms Rounded Group Box Control: Release 1.0: To use this control simply add the class to your project and compile it. It will then show up in the project's components section in the toolbox. ...
    WS-Transfer based File Upload: 0.5: Implements the binary file transfer mechanism only.
    XsltDb - DotNetNuke XSLT module: 01.00.89: Super modules configuration names. 16767 - fixed more bugs...
    Yakiimo3D: DirectX11 Rheinhard Tonemapping Source and Binary: DirectX11 Rheinhard tonemapping source and binary.
    Your Twitt Engine: test: Feel free to try it with your Twitter account.

    Most Popular Projects

    MetaSharp
    WBFS Manager
    Rawr
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    ASP.NET Ajax Library
    Windows Presentation Foundation (WPF)
    ASP.NET
    LiveUpload to Facebook

    Most Active Projects

    LINQ to Twitter
    Rawr
    N2 CMS
    BlogEngine.NET
    patterns & practices – Enterprise Library
    SharePoint Team-Mailer
    jQuery Library for SharePoint Web Services
    Caliburn: An Application Framework for WPF and Silverlight
    Farseer Physics Engine
    Calcium: A modular application toolset leveraging Prism

    Read the article

  • How can I read the verbose output from a Cmdlet in C# using Exchange PowerShell

    - by mrkeith
    Environment: Exchange 2007 SP3 (2003 SP2 mixed mode), Visual Studio 2008, .NET 3.5. Hello, I'm working with the Exchange PowerShell Move-Mailbox cmdlet and have noted that when I run it from the Exchange Management Shell (using the -Verbose switch) there is a ton of real-time information provided. To provide a little context, I'm attempting to create a UI application that moves mailboxes similarly to the Exchange Management Console, but I desire to support an input file and specific server/database destinations for each entry (and threading). Here's roughly what I have at present, but I'm not sure if there is an event I need to register for or what... And to be clear, I desire to get this information in real-time so I may update my UI to reflect what's occurring in the move sequence for the appropriate user (pretty much like the native functionality offered in the Management Console). And in case you are wondering, the reason why I'm not content with the Management Console functionality is that I have an algorithm which I'm using to balance users depending on storage limit, Blackberry use, journaling, exception mailbox size, etc., which demands users be mapped to specific locations... and I do not desire to create many/several move groups for each common destination, or to hunt for lists of users individually through the Management Console UI. I cannot seem to find any good documentation or examples of how to tie into reading the verbose messages that are provided within the console using C# (I see value in being able to read this kind of information in many different scenarios). I've explored the Invoke and InvokeAsync methods and the StateChanged & DataReady events, but none of these seem to provide the information (verbose comments) that I'm after. Any direction or examples that can be provided will be very much appreciated!
    A code sample, which is little more than how I would ordinarily call any other PowerShell command, follows:

        // config to use ExMgmt shell, create runspace and open it
        RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
        PSSnapInException snapInException = null;
        PSSnapInInfo info = rsConfig.AddPSSnapIn(
            "Microsoft.Exchange.Management.PowerShell.Admin", out snapInException);
        if (snapInException != null) throw snapInException;

        Runspace runspace = RunspaceFactory.CreateRunspace(rsConfig);
        try
        {
            runspace.Open();

            // create a pipeline and feed it the command
            Pipeline pipeline = runspace.CreatePipeline();
            string targetDatabase = @"myServer\myStorageGroup\myDB";
            string mbxOwner = "[email protected]";

            Command myMoveMailbox = new Command("Move-Mailbox", false, false);
            myMoveMailbox.Parameters.Add("Identity", mbxOwner);
            myMoveMailbox.Parameters.Add("TargetDatabase", targetDatabase);
            myMoveMailbox.Parameters.Add("Verbose");
            myMoveMailbox.Parameters.Add("ValidateOnly");
            myMoveMailbox.Parameters.Add("Confirm", false);
            pipeline.Commands.Add(myMoveMailbox);

            System.Collections.ObjectModel.Collection<PSObject> output = null;

            // these next few lines that are commented out are where I've tried
            // registering for events and calling asynchronously, but none of these
            // variations seem to get me anywhere closer
            //pipeline.StateChanged += new EventHandler<PipelineStateEventArgs>(pipeline_StateChanged);
            //pipeline.Output.DataReady += new EventHandler(Output_DataReady);
            //pipeline.InvokeAsync();
            //pipeline.Input.Close();
            //return;

            output = pipeline.Invoke();

            // check for errors in the pipeline and throw an exception if necessary
            if (pipeline.Error != null && pipeline.Error.Count > 0)
            {
                StringBuilder pipelineError = new StringBuilder();
                pipelineError.AppendFormat("Error calling Test() Cmdlet. ");
                foreach (object item in pipeline.Error.ReadToEnd())
                    pipelineError.AppendFormat("{0}\n", item.ToString());
                throw new Exception(pipelineError.ToString());
            }

            foreach (PSObject psObject in output)
            {
                // this is normally where I would read details about a particular
                // PS command, but it pertains to a command once it finishes and
                // has nothing to do with the verbose messages that I'm after...
                // since this part pertains to the after-effects of a command having
                // run, I'm suspecting I need to look to the async Invoke method,
                // but am not certain of how.
            }
        }
        finally
        {
            runspace.Close();
        }

    Thanks! Keith
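    One avenue worth exploring (an assumption on my part, not something from the original question): if you can target the PowerShell 2.0 runtime, the newer System.Management.Automation.PowerShell class surfaces verbose records on a separate stream whose DataAdded event fires in near real-time as the cmdlet emits each record. A minimal sketch follows; it uses Get-Process as a stand-in command, and loading the Exchange snap-in into the runspace is omitted:

        using System;
        using System.Management.Automation;

        class VerboseCaptureSketch
        {
            static void Main()
            {
                using (PowerShell ps = PowerShell.Create())
                {
                    ps.AddCommand("Get-Process");   // stand-in for Move-Mailbox in this sketch
                    ps.AddParameter("Verbose");     // ask the cmdlet to emit verbose records

                    // Verbose output never appears in the normal output collection;
                    // subscribe to the stream to see each record as it arrives.
                    ps.Streams.Verbose.DataAdded += (sender, e) =>
                    {
                        var verbose = (PSDataCollection<VerboseRecord>)sender;
                        Console.WriteLine("VERBOSE: " + verbose[e.Index].Message);
                    };

                    foreach (PSObject result in ps.Invoke())
                    {
                        // normal pipeline output is handled here, as before
                    }
                }
            }
        }

    The point is only that verbose records travel on their own stream with their own event; the Runspace/Pipeline code above never sees them.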

    Read the article

  • How to show form in front in C#

    - by corlettk
    Folks, please does anyone know how to show a Form from an otherwise invisible application, and have it get the focus (i.e. appear on top of other windows)? I'm working in C# .NET 3.5. I suspect I've taken "completely the wrong approach"... I do not Application.Run(new TheForm()); instead I (new TheForm()).ShowDialog()... The Form is basically a modal dialogue, with a few check-boxes, a text-box, and OK and Cancel buttons. The user ticks a checkbox and types in a description (or whatever), then presses OK; the form disappears and the process reads the user input from the Form, Disposes it, and continues processing. This works, except that when the form is shown it doesn't get the focus; instead it appears behind the "host" application, until you click on it in the taskbar (or whatever). This is a most annoying behaviour, which I predict will cause many "support calls", and the existing VB6 version doesn't have this problem, so I'm going backwards in usability... and users won't accept that (and nor should they). So... I'm starting to think I need to rethink the whole shebang... I should show the form up front, as a "normal application", and attach the remainder of the processing to the OK-button-click event. It should work, but that will take time which I don't have (I'm already over time/budget)... so first I really need to try to make the current approach work... even by quick-and-dirty methods. So please, does anyone know how to "force" a .NET 3.5 Form (by fair means or foul) to get the focus? I'm thinking "magic" Windows API calls (I know...).

    Twilight Zone: This only appears to be an issue at work, where I'm using Visual Studio 2008 on Windows XP SP3... I've just failed to reproduce the problem with an SSCCE (see below) at home on Visual C# 2008 on Vista Ultimate... This works fine. Huh? WTF? Also, I'd swear that at work yesterday it showed the form when I ran the EXE, but not when F5'ed (or Ctrl-F5'ed) straight from the IDE (which I just put up with)... At home the form shows fine either way. Totally confusterpating!

    It may or may not be relevant, but Visual Studio crashed-and-burned this morning when the project was running in debug mode and I was editing the code "on the fly"... it got stuck in what I presumed was an endless loop of error messages. The error message was something about "can't debug this project because it is not the current project", or something... So I just killed it off with Process Explorer. It started up again fine, and even offered to recover the "lost" file, an offer which I accepted.

        using System;
        using System.Windows.Forms;

        namespace ShowFormOnTop
        {
            static class Program
            {
                [STAThread]
                static void Main()
                {
                    Application.EnableVisualStyles();
                    Application.SetCompatibleTextRenderingDefault(false);
                    //Application.Run(new Form1());
                    Form1 frm = new Form1();
                    frm.ShowDialog();
                }
            }
        }

    Background: I'm porting an existing VB6 implementation to .NET... It's a "plugin" for a "client" GIS application called MapInfo. The existing client "worked invisibly" and my instructions are "to keep the new version as close as possible to the old version", which works well enough (after years of patching); it's just written in an unsupported language, so we need to port it. About me: I'm pretty much a noob to C# and .NET generally, though I've got a bottoms-wiping certificate, and I have been a professional programmer for 10 years, so I sort of "know some stuff". Any insights would be most welcome... and thank you all for taking the time to read this far. Conciseness isn't (apparently) my forte. Cheers. Keith.
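    For what it's worth, a sketch of the usual workarounds for exactly this symptom (not from the original post; Form1 stands in for TheForm): combine TopMost with Activate() once the form is shown, falling back to the Win32 SetForegroundWindow call:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        namespace ShowFormOnTop
        {
            static class Program
            {
                // last-resort Win32 call for when Activate() alone doesn't win focus
                [DllImport("user32.dll")]
                [return: MarshalAs(UnmanagedType.Bool)]
                static extern bool SetForegroundWindow(IntPtr hWnd);

                [STAThread]
                static void Main()
                {
                    Application.EnableVisualStyles();
                    Application.SetCompatibleTextRenderingDefault(false);

                    using (Form1 frm = new Form1())
                    {
                        frm.TopMost = true;                   // keep the dialog above other windows
                        frm.Shown += delegate                 // fires once the window handle exists
                        {
                            frm.Activate();                   // ask WinForms for focus
                            SetForegroundWindow(frm.Handle);  // then ask Win32 directly
                        };
                        frm.ShowDialog();
                    }
                }
            }
        }

    Note that Windows only grants foreground rights to a process that already owns the foreground window (or was started by it), so SetForegroundWindow can silently fail; that restriction may well explain why the behaviour differs between launching from the IDE and from the EXE.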

    Read the article

  • How Expedia Made My New Bride Cry

    - by Lance Robinson
    Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem, and endless loops of explaining the issue over and over again, I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans. If you are considering using them now for an important trip - reconsider.

    Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)! At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia. I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours. This is after 5 hours total at the airport. If you add up our phone time, that is nearly 8 hours of phone time over a 5-hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding.

    Details below (times are approximate): First, my wife and I were married yesterday - June 18th, the 3-year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok, enough mushy - here are the dirty details.

    2:30 AM - Early check-in attempt - we attempted to check in for our flight online. Some sort of technology error on the website; instructed to check in at the desk.

    4:30 AM - Arrive at airport. Try to check in at the kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia-provided itinerary does not match the Expedia-provided tickets. We are informed that when that happens, American, JetBlue, and others that use the same software cannot check you in for the flight. Why? Because the itinerary was missing a leg of our flight! Basically we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system.

    4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes, when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly, I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary, who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining.
    Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary gets connected, explains the situation, and then Mary's connection gets terminated.

    5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales; a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon - could they please help us. "Yes, I will help you". Again I give the phone to Mary, who provides them with a call-back number in case we get disconnected again and explains the situation again. More back and forth with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated.

    5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones.

    5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for not having the correct data. Meanwhile Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone with Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us.

    6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - so it no longer matters why (not to mention the fact that we already knew why, and have known why since 4:30 AM, and have known the solution since 4:30 AM). Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor, but that will take 1.5 hours.
    6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, I asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed our flight, and she immediately went into more about how the problem was with JetBlue - but now it was ALSO an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which now is about 1 hour and 10 minutes away).

    6:30 AM - I start tweeting my frustration with my iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. It's ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right.

    7:45 AM - JetBlue Mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why, I'm not exactly sure). I say ok.

    8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back.

    9:35 AM - I put down the iPhone Twitter app and pick up the laptop. You think I made some noise with my iPhone? Heh.

    11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details". If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response. The other 4? Ads. Um - #MultiFAIL? To Expedia: You now have had (as explained above) 8 hours of 3 different people explaining our situation, you know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number. You also know how upset you have made both me and my new bride by treating us in such a... non-caring, scripted, uncooperative, argumentative, and possibly even deceitful manner. In the wise words of the great Kenan Thompson of SNL: "FIX IT!". And no, I'm NOT going away until you make this right. Period.

    11:45 AM - Expedia corporate office called.
    The woman I spoke to was very nice and apologetic. She listened to me tell the story again; she says she understands the problem and that she is going to work to resolve it. I don't have any details on what exactly that resolution might be; she said she will call me back in 20 minutes. She found out about the problem via Twitter. Thank you Twitter, and all of you who helped. Hopefully social media will win my wife and me our honeymoon, and hopefully Expedia will encourage their customer service teams to treat their customers properly.

    12:22 PM - Spoke to Fran again from the Expedia corporate office. She has a flight for us tonight. She is booking it now. We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late. She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night. Thank you everyone for your help. I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed. For now, I'm going to take a breather and go kiss my wonderful wife!

    1:50 PM - Have not yet received the promised phone call. We did receive an email with a new itinerary for a flight, but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together. With the original booking I carefully selected our seats for every segment of our trip. I decided to call into the phone number that Fran from the Expedia corporate office gave me. Its automated voice system identified itself as "Tier 3 Support". I am currently still on hold with them; I have not gotten through to a human yet.

    1:55 PM - Fran from Expedia called me back. She confirmed us as booked. She called the airlines to confirm. Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection. It is possible that I won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back). In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom. Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade). Since they didn't offer - I asked, and was rejected. I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near-zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution. If this works out favorably for us, great. If not - I'm not done making noise, Expedia. You owe us, and I expect you to make it right. You haven't quite done that yet.

    Thanks - Thank you to Twitter. Thanks to all those who sympathized with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help. Thanks especially to my PowerShell and SharePoint friends, my local friends, and those connectors who encouraged me and spread my story.

    5:15 PM - Love Wins - After all this, Lauren and I are exhausted. We both took a short nap, and when we woke up we talked about the last 24 hours. It was a big, amazing, story-filled 24 hours. I said that Expedia won, but Lauren said no. She pointed out how lucky we are. We are in love and married.
    We have wonderful family and friends. We are both hard-working, successful people who love what we do. We get to go to an amazing exotic destination for our honeymoon like Veligandu in The Maldives... That's a lot of good. Expedia didn't win. This was (is) a big loss for Expedia. It is a public blemish for all to see. But Lauren and I did win, big time. Expedia may not have made things right - but things are right for us.

    Post in progress... I will relay any further comments (or lack thereof) from Expedia soon, as well as an update confirming their repayment of our lost resort room rates. I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
    Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line-of-business applications. Getting things set up correctly is critical to being able to take full advantage of the RIA Services plumbing, and when developers struggle with the setup they tend to shy away from the solution as a whole. I'm a big proponent of RIA Services and wanted to take the opportunity to share some of my experiences in setting up these types of projects. In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow), and the information shared in this post was promised during that presentation.

    One other thing I want to mention before diving in is the existence of a number of other great posts on this subject. I've learned a lot from many of them and wanted to call out a few of them. The purpose of my post is to point out some of the gotchas that people get caught up on in the process, but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject:

    http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx
    http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx

    Technologies

    I don't intend for this post to turn into a full WCF RIA Services tutorial, but I did want to point out what technologies we will be using:

    Visual Studio .NET 2010
    Silverlight 4.0
    WCF RIA Services for Visual Studio 2010
    Entity Framework 4.0

    I also wanted to point out that the screenshots came from my personal development box, which has a number of additional plug-ins and frameworks loaded, so a few of the screenshots might not match 100% with what you see on your own machines. If you do not have Visual Studio 2010 you can download the express version from http://www.microsoft.com/express. The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#... sorry to you VB folks, but the concepts are 100% identical.

    Setting up a new RIA Services Project

    This section will provide a step-by-step walkthrough of setting up a new RIA Services project using a shared DLL for server-side code and a simple Entity Framework model for data access. All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace. This would be modified to match your company's standards.

    First, open Visual Studio and open the new project window via File->New->Project. In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select "Silverlight Application" as your project type. Verify your solution name and location are set appropriately. Note that the project name we specified in the example below ends with .Client. This indicates the name which will be given to our Silverlight project; I consider Silverlight a client-side technology and thus use this name to reflect that. Click OK to continue.

    During the creation of a new Silverlight 4 project you will be prompted with the following dialog to create a new ASP.NET web project to host your Silverlight content. As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the "Enable WCF RIA Services" option is checked and click OK.
    Obviously, there are some other options here which have an effect on your solution, and you are welcome to look around. For our example we are going to leave the ASP.NET Web Application Project selected. If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project, these options are available as well. Also, whichever web project type you select, the name can be modified here; note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix.

    At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows:

    Now we want to add our WCF RIA Services projects to this same solution. To do so, right-click on the Solution node in the solution explorer and select Add->New Project. In the New Project dialog, again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below. Make sure your project name is set appropriately as well. For the sample below, we will name the project "ArchitectNow.RIAServices.Server.Entities". The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below). Click OK to continue.

    Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution. The first will be a project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension. The full solution (with all 4 projects) is shown in the image below. The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects. It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server. The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework). This is our server-side code and business logic, and the RIA Services plumbing will maintain a link between this project and the front end. Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end.

    At this point, we want to do a little cleanup of the projects in our solution, and we will do so by deleting the "Class1.cs" class from both the .Entities project and the .Entities.Web project. (Has anyone ever intentionally named a class "Class1"?)

    Next, we need to configure a few references to make RIA Services work. THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE! Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library). Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).
    To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select "Add Reference" from the resulting context menu. You will want to make sure these references are added as "Project" references to simplify your future debugging. To reiterate the reference direction using the project names we have utilized in this example thus far: .Client references .Entities, and .Client.Web references .Entities.Web. If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library and the ASP.NET host project must reference the server-side class library.

    Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web). We will do this by right-clicking on this project (ArchitectNow.Server.Entities.Web in the above diagram) and selecting Add->New Item. In the Add New Item dialog we will select ADO.NET Entity Data Model, as in the following diagram. For now we will call this simply SampleDataModel.edmx and click OK.

    It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data, and any data access technology is supported (as long as the server-side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post). We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine. The following diagram shows a simple EF data model with two tables that I reverse engineered from a local data store. If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access.

    At this point, once you have an EF data model generated as an EDMX in your .Entities.Web project, YOU MUST BUILD YOUR SOLUTION. I know it seems strange to call that out, but it is important that the solution be built at this point for the next step to be successful. Obviously, if you have any build errors, these must be addressed at this point.

    At this point we will add a RIA Services Domain Service to our .Entities.Web project (our server-side code). We will need to right-click on the .Entities.Web project and select Add->New Item. In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below). Next, click "Add".

    After clicking "Add" to include the Domain Service Class in the selected project, you will be presented with the following dialog. In it, you can choose which entities from the selected EDMX to include in your services and whether they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service. If the "Available DataContext/ObjectContext classes" dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX. I would also recommend verifying that the "Generate associated classes for metadata" option is selected. Once you have selected the appropriate options, click "OK".
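    To give a feel for what the generated domain service contains, here is a minimal sketch of a typical WCF RIA Services domain service built over an EF 4 ObjectContext. The context and entity names (SampleEntities, User) are illustrative assumptions, not the actual names generated from the post's sample database:

        using System.Linq;
        using System.ServiceModel.DomainServices.EntityFramework;
        using System.ServiceModel.DomainServices.Hosting;

        // Exposes query and update operations over the EF model to the Silverlight client.
        [EnableClientAccess()]
        public class SampleService : LinqToEntitiesDomainService<SampleEntities>
        {
            // Query method: surfaced to the client as a load operation.
            public IQueryable<User> GetUsers()
            {
                return this.ObjectContext.Users.OrderBy(u => u.LastName);
            }

            // Insert method: invoked when the client submits a new User.
            public void InsertUser(User user)
            {
                this.ObjectContext.Users.AddObject(user);
            }
        }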
    Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following:

    Note that in the solution you now have a SampleDataModel.edmx, which represents your EF data mapping to your database, and a SampleService.cs, which will contain a large amount of generated RIA Services code that RIA Services utilizes to access this data from the Silverlight front-end. You will put all your server-side data access code and logic into the SampleService.cs class. The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes.

    FINAL AND KEY CONFIGURATION STEP! One key step that causes significant headaches for developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project. While we didn't point it out at the time, you can see it in the image above. This connection string will be required for the EF data model to successfully locate its data. Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.

    Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA Services calls through the ASP.NET host project (i.e. .Client.Web). This host project has a reference to the .Entities.Web project, which actually contains the code, so all will pass through correctly EXCEPT for the fact that the host project will utilize its own Web.Config for any configuration settings. For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project. I know this is a bit tedious and I wish there were a simpler solution, but it is required for our RIA Services Domain Service to be made available to the front-end Silverlight project. Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config. Unfortunately, the <system.webServer> section will exist in both, and the contents of this section will need to be manually merged. Fortunately, this is a step that needs to be taken only once per solution. As you add additional data structures and Domain Services methods to the server, no additional changes will be necessary to the Web.Config.

    Next Steps

    At this point, we have walked through the basic setup of a simple RIA Services solution. Unfortunately, there is still a lot to know about RIA Services, and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven't even made a single RIA Services call). I plan on posting a few more introductory posts over the next few weeks to take us to this step. If you have any questions on the content in this post, feel free to reach out to me via this blog and I'll gladly point you in (hopefully) the right direction.

    Resources

    Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA Services. While I plan on posting more on the subject, I didn't invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place.
    The books and online resources below will go a long way to making you extremely productive with RIA Services in the shortest time possible. The only thing required of you is the dedication to take advantage of the resources available.

    Books
    Pro Business Applications with Silverlight 4: http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2
    Silverlight 4 in Action: http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1
    Pro Silverlight for the Enterprise (Books for Professionals by Professionals): http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3

    Web Content
    RIA Services:
    http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes
    http://silverlight.net/riaservices/
    http://www.silverlight.net/learn/videos/all/net-ria-services-intro/
    http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/
    http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices
    http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01
    http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services
    http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx
    Silverlight:
    www.silverlight.net
    http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx
    http://channel9.msdn.com/shows/silverlighttv

    Read the article

  • JPA Entity (in multiple persistence units) in OSGi (Spring DM) Environment is confusing me.

    - by Vincent Demeester
    Hi, I'm a bit confused about a strange behavior of my JPA-related objects. I have three bundles:

    The User bundle contains some user-related objects, mainly the User object. The Energy bundle contains some energy-related objects, particularly a ConsumptionTerminal which contains a List of User. The Index bundle contains an Index object that has no dependencies at all.

    My OSGi environment is the following: a DataSource bundle that provides 2 services, dataSource and jpaVendorAdapter, plus the three bundles above, which consume dataSource and jpaVendorAdapter. Their module-context.xml files look like: [...] And they all have a persistence.xml file:

    User:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence>
          <persistence-unit name="securityPU" transaction-type="JTA">
            <jta-data-source>java:/securityDataSourceService</jta-data-source>
            <class>net.nextep.amundsen.security.domain.User</class>
            <!-- [...] -->
            <exclude-unlisted-classes>true</exclude-unlisted-classes>
            <properties>
              <property name="eclipselink.logging.level" value="INFO" />
              <property name="eclipselink.ddl-generation" value="create-tables" />
              <property name="eclipselink.ddl-generation.output-mode" value="database" />
              <property name="eclipselink.orm.throw.exceptions" value="true" />
            </properties>
          </persistence-unit>
        </persistence>

    Energy:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence>
          <persistence-unit name="energyPU" transaction-type="JTA">
            <jta-data-source>java:/securityDataSourceService</jta-data-source>
            <class>net.nextep.amundsen.security.domain.User</class>
            <class>net.nextep.amundsen.energy.domain.User</class>
            <!-- [...] -->
            <exclude-unlisted-classes>true</exclude-unlisted-classes>
            <properties>
              <property name="eclipselink.logging.level" value="INFO" />
              <property name="eclipselink.ddl-generation" value="create-tables" />
              <property name="eclipselink.ddl-generation.output-mode" value="database" />
              <property name="eclipselink.orm.throw.exceptions" value="true" />
            </properties>
          </persistence-unit>
        </persistence>

    Index: This one has the simplest persistence.xml, with just the Index class (no shared class).

    I'm using the named @PersistenceUnit annotation, like @PersistenceUnit(unitName = "securityPU") (for the User bundle). And finally, I'm using EclipseLink as the JPA provider and Spring DM (+ Spring DM Server in the development process).

    The problem is the following: when the User bundle is deployed, I'm able to persist User objects. When the User bundle and the Energy bundle are both deployed, I'm not able to persist User objects (nor Energy objects). But I don't get any exception at all! There is no problem at all with the Index bundle. The bug is dataSource independent (I tried with PostgreSQL and MySQL so far).

    My first conclusion was that the <class>net.nextep.amundsen.security.domain.User</class> entry in both persistence units was causing the trouble. I tried without it (hiding the User-dependent objects in the Energy bundle) but it failed too.

    I'm a bit confused about this bug. I'm also not quite sure about the transaction management in this context. I wasn't the one who designed this architecture (but I told my intern OK without testing it... shame on me), yet if I could understand this bug and maybe fix it without rewriting the bundle (and breaking my intern's work), I would appreciate it. Am I doing something wrong? (It's obvious, but what...) Did I miss something while reading the documentation?
    By the way, I'm also looking for some best practices or advice when it comes to JPA, EclipseLink (or whatever JPA provider) and Spring DM (and OSGi in general). I found interesting slides from Mike Keith about this topic (by browsing Stackoverflow).
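    For context on how a named unit like securityPU is typically consumed in a Spring-managed bundle, here is a minimal sketch (not from the original question; the UserDao class and its method are hypothetical, and @PersistenceContext injects an EntityManager where @PersistenceUnit would inject an EntityManagerFactory):

        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        // Hypothetical DAO in the User bundle, bound to the securityPU unit.
        public class UserDao {

            @PersistenceContext(unitName = "securityPU")
            private EntityManager em;

            public void save(User user) {
                // with transaction-type="JTA", the actual commit happens when the
                // surrounding (Spring-managed) transaction completes
                em.persist(user);
            }
        }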

    Read the article

< Previous Page | 13 14 15 16 17