Search Results

Search found 6031 results on 242 pages for 'grid infrastructure'.

Page 128/242 | < Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >

  • Fetch a specific tag from Rally in order to compute a value in another field

    - by 4jas
    I'm extremely new to Rally development, so my question may sound dumb (I couldn't find an answer in Rally's help or in previous posts here). :) I started from the Rally freeform grid example. My purpose is to implement a Business Value calculator: I fill the Score field with a 5-digit figure where each digit is a score in the 1-5 range, then compute a Business Value as the result of a calculation in which each digit is weighted by a preset weight. I can sort my stories by Business Value to help me prioritize my backlog: that is the first step, and it works. Now I want to make my freeform grid editable: I extract each of my digits as a separate column, but those columns are display-only. How can I turn them into something editable? What I want to do, of course, is update the Score field from the values entered in each custom column.

    Here is an example: I have a record with score "15254", which means Business Value criterion 1 scores 1 out of 5, Business Value criterion 2 scores 5 out of 5, and so on. In the end my Business Value is computed as 1*1 + 5*2 + 2*3 + 5*4 + 4*5 = 57. So far this is the part that works. Now let's say I found that the third criterion should score 3 rather than 2: I want to be able to edit the value in the corresponding column and have my Score field updated to "15354", and my Business Value display 60 instead of 57. Here is my current code; I'll be really grateful if you can help me turn that grid into something editable. :)

        <!-- Include SDK -->
        <script type="text/javascript" src="https://rally1.rallydev.com/apps/2.0p2/sdk-debug.js"></script>

        <!-- App code -->
        <script type="text/javascript">
        Rally.onReady(function() {
            Ext.define('BVApp', {
                extend: 'Rally.app.App',
                componentCls: 'app',

                launch: function() {
                    Ext.create('Rally.data.WsapiDataStore', {
                        model: 'UserStory',
                        autoLoad: true,
                        listeners: {
                            load: this._onDataLoaded,
                            scope: this
                        }
                    });
                },

                _onDataLoaded: function(store, data) {
                    var records = [];
                    var li_score;
                    var li_bv1, li_bv2, li_bv3, li_bv4, li_bv5, li_bvtotal;
                    var weights = new Array(1, 2, 3, 4, 5);

                    Ext.Array.each(data, function(record) {
                        // Let's fetch score and compute the business values...
                        li_score = record.get('Score');
                        if (li_score) {
                            li_bv1 = li_score.toString().substring(0,1);
                            li_bv2 = li_score.toString().substring(1,2);
                            li_bv3 = li_score.toString().substring(2,3);
                            li_bv4 = li_score.toString().substring(3,4);
                            li_bv5 = li_score.toString().substring(4,5);
                            li_bvtotal = li_bv1*weights[0] + li_bv2*weights[1] + li_bv3*weights[2] +
                                         li_bv4*weights[3] + li_bv5*weights[4];
                        }
                        records.push({
                            FormattedID: record.get('FormattedID'),
                            ref: record.get('_ref'),
                            Name: record.get('Name'),
                            Score: record.get('Score'),
                            Bv1: li_bv1,
                            Bv2: li_bv2,
                            Bv3: li_bv3,
                            Bv4: li_bv4,
                            Bv5: li_bv5,
                            BvTotal: li_bvtotal
                        });
                    });

                    this.add({
                        xtype: 'rallygrid',
                        store: Ext.create('Rally.data.custom.Store', {
                            data: records,
                            pageSize: 5
                        }),
                        columnCfgs: [
                            { text: 'FormattedID', dataIndex: 'FormattedID' },
                            { text: 'ref', dataIndex: 'ref' },
                            { text: 'Name', dataIndex: 'Name', flex: 1 },
                            { text: 'Score', dataIndex: 'Score' },
                            { text: 'BusVal 1', dataIndex: 'Bv1' },
                            { text: 'BusVal 2', dataIndex: 'Bv2' },
                            { text: 'BusVal 3', dataIndex: 'Bv3' },
                            { text: 'BusVal 4', dataIndex: 'Bv4' },
                            { text: 'BusVal 5', dataIndex: 'Bv5' },
                            { text: 'BusVal Total', dataIndex: 'BvTotal' }
                        ]
                    });
                }
            });

            Rally.launchApp('BVApp', {
                name: 'Business Values App'
            });

            var exampleHtml = '<div id="example-intro"><h1>Business Values App</h1>' +
                '<div>Own sample app for Business Values</div>' +
                '</div>';

            // Default app viewport uses layout: 'fit',
            // so we need to insert a container into the viewport
            var viewport = Ext.ComponentQuery.query('viewport')[0];
            var appComponent = viewport.items.getAt(0);
            var viewportContainerItems = [{ html: exampleHtml, border: 0 }];

            // hide advanced cardboard live previews in examples for now
            viewportContainerItems.push({
                xtype: 'container',
                items: [appComponent]
            });
            viewport.remove(appComponent, false);
            viewport.add({
                xtype: 'container',
                layout: 'vbox',
                items: viewportContainerItems
            });
        });
        </script>

        <!-- App styles -->
        <style type="text/css">
        .app {
            /* Add app styles here */
        }
        </style>
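    To make the recompute step concrete, here is a minimal, framework-free sketch of the digit/weight logic the grid would have to run whenever one of the custom columns changes. The helper names (scoreToDigits, digitsToScore, businessValue) are illustrative only and are not part of the Rally SDK; wiring them to an editable column is an assumption about how the app would be structured, not working Rally code.

        // Hypothetical helpers: pure string/number logic, no Rally SDK calls.
        var WEIGHTS = [1, 2, 3, 4, 5];

        // "15254" -> [1, 5, 2, 5, 4]
        function scoreToDigits(score) {
            return String(score).split('').map(Number);
        }

        // [1, 5, 3, 5, 4] -> "15354"
        function digitsToScore(digits) {
            return digits.join('');
        }

        // Weighted sum: 1*1 + 5*2 + 2*3 + 5*4 + 4*5 = 57
        function businessValue(digits) {
            return digits.reduce(function (sum, d, i) {
                return sum + d * WEIGHTS[i];
            }, 0);
        }

        // Example: the user edits criterion 3 (index 2) from 2 to 3.
        var digits = scoreToDigits('15254');
        digits[2] = 3;
        console.log(digitsToScore(digits));  // "15354"
        console.log(businessValue(digits));  // 60

    In the grid itself, the edited cell value would feed into digits, the recomputed string would be written back to the Score field, and BvTotal would be refreshed from businessValue.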

    Read the article

  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail: "The most common memory leaks for web applications involve circular references between the JavaScript script engine and the browser's C++ objects implementing the DOM (e.g. between the JavaScript script engine and Internet Explorer's COM infrastructure, or between the JavaScript engine and Firefox's XPCOM infrastructure)." It lists two examples of circular reference patterns:

    DOM element → event handler → closure scope → DOM element
    DOM element → via expando → intermediary object → DOM element

    However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it is even possible for an event handler to avoid a reference cycle, because the way I see it: the DOM element references the event handler, and the event handler references the DOM (either directly or indirectly). In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue. Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem? Things I'd like clarified:

    What browsers are affected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected.

    Are expando properties and event handlers somehow different concerning memory leaks, or are both of these code snippets susceptible to the same kind of memory leak?

        // Create an expando that references its own element.
        var elem = document.getElementById('foo');
        elem.myself = elem;

        // Create an event handler that references its own element.
        var elem = document.getElementById('foo');
        elem.onclick = function() {
            elem.style.display = 'none';
        };

    If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?
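    For what it's worth, the usual way handler code sidesteps the closure half of that cycle is sketched below. This is an illustrative pattern rather than browser-specific guidance: the handler reaches the element through this at call time instead of closing over the outer variable, and the local reference is cleared once the wiring is done.

        // A sketch: attach a handler without giving its closure a reference
        // back to the element it is attached to.
        function wireUp() {
            var elem = document.getElementById('foo');

            elem.onclick = function () {
                // `this` is the clicked element, so the closure never needs `elem`.
                this.style.display = 'none';
            };

            // For engines that retain the whole activation object in the closure,
            // clear the local once wiring is done.
            elem = null;
        }
        wireUp();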

    Read the article

  • Find the set of largest contiguous rectangles to cover multiple areas

    - by joelpt
    I'm working on a tool called Quickfort for the game Dwarf Fortress. Quickfort turns spreadsheets in csv/xls format into a series of commands for Dwarf Fortress to carry out in order to plot a "blueprint" within the game. I am currently trying to optimally solve an area-plotting problem for the 2.0 release of this tool.

    Consider the following "blueprint" which defines plotting commands for a 2-dimensional grid. Each cell in the grid should either be dug out ("d"), channeled ("c"), or left unplotted ("."). Any number of distinct plotting commands might be present in actual usage.

        . d . d c c
        d d d d c c
        . d d d . c
        d d d d d c
        . d . d d c

    To minimize the number of instructions that need to be sent to Dwarf Fortress, I would like to find the set of largest contiguous rectangles that can be formed to completely cover, or "plot", all of the plottable cells. To be valid, all of a given rectangle's cells must contain the same command. This is a faster approach than Quickfort 1.0 took: plotting every cell individually as a 1x1 rectangle. This video shows the performance difference between the two versions. For the above blueprint, the solution looks like this:

        . 9 . 0 3 2
        8 1 1 1 3 2
        . 1 1 1 . 2
        7 1 1 1 4 2
        . 6 . 5 4 2

    Each same-numbered rectangle above denotes a contiguous rectangle. The largest rectangles take precedence over smaller rectangles that could also be formed in their areas. The order of the numbering/rectangles is unimportant.

    My current approach is iterative. In each iteration, I build a list of the largest rectangles that could be formed from each of the grid's plottable cells by extending in all 4 directions from the cell. After sorting the list largest first, I begin with the largest rectangle found, mark its underlying cells as "plotted", and record the rectangle in a list. Before plotting each rectangle, its underlying cells are checked to ensure they are not yet plotted (overlapping a previous plot). We then start again, finding the largest remaining rectangles that can be formed and plotting them until all cells have been plotted as part of some rectangle.

    I consider this approach slightly more optimized than a dumb brute-force search, but I am wasting a lot of cycles (re)calculating cells' largest rectangles and checking underlying cells' states. Currently, this rectangle-discovery routine takes the lion's share of the total runtime of the tool, especially for large blueprints. I have sacrificed some accuracy for the sake of speed by only considering rectangles from cells which appear to form a rectangle's corner (determined using some neighboring-cell heuristics which aren't always correct). As a result of this 'optimization', my current code doesn't actually generate the above solution correctly, but it's close enough.

    More broadly, I consider the goal of largest-rectangles-first to be a "good enough" approach for this application. However I observe that if the goal is instead to find the minimum set (fewest number) of rectangles to completely cover multiple areas, the solution would look like this instead:

        . 3 . 5 6 8
        1 3 4 5 6 8
        . 3 4 5 . 8
        2 3 4 5 7 8
        . 3 . 5 7 8

    This second goal actually represents a more optimal solution to the problem, as fewer rectangles usually means fewer commands sent to Dwarf Fortress. However, this approach strikes me as closer to NP-Hard, based on my limited math knowledge.
    Watch the video if you'd like to better understand the overall strategy; I have not addressed other aspects of Quickfort's process, such as finding the shortest cursor-path that plots all rectangles. Possibly there is a solution to this problem that coherently combines these multiple strategies. Help of any form would be appreciated.
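    For reference, the greedy strategy described above can be sketched in a few dozen lines. The following is a deliberately brute-force illustration in plain JavaScript (chosen to match the other code on this page, not Quickfort's own implementation), and it omits the corner heuristics mentioned above, so it favours clarity over speed.

        // A brute-force sketch of the greedy "largest rectangle first" loop.
        function largestRects(grid) {
            var rows = grid.length, cols = grid[0].length;
            // '.' cells have nothing to plot, so treat them as already done.
            var plotted = grid.map(function (row) {
                return row.map(function (cell) { return cell === '.'; });
            });
            var rects = [];

            function remaining() {
                return plotted.some(function (row) {
                    return row.some(function (done) { return !done; });
                });
            }

            // Largest same-command, fully unplotted rectangle whose top-left is (r, c).
            function bestAt(r, c) {
                var cmd = grid[r][c], best = null, maxW = cols - c;
                for (var h = 1; r + h <= rows; h++) {
                    for (var w = 1; w <= maxW; w++) {
                        var y = r + h - 1, x = c + w - 1;
                        if (grid[y][x] !== cmd || plotted[y][x]) { maxW = w - 1; break; }
                    }
                    if (maxW === 0) break;
                    if (!best || h * maxW > best.area) {
                        best = { r: r, c: c, h: h, w: maxW, area: h * maxW, cmd: cmd };
                    }
                }
                return best;
            }

            while (remaining()) {
                var best = null;
                for (var r = 0; r < rows; r++) {
                    for (var c = 0; c < cols; c++) {
                        if (plotted[r][c]) { continue; }
                        var cand = bestAt(r, c);
                        if (!best || cand.area > best.area) { best = cand; }
                    }
                }
                // Mark the winning rectangle as plotted and record it.
                for (var i = best.r; i < best.r + best.h; i++) {
                    for (var j = best.c; j < best.c + best.w; j++) { plotted[i][j] = true; }
                }
                rects.push(best);
            }
            return rects;
        }

        // The blueprint from the question:
        var blueprint = [
            ['.', 'd', '.', 'd', 'c', 'c'],
            ['d', 'd', 'd', 'd', 'c', 'c'],
            ['.', 'd', 'd', 'd', '.', 'c'],
            ['d', 'd', 'd', 'd', 'd', 'c'],
            ['.', 'd', '.', 'd', 'd', 'c']
        ];
        console.log(largestRects(blueprint));

    Every pass rescans the whole grid, so the asymptotic cost is high; the corner heuristics the question describes exist precisely to prune those rescans.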

    Read the article

  • Web Services with Android - why no support for WSDL?

    - by Itsik
    I'm creating a Client/Server application with Android (Client) and WCF (Web Service). From reading quite a lot of discussions, I'm under the impression that there is no tool available to create the web service client in Android automatically from a WSDL file. If that is the situation, what is the easiest approach for creating a communication infrastructure between the client and server that can be updated easily in the future (plain GET, REST, or SOAP with manually parsed responses)? Initially, I wanted to build the web service and have the client created automatically from the provided WSDL file. Thanks

    Read the article

  • Crystal Reports: How to add an external assembly class?

    - by Sunil
    I am using VS2010, Crystal Reports 13 and MVC3. My problem is that I am unable to add an external assembly in Crystal Reports using the "Database Expert" option. I have a class named WeeklyReportModel in an external assembly; in my web project, data is retrieved from the database as an IEnumerable collection of WeeklyReportModel. I tried Project Data - .NET Objects in Crystal Reports to add WeeklyReportModel, but the external assembly does not show up under ".NET Objects". Then I tried the other option, Create New Connection - ADO.NET - Make New Connection, and pointed it at the external assembly. It was added under the ADO.NET node, but expanding it displays "...no items found...". Totally frustrated. Please help.

    External assembly class:

        namespace SMS.Domain
        {
            public class WeeklyReportModel
            {
                public int StoreId { get; set; }
                public string StoreName { get; set; }
                public decimal Saturday { get; set; }
                public decimal Sunday { get; set; }
                public decimal Monday { get; set; }
                public decimal Tuesday { get; set; }
                public decimal Wednesday { get; set; }
                public decimal Thurday { get; set; }
                public decimal Friday { get; set; }
                public decimal Average { get; set; }
                public string DateRange { get; set; }
            }
        }

    Controller action (data retrieved as a collection of WeeklyReportModel):

        namespace SMS.UI.Controllers
        {
            public class ReportController : Controller
            {
                public ActionResult StoreWeeklyReport(string id)
                {
                    DateTime weekStart, weekClose;
                    string[] dateArray = id.Split('_');
                    weekStart = Convert.ToDateTime(dateArray[0].ToString());
                    weekClose = Convert.ToDateTime(dateArray[1].ToString());
                    SMS.Infrastructure.Report.AuditReport weeklyReport = new SMS.Infrastructure.Report.AuditReport();
                    IEnumerable<SMS.Domain.WeeklyReportModel> weeklyRpt = weeklyReport.ReportByStore().WeeklyReport(weekStart, weekClose);
                    Session["WeeklyData"] = weeklyRpt;
                    Response.Redirect("~/Reports/Weekly/StoreWeekly.aspx");
                    return View();
                }
            }
        }

    Thanks in advance.

    Read the article

  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
    Hi, I am using NHibernate (version 1.2.1) for the first time, so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables: Persons and Categories. Each person gets one category; seems easy enough.

        | Persons      |     | Categories   |
        |--------------|     |--------------|
        | Id (PK)      |     | Id (PK)      |
        | Firstname    |     | CategoryName |
        | Lastname     |     | CreatedTime  |
        | CategoryId   |     | UpdatedTime  |
        | CreatedTime  |     | Deleted      |
        | UpdatedTime  |
        | Deleted      |

    The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to bring this fact into an additional abstraction layer. I have a project DatabaseFramework which has three important classes:

    Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity.
    IEntityManager: a generic interface (type parameter as Entity) that defines methods like Load, Insert, Update, etc.
    NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc.

    Now, the Person and Category classes are straightforward; they just define the attributes of the tables (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my webpage I will also need the name of this category (CategoryName), for databinding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category property, or an empty string if the Category is null:

        Namespace Database
            Public Class Person
                Inherits DatabaseFramework.Entity

                Public Overridable Property Firstname As String
                Public Overridable Property Lastname As String
                Public Overridable Property Category As Category

                Public Overridable ReadOnly Property CategoryName As String
                    Get
                        Return If(Me.Category Is Nothing, _
                                  String.Empty, _
                                  Me.Category.CategoryName)
                    End Get
                End Property
            End Class
        End Namespace

    I am mapping the Person class using this mapping file. The many-to-one relation was suggested by Yads in another thread (the surrounding root class node is not shown; the forum hides it and I don't know how to escape the HTML-like tags):

        <id name="Id" column="Id" type="int" unsaved-value="0">
            <generator class="identity" />
        </id>
        <property name="CreatedTime" type="DateTime" not-null="true" />
        <property name="UpdatedTime" type="DateTime" not-null="true" />
        <property name="Deleted" type="Boolean" not-null="true" />
        <property name="Firstname" type="String" />
        <property name="Lastname" type="String" />
        <many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" />

    The final important detail is the Load method of the NHibernateEntityManager implementation. (This is in C# as it's in a different project, sorry about that.) I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use it to fill an EntityCollection(Of TEntity), which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T).

        public virtual EntityCollection<TEntity> Load()
        {
            using (ISession session = this.GetSession())
            {
                var entities = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false))
                    .List<TEntity>();

                return new EntityCollection<TEntity>(entities);
            }
        }

    Now, the idea of this Load method is that I get a fully functional collection of Persons, all their properties set to the correct values (including the Category property, and thus the CategoryName property should return the correct name). However, it seems that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this:

        Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception: 'Could not initialize proxy - the owning Session was closed.'

    The exception occurs on the DataBind method call here:

        public virtual void LoadGrid()
        {
            if (this.Grid == null) return;

            this.Grid.DataSource = this.Manager.Load();
            this.Grid.DataBind();
        }

    Well, of course the session is closed; I closed it via the using block. Isn't that the correct approach? Should I keep the session open, and for how long? Can I close it after the DataBind method has been run? In any case, I'd really like my Load method to just return a functional collection of items. It seems to me that it is now only getting the Category when it is required (e.g. when the GridView wants to read the CategoryName, which wants to read the Category property), but at that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!

    Read the article

  • Will "Programming in the Cloud" Ever Take off?

    - by Pierreten
    Just got a chance to try out a cloud programming environment that lets you develop .NET apps within the browser (http://coderun.com/ide/). I found it pretty interesting, since I was able to develop a mockup ASP.NET site on an iPad. With JavaScript engines in browsers becoming faster and faster, and cheap server infrastructure to compile on, will the cloud become the IDE platform of the future?

    Read the article

  • Breaking out of the Google App Engine Python lock-in?

    - by Alterlife
    Are there any guidelines for writing Google App Engine Python code that would work without Google's infrastructure on other platforms? Is there any known attempt to create an open source framework which can run applications designed for Google App Engine on other platforms? Edit: To clarify, the question really is: if I develop an application on Google App Engine now, will I be able to migrate to another platform later, or is it a lock-in?

    Read the article

  • Uploading files to a server that has a real-time antivirus scan running

    - by zecougar
    I need to allow users to upload files onto a server that has an antivirus program running with real-time scanning switched on. What would be a good design to ensure that infected files are not uploaded to the server? Questions: Would large files be copied onto disk and then immediately scanned, or would they be scanned as they are copied and never allowed to appear on disk if infected? Should I build a separate infrastructure around this to specifically invoke a scan on the copied file? This might be an issue if the file is deleted by the real-time scan.

    Read the article

  • Move from TFS 2008 to TFS 2010

    - by raffaeu
    We have successfully built our TFS 2010 infrastructure and the first VM using Visual Studio 2010. Now I have a very simple question: how can I move a solution from our existing TFS 2008 to the new TFS 2010? Is there any tool included in TFS?

    Read the article

  • What parallel computing APIs make good use of sockets?

    - by Ole Jak
    Which parallel computing APIs make good use of sockets? My program uses sockets, so which parallel computing APIs could help me without forcing me to move from sockets to anything else? In other words, when we are on a cluster with some special, non-socket infrastructure, the API should emulate something like sockets but use that infrastructure underneath (so the program performs much faster than on plain sockets, but keeps the nice sockets API).

    Read the article

  • MEF CompositionInitializer for WPF

    - by Reed
    The Managed Extensibility Framework is an amazingly useful addition to the .NET Framework. I was very excited to see System.ComponentModel.Composition added to the core framework. Personally, I feel that MEF is one tool I've always been missing in my .NET development. Unfortunately, one scenario where MEF tends to fall short of its full potential is Windows Presentation Foundation development. In particular, there are many times when the XAML parser constructs objects in WPF development, which makes composition of those parts difficult. The current release of MEF (Preview Release 9) addresses this for Silverlight developers via System.ComponentModel.Composition.CompositionInitializer. However, there is no equivalent class for WPF developers.

    The CompositionInitializer class provides the means for an object to compose itself. This is very useful with WPF and Silverlight development, since it allows a View, such as a UserControl, to be generated via the standard XAML parser, and still automatically pull in the appropriate ViewModel in an extensible manner. Glenn Block has demonstrated the usage for Silverlight in detail, but the same issues apply in WPF.

    As an example, let's take a look at a very simple case. Take the following XAML for a Window:

        <Window x:Class="WpfApplication1.MainView"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="220" Width="300">
            <Grid>
                <TextBlock Text="{Binding TheText}" />
            </Grid>
        </Window>

    This does nothing but create a Window, add a simple TextBlock control, and use it to display the value of our "TheText" property in our DataContext class. Since this is our main window, WPF will automatically construct and display this Window, so we need to handle constructing the DataContext and setting it ourselves.

    We could do this in code or in XAML, but in order to do it directly, we would need to hard code the ViewModel type directly into our XAML code, or we would need to construct the ViewModel class and set it in the code behind. Both have disadvantages, and the disadvantages grow if we're using MEF to compose our ViewModel. Ideally, we'd like to be able to have MEF construct our ViewModel for us. This way, it can provide any construction requirements for our ViewModel via [ImportingConstructor], and it can handle fully composing the imported properties on our ViewModel. CompositionInitializer allows this to occur. We use CompositionInitializer within our View's constructor, and use it for self-composition of our View. Using CompositionInitializer, we can modify our code behind to:

        public partial class MainView : Window
        {
            public MainView()
            {
                InitializeComponent();
                CompositionInitializer.SatisfyImports(this);
            }

            [Import("MainViewModel")]
            public object ViewModel
            {
                get { return this.DataContext; }
                set { this.DataContext = value; }
            }
        }

    We then can add an Export on our ViewModel class like so:

        [Export("MainViewModel")]
        public class MainViewModel
        {
            public string TheText
            {
                get { return "Hello World!"; }
            }
        }

    MEF will automatically compose our application, deferring the injection of our ViewModel into the DataContext of our View until runtime. When we run this, the window displays the "Hello World!" text provided by the composed ViewModel.

    There are many other approaches for using MEF to wire up the extensible parts within your application, of course. However, any time an object is going to be constructed by code outside of your control, CompositionInitializer allows us to continue to use MEF to satisfy the import requirements of that object.

    In order to use this from WPF, I've ported the code from MEF Preview 9 and Glenn Block's (now obsolete) PartInitializer port to Windows Presentation Foundation. There are some subtle changes from the Silverlight port, mainly to handle running in a desktop application context. The default behavior of my port is to construct an AggregateCatalog containing a DirectoryCatalog set to the location of the entry assembly of the application. In addition, if an "Extensions" folder exists under the entry assembly's directory, a second DirectoryCatalog for that folder will be included. This behavior can be overridden by specifying a CompositionContainer or one or more ComposablePartCatalogs to the System.ComponentModel.Composition.Hosting.CompositionHost static class prior to the first use of CompositionInitializer.

    Please download CompositionInitializer and CompositionHost for VS 2010 RC, and contact me with any feedback: Composition.Initialization.Desktop.zip

    Edit on 3/29: Glenn Block has since updated his version of CompositionInitializer (and ExportFactory<T>!), and made it available here: http://cid-f8b2fd72406fb218.skydrive.live.com/self.aspx/blog/Composition.Initialization.Desktop.zip This is a .NET 3.5 solution, and should soon be pushed to CodePlex, and made available on the main MEF site.

    Read the article

  • Blending the Sketchflow Action

    - by GeekAgilistMercenary
    Started a new Sketchflow prototype in Expression Blend recently and documented each of the steps. This blog entry covers some of those steps, which are the basic elements of any prototype. I will have more information regarding design, prototype creation, and the initial phases of development in the future. For now, I hope you enjoy this short walkthrough. Also, be sure to check out my last quick entry on Sketchflow.

    I started off with a Sketchflow project, just like I did in my previous entry (more specifics in that entry about how to manipulate and build out the Sketchflow Map). Once I created the project I set up the following Sketchflow Map.

    The CoreNavigation is a Component Screen set up solely for the page navigation at the top of the screen. The XAML markup, in case you want to create a Component Screen with the same design, is included below.

        <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     mc:Ignorable="d"
                     xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
                     xmlns:pb="clr-namespace:Microsoft.Expression.Prototyping.Behavior;assembly=Microsoft.Expression.Prototyping.Interactivity"
                     x:Class="RapidPrototypeSketchScreens.CoreNavigation"
                     d:DesignWidth="624" d:DesignHeight="49" Height="49" Width="624">

            <Grid x:Name="LayoutRoot">
                <TextBlock HorizontalAlignment="Stretch" Margin="307,3,0,0"
                           Style="{StaticResource TitleCenter-Sketch}"
                           Text="Aütøchart Scorecards" TextWrapping="Wrap">
                    <i:Interaction.Triggers>
                        <i:EventTrigger EventName="MouseLeftButtonDown">
                            <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1"/>
                        </i:EventTrigger>
                    </i:Interaction.Triggers>
                </TextBlock>
                <Button HorizontalAlignment="Left" Margin="164,8,0,11"
                        Style="{StaticResource Button-Sketch}" Width="144" Content="Scorecard">
                    <i:Interaction.Triggers>
                        <i:EventTrigger EventName="Click">
                            <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1_2"/>
                        </i:EventTrigger>
                    </i:Interaction.Triggers>
                </Button>
                <Button HorizontalAlignment="Left" Margin="8,8,0,11"
                        Style="{StaticResource Button-Sketch}" Width="152" Content="Standard Reports">
                    <i:Interaction.Triggers>
                        <i:EventTrigger EventName="Click">
                            <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1_1"/>
                        </i:EventTrigger>
                    </i:Interaction.Triggers>
                </Button>
            </Grid>
        </UserControl>

    Now that the CoreNavigation Component Screen is done, I built out each of the others. In each of those screens I included the CoreNavigation screen (all those little green lines in the image) as the top navigation. In order to do that, as I created each of the pages I would hover over the CoreNavigation object in the Sketchflow Map. When the utilities drawer (the small menu that pops down under a node when you hover over it) shows, click on the third little icon and drag it onto the page node you want a navigation screen on.

    Once I created all the screens I set up the navigation by opening up each screen and right-clicking on the objects that needed to point to somewhere else in the prototype. Once I was done with the main page, my Home Navigation Page, it looked something like this in the Expression Blend designer.

    I fleshed out each of the additional screens. Once I was done I wanted to try out the deployment package.
    The way to deploy a Sketchflow prototype is to merely click on File –> Package SketchFlow Project and a prompt will appear. In the prompt enter what you want the package to be called. I like to see the files generated afterwards too, so I checked the box to see that. When Expression Blend is done generating everything you'll have a directory like the one shown below, with all the needed files for deployment.

    Now these files can be copied or moved to any location for viewing. One can even copy them (such as via FTP) to a server location to share with others. Once they are deployed and you run the "TestPage.html", the other features of the Sketchflow package are available.

    In the image below I have tagged a few sections to show the Sketchflow Player features. To the top left is the navigation, which provides a clearly defined area of movement in a list. To the center right is the actual prototype application. I have placed lists of things and made edits. On the left hand side is the highlight feature, which is available in the Feedback section of the lower left. On the right hand list I underlined the Autochart with an orange marker, and marked out two list items with a red marker.

    In the lower left hand side in the Feedback section is also an area to type in your feedback. This can be useful for time-based feedback, when you post this somewhere and want people to provide subsequent follow-up feedback.

    Overall lots of great features that enable some fairly rapid prototyping with customers. Once one is familiar with the steps and parts of the Sketchflow prototype capabilities it is easy to step through an application without even stopping. It really is that easy. So get hold of Expression Blend 3 and get ramped up on Sketchflow; it will pay off in the design phases to do so!

    Original Entry

    Read the article

  • Batch Best Practices and Technical Best Practices Updated

    - by ACShorten
    The Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) and Technical Best Practices for Oracle Utilities Application Framework Based Products (Doc Id: 560367.1) have been updated with new and revised advice for the various versions of the Oracle Utilities Application Framework based products. These documents cover the following products:

    Oracle Utilities Customer Care And Billing (V2 and above)
    Oracle Utilities Meter Data Management (V2 and above)
    Oracle Utilities Mobile Workforce Management (V2 and above)
    Oracle Utilities Smart Grid Gateway (V2 and above) – All editions
    Oracle Enterprise Taxation Management (all versions)
    Oracle Enterprise Taxation and Policy Management (all versions)

    Whilst there is new advice, some of which has been posted on this blog, a lot of sections have been updated with advice based upon feedback from customers, partners, consultants, our development teams and our hard-working Support personnel. All whitepapers are available from My Oracle Support.

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
    SOLUTIONS FOCUS: The Benefits of Oracle Exastack
    Paul Thompson, Director, Alliances and Solutions Partner Programs, Oracle EMEA Alliances & Channels

    Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners.

    In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments.

    Ready and Optimized

    Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems.

    Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance.
    Deliver higher value to customers

    Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster.

    Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit.

    Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack.

    Next steps

    Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized, at the first opportunity. That first step down the track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center, or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) is posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV specific enablement resources.
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • Data source does not support server-side data paging using ASP.NET C#

    - by Aamir Hasan
    Yesterday someone mailed me to ask about the error "data source does not support server-side data paging", so I am writing up the solution here. If you have hit this problem, read this article and look at the example code; it will help you a lot. The only change you have to make is in DataBind(). Here you had used a SqlDataReader to read the data retrieved from the database, but a SqlDataReader is forward-only: you cannot traverse back and forth on it. The solution is to use a DataAdapter and a DataSet instead, so your function changes to something like this:

        private void DataBind()
        {
            // for the grid view
            SqlCommand cmdO;
            string SQL = "select * from TABLE ";
            conn.Open();
            cmdO = new SqlCommand(SQL, conn);
            SqlDataAdapter da = new SqlDataAdapter(cmdO);
            DataSet ds = new DataSet();
            da.Fill(ds);
            GridView1.Visible = true;
            GridView1.DataSource = ds;
            GridView1.DataBind();
            ds.Dispose();
            da.Dispose();
            conn.Close();
        }

    This surely works. The rest of your code is fine. Enjoy coding.

    Read the article

  • The Dark Knight and Team Fortress 2 Mashup Movie Trailer [Video]

    - by Asian Angel
    If you are a Batman fan then you will find this video mashup interesting to watch. YouTube user TrueOneMoreUser has created a unique Dark Knight Trailer using characters from Team Fortress 2 and select scenes from The Dark Knight movie itself. Team Fortress 2 – The Demo Knight [via Geeks are Sexy]

    Read the article

  • This Week In Geek History: The Hitchhiker’s Guide, Compact Discs, and Whirlwind Foreshadows Operating Systems

    - by Jason Fitzpatrick
    Every week we look at fascinating facts and trivia from the history of Geekdom. This week we're taking a look at The Hitchhiker's Guide to the Galaxy, Compact Discs, and Whirlwind, the first computer to foreshadow modern operating systems.

    Read the article

  • DIY Super Mario “Kite” Lights Up the Sky [Video]

    - by Jason Fitzpatrick
    Throw some LEDs in helium balloons, string them together in a pixel-style grid, and you've got yourself a massive and glowing 8-bit sprite (in this case, a giant Super Mario). Read on to watch the video and see how you can build your own. Check out the video notes for more information on constructing it, or hit up the link below for more projects by Mark Rober. Mark Rober's Project Blog [Make]

    Read the article
