Search Results

Search found 4567 results on 183 pages for 'gae models'.

Page 144 of 183

  • WPF Binding to a viewmodel

    - by user832747
    I'm binding a WPF DataGridTextColumn to my row view models: <DataGridTextColumn Header="Name" Binding="{Binding Name}" /> The Name property has a PRIVATE setter: public string Name { get { return _name; } private set { _name = value; } } Shouldn't the DataGrid prevent the binding from using the private setter? The grid still lets me edit the value. I could swear it never used to, unless I'm forgetting something.
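
    For reference, one way to make the read-only intent explicit, whatever the binding engine does with the non-public setter, is to mark the column read-only or force a one-way binding. This is only a sketch of a workaround, not an explanation of the behaviour described above:

        <!-- Either setting on its own keeps the grid from pushing edits back into Name. -->
        <DataGridTextColumn Header="Name"
                            Binding="{Binding Name, Mode=OneWay}"
                            IsReadOnly="True" />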

    Read the article

  • IEnumerable<SelectListItem> error question

    - by user281180
    I have the following code, but I'm getting this error: "Error 6: foreach statement cannot operate on variables of type 'int' because 'int' does not contain a public definition for 'GetEnumerator'" (C:\Dev\DEV\Code\MvcUI\Models\MappingModel.cs, line 100). How can I solve this? Note: string[] projectID; class Employee { int id {get; set;} string Name {get;set;} } public IEnumerable<SelectListItem> GetStudents() { List<SelectListItem> result = new List<SelectListItem>(); foreach (var id in Convert.ToInt32(projectID)) { foreach (Employee emp in Project.Load(id)) result.Add(new SelectListItem { Selected = false, Text = emp.ID.ToString(), Value = emp.Name }); return result.AsEnumerable(); } }
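
    The compile error comes from passing the whole projectID array to Convert.ToInt32() and then trying to foreach over the resulting int. A minimal sketch of one possible fix, assuming projectID holds numeric strings and Project.Load(id) returns a collection of Employee objects as in the post, converts each element individually and returns once, after both loops:

        public IEnumerable<SelectListItem> GetStudents()
        {
            var result = new List<SelectListItem>();
            foreach (string idString in projectID)        // iterate over the string array itself
            {
                int id = int.Parse(idString);             // convert one element at a time
                foreach (Employee emp in Project.Load(id))
                {
                    result.Add(new SelectListItem
                    {
                        Selected = false,
                        Text = emp.Name,                  // display text (swapped from the original snippet;
                        Value = emp.ID.ToString()         //  keep the original order if the id really is the label)
                    });
                }
            }
            return result;                                // return after the loops complete
        }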

    Read the article

  • rails - REST or create another method

    - by user1304740
    Let's assume we have two models linked with a one-to-many relationship (like clients and invoices - a client can have many invoices). In a view of a 'client' (let's say the 'show' view), there is a form to capture an 'invoice'. I found two approaches: (1) the form is handled by the 'invoices' controller's create action, with client_id passed as a parameter; (2) the form is handled by a new method in the 'clients' controller, probably a PUT route defined in routes.rb. Is there a 'Rails way', or are both of them fine? Is there a preferred way? Thanks!
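
    For reference, the first approach maps onto Rails nested resources. A minimal sketch follows; the redirects and flash messages are assumptions, and it uses pre-strong-parameters mass assignment to match the Rails 3 era of the question:

        # config/routes.rb
        resources :clients do
          resources :invoices, :only => :create
        end

        # app/controllers/invoices_controller.rb
        class InvoicesController < ApplicationController
          def create
            client  = Client.find(params[:client_id])           # supplied by the nested route
            invoice = client.invoices.build(params[:invoice])    # ties the invoice to its client
            if invoice.save
              redirect_to client, :notice => "Invoice created."
            else
              redirect_to client, :alert => "Invoice could not be saved."
            end
          end
        end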

    Read the article

  • php calling classes functions in separate pages

    - by sys_debug
    I've worked with J2EE recently and like the idea of struts.xml, where I can handle redirection to pages based on the string returned from action classes. In PHP, on my new site (still under development), I am trying to follow MVC conventions without using an off-the-shelf MVC framework. So I have created the controllers, models and views (empty for now). The one thing I am stuck on: when I submit the form in a view (insert_product.php), I would need to create yet another PHP page just to handle the POST data and pass it to the controller. Is there any way to avoid creating those pages and have something like struts.xml instead? Even being able to post data directly to a controller class would be good. Thanks.
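
    The usual answer is a single front controller that every form posts to, which then dispatches to the right controller class, roughly the role struts.xml plays for Struts. A minimal sketch, where every file and class name is made up for illustration:

        <?php
        // index.php -- the single entry point; the insert_product view's form would
        // post to index.php?controller=product&action=insert
        $controller = isset($_GET['controller']) ? $_GET['controller'] : 'product';
        $action     = isset($_GET['action'])     ? $_GET['action']     : 'index';

        $class = ucfirst($controller) . 'Controller';      // e.g. ProductController
        require_once 'controllers/' . $class . '.php';     // controllers/ProductController.php

        $instance = new $class();
        $instance->$action($_POST);                        // hand the posted data straight to the action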

    Read the article

  • Rails: updating an item along with associated items

    - by shmichael
    Suppose I have a simple to-do list application with two models: lists (which have an owner and a description) and items (which have a name and a due date) that belong to a specific list. I would like a single edit screen for a list on which I can update the list attributes (such as the description) and also create/delete/modify the associated items. There should be a single "save" button that commits all changes; unless save is pressed, any change to the list and its items should be forgotten. I wasn't able to find an elegant best practice for this. I would greatly appreciate any suggestions and/or references to existing implementations.
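
    For reference, the closest built-in Rails mechanism is nested attributes: the list's edit form renders fields for its items, and a single update saves the list and its items together in one transaction. A sketch only; model and attribute names are assumed:

        # app/models/list.rb
        class List < ActiveRecord::Base
          has_many :items, :dependent => :destroy
          accepts_nested_attributes_for :items, :allow_destroy => true
        end

        # In the edit view, fields_for :items renders inputs for every item, and the
        # controller's single call to @list.update_attributes(params[:list]) creates,
        # updates and (via _destroy flags) deletes items together with the list.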

    Read the article

  • Rails 3 many-to-many query on includes or joins

    - by Myat
    I have three models: User, Activity and ActivityRecord. class User < ActiveRecord::Base # Include default devise modules. Others available are: # :token_authenticatable, :confirmable, # :lockable, :timeoutable and :omniauthable devise :database_authenticatable, :registerable, :recoverable, :rememberable, :trackable, :validatable # Setup accessible (or protected) attributes for your model attr_accessible :first_name, :last_name, :email, :gender, :password, :password_confirmation, :remember_me # attr_accessible :title, :body has_many :activities has_many :activity_records, :through => :activities end class Activity < ActiveRecord::Base attr_accessible :point, :title belongs_to :user has_many :activity_records end class ActivityRecord < ActiveRecord::Base attr_accessible :activity_id belongs_to :activity scope :today, lambda { where("DATE(#{'activity_records'}.created_at) = '#{Date.today.to_s(:db)}'") } end I would like to query all activities for a user together with the count of their respective activity records for today. For example, after querying and converting to JSON, I would like to have something like this: [ { id: 23, title: "jogging", point: "5", today_activity_records_count: 1 }, { id: 12, title: "diet dinner", point: "2", today_activity_records_count: 0 } ] Please guide me on how I can achieve that. Thanks
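
    One possible approach, shown here only as a sketch (it assumes a Devise-style current_user and Rails 3 query chaining), is to left-join only today's records, group by activity, and select the count under the alias wanted in the JSON:

        activities = current_user.activities.
          select("activities.*, COUNT(activity_records.id) AS today_activity_records_count").
          joins("LEFT OUTER JOIN activity_records
                   ON activity_records.activity_id = activities.id
                  AND DATE(activity_records.created_at) = '#{Date.today.to_s(:db)}'").
          group("activities.id")

        # The aliased count comes back as an ordinary attribute on each Activity,
        # so it is included when the relation is serialized:
        activities.to_json

    Depending on the database, the GROUP BY may need to list every selected column, and the count may arrive as a string that needs to_i.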

    Read the article

  • Proper way to use before_create

    - by ruevaughn
    Pretty basic question here: I need a before_create callback on my Category model to ensure that the depth never exceeds 2. Here is what I have so far in app/models/category.rb: before_create :check_depth def check_depth self.depth = 1 if depth > 2 end Instead of resetting depth to 1, I want it to return an error message, but I can't even get the current setup to work - I get the error undefined method `>' for nil:NilClass. So, instead of setting the depth to 1 as I'm trying to do, how would I add an error instead? And, for informational purposes, any help getting the current callback working? Thanks in advance.
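
    A validation, rather than a before_create callback, is the usual way to reject the record with an error message, and it also sidesteps the nil comparison. A sketch, assuming depth is an integer column that may be nil on new records:

        # app/models/category.rb
        class Category < ActiveRecord::Base
          # Refuse to save instead of silently resetting the value.
          validates :depth, :numericality => { :less_than_or_equal_to => 2 },
                            :allow_nil => true   # nil depth no longer raises NoMethodError
        end

    With this in place, category.save returns false and category.errors[:depth] carries the message whenever the depth is greater than 2.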

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging which looks like this: //>>excludeStart(“build”, true); system.debug(true); //>>excludeEnd(“build”); This code enables debugging for your Durandal app which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes: (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas). The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three page: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id");   The route for movie details includes a route parameter named id. Later, we will use the id parameter to lookup and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I’ll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here’s what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different age to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. 
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that Durandal leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These existing libraries are powerful libraries and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app’s code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App

    Read the article

  • JavaScript Data Binding Frameworks

    - by dwahlin
    Data binding is where it’s at nowadays when it comes to building client-centric Web applications. Developers experienced with desktop frameworks like WPF or web frameworks like ASP.NET, Silverlight, or others are used to being able to take model objects containing data and bind them to UI controls quickly and easily. When moving to client-side Web development, the data binding story hasn’t been great, since neither HTML nor JavaScript natively supports data binding. This means that you have to write code to place data in a control and write code to extract it. Although it’s certainly feasible to do it from scratch (many of us have done it this way for years), it’s definitely tedious and not exactly the best solution when it comes to maintenance and re-use. Over the last few years several different script libraries have been released to simplify the process of binding data to HTML controls. In fact, the subject of data binding is becoming so popular that it seems like a new script library is being released nearly every week. Many of the libraries provide MVC/MVVM pattern support in client-side JavaScript apps and some even integrate directly with server frameworks like Node.js. Here’s a quick list of a few of the available libraries that support data binding (if you like any others, please add a comment and I’ll try to keep the list updated): AngularJS MVC framework for data binding (although it closely follows the MVVM pattern). Backbone.js MVC framework with support for models, key/value binding, custom events, and more. Derby Provides a real-time environment that runs in the browser and in Node.js. The library supports data binding and templates. Ember Provides support for templates that automatically update as data changes. JsViews Data binding framework that provides “interactive data-driven views built on top of JsRender templates”. jQXB Expression Binder Lightweight jQuery plugin that supports bi-directional data binding. KnockoutJS MVVM framework with robust support for data binding. For an excellent look at using KnockoutJS, check out John Papa’s course on Pluralsight. Meteor End-to-end framework that uses Node.js on the server and provides support for data binding on the client. Simpli5 JavaScript framework that provides support for two-way data binding. WinRT with HTML5/JavaScript If you’re building Windows 8 applications using HTML5 and JavaScript, there’s built-in support for data binding in the WinJS library. I won’t have time to write about each of these frameworks, but in the next post I’m going to talk about my (current) favorite when it comes to client-side JavaScript data binding libraries, which is AngularJS. AngularJS provides an extremely clean way – in my opinion – to extend HTML syntax to support data binding while keeping model objects (the objects that hold the data) free from custom framework method calls or other weirdness. While I’m writing up the next post, feel free to visit the AngularJS developer guide if you’d like additional details about the API and want to get started using it.
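
    As a tiny illustration of what declarative binding buys you, here is a sketch using Knockout, one of the libraries listed above; the paragraph updates automatically as the bound input changes, with no hand-written DOM code (the script path is an assumption):

        <input data-bind="value: firstName" />
        <p data-bind="text: firstName"></p>

        <script src="knockout.js"></script>
        <script>
          // The view model holds the data; the data-bind attributes wire it to the markup.
          var viewModel = { firstName: ko.observable("Ada") };
          ko.applyBindings(viewModel);
        </script>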

    Read the article

  • Older SAS1 hardware Vs. newer SAS2 hardware

    - by user12620172
    I got a question today from someone asking about the older SAS1 hardware from over a year ago that we had on the older 7x10 series. They didn't leave an email address, so I couldn't respond directly, but I said this blog would be blunt, frank, and open, so I have no problem addressing it publicly. A quick history lesson here: When Sun first put out the 7x10 family hardware, the 7410 and 7310 used a SAS1 backend connection to a JBOD that had SATA drives in it. This JBOD was not manufactured by Sun, nor did Sun own the IP for it. Now, when Oracle took over, they had a problem with that, and I really can’t blame them. The decision was made to cut off that JBOD and its manufacturer completely and use our own, where Oracle controlled both the IP and the manufacturing. So in the summer of 2010 the cut was made, and the 7410 and 7310 had a hardware refresh and now had a SAS2 backend going to a SAS2 JBOD with SAS2 drives instead of SATA. This new hardware had two big advantages. First, there was a nice performance increase, mostly due to the faster backend. Even better, the SAS2 interface on the drives allowed for a MUCH faster failover between cluster heads, as the SATA drives were the bottleneck on the older hardware. In September of 2010 there was a major refresh of the rest of the 7000 hardware, the controllers and the other family members, and that’s where we got today’s current line-up of the 7x20 series. So the 7x20 has always used the new trays, and the 7410 and 7310 have used the new SAS2 trays since July of 2010. Now for the bad news. People who have a 7410 or 7310 from BEFORE the July 2010 cutoff have the models with SAS1 HBAs in them to connect to the older SAS1 trays. Remember, that manufacturer cut all ties with us and stopped making the JBOD, so there’s just no way to get more of them, as they don’t exist. There are some options, however. Oracle support does support taking out the SAS1 HBAs in the old 7410 and 7310 and putting in newer SAS2 HBAs which can talk to the new trays. Hey, I didn’t say it was a great option, I just said it’s an option. I fully realize that you would then have a SAS1 JBOD full of SATA drives that you could no longer connect. I do know a client that did this, took the SAS1 JBOD, connected it to another server, formatted the drives, and is using it as a plain, non-7000 JBOD. This is not supported by Oracle support. The other option is to just keep it as-is, as it works just fine, but you can’t expand it. Then you can get a newer 7x20 series and use the built-in ZFSSA replication feature to move the data over. Now you can use the newer one for your production data and use the older one for DR, snaps and clones.

    Read the article

  • Learn Cloud Computing – It’s Time

    - by Ben Griswold
    Last week, I gave an in-house presentation on cloud computing.  I walked through an overview of cloud computing – characteristics (on demand, elastic, fully managed by provider), why are we interested (virtualization, distributed computing, increased access to high-speed internet, weak economy), various types (public, private, virtual private cloud) and services models (IaaS, PaaS, SaaS.)  Though numerous providers have emerged in the cloud computing space, the presentation focused on Amazon, Google and Microsoft offerings and provided an overview of their platforms, costs, data tier technologies, management and security.  One of the biggest talking points was why developers should consider the cloud as part of their deployment strategy: You only have to pay for what you consume You will be well-positioned for one time event provisioning You will reap the benefits of automated growth and scalable technologies For the record: having deployed dozens of applications on various platforms over the years, pricing tends to be the biggest customer concern.  Yes, scalability is a customer consideration, too, but it comes in distant second.  Boy do I hope you’re still reading… You may be thinking, “Cloud computing is well and good and it sounds catchy, but should I bother?  After all, it’s just another technology bundle which I’m supposed to ramp up on because it’s the latest thing, right?”  Well, my clients used to be 100% reliant upon me to find adequate hosting for them.  Now I find they are often aware of cloud services and some come to me with the “possibility” that deploying to the cloud is the best solution for them.  It’s like the patient who walks into the doctor’s office with their diagnosis and treatment already in mind thanks to the handful of Internet searches they performed earlier that day.  You know what?  The customer may be correct about the cloud. It may be a perfect fit for their app.  But maybe not…  I don’t think there’s a need to learn about every technical thing under the sun, but if you are responsible for identifying hosting solutions for your customers, it is time to get up to speed on cloud computing and the various offerings (if you haven’t already.)  Here are a few references to get you going: DZone Refcardz #82 Getting Started with Cloud Computing by Daniel Rubio Wikipedia Cloud Computing – What is it? Amazon Machine Images (AMI) Google App Engine SDK Azure SDK EC2 Spot Pricing Google App Engine Team Blog Amazon EC2 Team Blog Microsoft Azure Team Blog Amazon EC2 – Cost Calculator Google App Engine – Cost and Billing Resources Microsoft Azure – Cost Calculator Larry Ellison has stated that cloud computing has been defined as "everything that we currently do" and that it will have no effect except to "change the wording on some of our ads" Oracle launches worldwide cloud-computing tour NoSQL Movement  

    Read the article

  • Force.com presents Database.com SQL Azure/Amazon RDS unfazed

    - by Sarang
    At the DreamForce 2010 event in San Francisco, Force.com unveiled the next big thing in its Fat SaaS portfolio: "Database.com". I am still wondering how much they would've shelled out for that domain name. Now why would an already established SaaS player foray into a key building block like the database? Potentially allowing enterprises to build apps that do not utilize the Force.com stack! One key reason is being seen as the Fat SaaS player with every trick in the SaaS space under its belt. You want CRM? Come hither. Want a custom development PaaS-like solution? Welcome home (VMForce). Want all your apps to talk to a cloud DB and minimize latency by having it reside closer to your cloud apps? You've come to the right place, sire! The other is potentially killing the foray of smaller DB players like Oracle (not surprisingly, the Database.com offering is a highly customized and scalable Oracle database) into the lucrative SaaS DB marketplace. The feature set promised looks great out of the box for someone who likes to visualize cool new architectures. The ground realities are certainly going to be a lot different, considering the SOAP/REST-style access patterns in lieu of the comfortable old shoe of SQL. Microsoft suffered heavily with its SDS (SQL Data Services) offering in early 2009 and had to pull the plug on the product, only to reintroduce it as a simple SQL Server in the cloud, SQL Windows Azure. Though MSFT is playing it cool by providing OData semantics to work with SQL Windows Azure, satisfying at least some needs of web-style access to a DB. The other features, like social data models including profiles, status updates and feeds, seem interesting as well (although I believe social is just one of the aspects of large-scale collaborative computing). All these features start "free" for devs, which is good news, but the good news stops here. The overall pricing model of $ per user per transaction per month is highly disproportionate compared to Amazon RDS (based on MySQL) or SQL Windows Azure (based on MSSQL). Roger Jennings of Oakleaf did an interesting comparison based on 3, 10, 100 and 500 users, and it turns out that Database.com, going by the current understanding, is way too expensive for the services on offer. The offering may not impact the decision for DotNet shops mulling their cloud strategy, or even some Java/MySQL shops thinking about Amazon RDS; however, for enterprises that have already invested in other Force.com offerings, this could be a very important piece in the cloud strategy jigsaw: one which would address a key cloud DB issue, "latency", for them; at the least it will help to have the DB in the neighborhood. The tooling and "SQL-like" access provider drivers (think ODBC/JDBC) will be available later this year. Progress Software has already announced their JDBC driver stack for Database.com. It remains to be seen how effective the overall solution proves to be in the longer run, but for starters it is an important decision towards consolidating Force.com's already strong positioning in the SaaS space. As always, contrasting views are welcome! :)

    Read the article

  • SIM to OIM Migration: A How-to Guide to Avoid Costly Mistakes (SDG Corporation)

    - by Darin Pendergraft
    In the fall of 2012, Oracle launched a major upgrade to its IDM portfolio: the 11gR2 release.  11gR2 had four major focus areas: More simplified and customizable user experience Support for cloud, mobile, and social applications Extreme scalability Clear upgrade path For SUN migration customers, it is critical to develop and execute a clearly defined plan prior to beginning this process.  The plan should include initiation and discovery, assessment and analysis, future state architecture, review and collaboration, and gap analysis.  To help better understand your upgrade choices, SDG, an Oracle partner has developed a series of three whitepapers focused on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration. In the second of this series on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration, Santosh Kumar Singh from SDG  discusses the proper steps that should be taken during the planning-to-post implementation phases to ensure a smooth transition from SIM to OIM. Read the whitepaper for Part 2: Download Part 2 from SDGC.com In the last of this series of white papers, Santosh will talk about Identity and Access Management best practices and how these need to be considered when going through with an OIM migration. If you have not taken the opportunity, please read the first in this series which discusses the Migration Approach, Methodology, and Tools for you to consider when planning a migration from SIM to OIM. Read the white paper for part 1: Download Part 1 from SDGC.com About the Author: Santosh Kumar Singh Identity and Access Management (IAM) Practice Leader Santosh, in his capacity as SDG Identity and Access Management (IAM) Practice Leader, has direct senior management responsibility for the firm's strategy, planning, competency building, and engagement deliverance for this Practice. He brings over 12+ years of extensive IT, business, and project management and delivery experience, primarily within enterprise directory, single sign-on (SSO) application, and federated identity services, provisioning solutions, role and password management, and security audit and enterprise blueprint. Santosh possesses strong architecture and implementation expertise in all areas within these technologies and has repeatedly lead teams in successfully deploying complex technical solutions. About SDG: SDG Corporation empowers forward thinking companies to strategize their future, realize their vision, and minimize their IT risk. SDG distinguishes itself by offering flexible business models to fit their clients’ needs; faster time-to-market with its pre-built solutions and frameworks; a broad-based foundation of domain experts, and deep program management expertise. (www.sdgc.com)

    Read the article

  • Ubuntu Software Center starts, then crashes before fully loaded [closed]

    - by Nathan Weisser
    Possible Duplicate: Software center not opening I am brand new to Linux and Ubuntu, and I couldn't install GIMP without the software center. I looked up earlier how to fix it, and it said to fix my sources list, and I did, but now i get a new error in the terminal. 2012-08-14 15:29:08,941 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2012-08-14 15:29:08,954 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True 2012-08-14 15:29:09,407 - softwarecenter.ui.gtk3.app - INFO - building local database 2012-08-14 15:29:09,408 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() 2012-08-14 15:29:17,308 - softwarecenter.db.update - WARNING - Problem creating rebuild path '/var/cache/software-center/xapian_rb'. 2012-08-14 15:29:17,309 - softwarecenter.db.update - WARNING - Please check you have the relevant permissions. 2012-08-14 15:29:17,309 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True 2012-08-14 15:29:18,039 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file 2012-08-14 15:29:18,431 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None. 2012-08-14 15:29:19,153 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() Traceback (most recent call last): File "/usr/bin/software-center", line 176, in <module> app.run(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1422, in run self.show_available_packages(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1352, in show_available_packages self.view_manager.set_active_view(ViewPages.AVAILABLE) File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 154, in set_active_view view_widget.init_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 136, in init_view SoftwarePane.init_view(self) File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/softwarepane.py", line 215, in init_view self.icons, self.show_ratings) File "/usr/share/software-center/softwarecenter/ui/gtk3/views/appview.py", line 69, in __init__ self.helper = AppPropertiesHelper(db, cache, icons) File "/usr/share/software-center/softwarecenter/ui/gtk3/models/appstore2.py", line 109, in __init__ softwarecenter.paths.APP_INSTALL_PATH) File "/usr/share/software-center/softwarecenter/db/categories.py", line 255, in parse_applications_menu category = self._parse_menu_tag(child) File "/usr/share/software-center/softwarecenter/db/categories.py", line 444, in _parse_menu_tag query = self._parse_include_tag(element) File "/usr/share/software-center/softwarecenter/db/categories.py", line 402, in _parse_include_tag xapian.Query.OP_AND) File "/usr/share/software-center/softwarecenter/db/categories.py", line 341, in _parse_and_or_not_tag operator_elem, xapian.Query(), xapian.Query.OP_OR) File "/usr/share/software-center/softwarecenter/db/categories.py", line 385, in _parse_and_or_not_tag q = self.db.xapian_parser.parse_query(s, File "/usr/share/software-center/softwarecenter/db/database.py", line 174, in xapian_parser xapian_parser = self._get_new_xapian_parser() File "/usr/share/software-center/softwarecenter/db/database.py", line 200, in _get_new_xapian_parser xapian_parser.set_database(self.xapiandb) File "/usr/share/software-center/softwarecenter/db/database.py", line 166, in xapiandb self._db_per_thread[thread_name] = self._get_new_xapiandb() File 
"/usr/share/software-center/softwarecenter/db/database.py", line 179, in _get_new_xapiandb xapiandb = xapian.Database(self._db_pathname) File "/usr/lib/python2.7/dist-packages/xapian/__init__.py", line 3666, in __init__ _xapian.Database_swiginit(self,_xapian.new_Database(*args)) xapian.DatabaseOpeningError: Couldn't detect type of database I'm not sure how to fix the errors, and I couldn't find a topic on them anywhere. Be nice, because I am a two-day old Linux user :/ Tell me if you need my Sources list

    Read the article

  • links for 2010-04-28

    - by Bob Rhubart
    Guido Schmutz: Oracle BPM11g available! Oracle ACE Director Guido Schmutz shares his impressions after attending a hands-on workshop conducted by Masons of SOA member Clemens Utschig-Utschig. (tags: oracle otn oracleace bpm soa soasuite) Elena Zannoni : 2010 Collaboration Summit Impressions Elena Zannoni has collected her thoughts on #C10 and shares them in this great blog post. (tags: oracle otn linux architecture collaborate2010) Hajo Normann: BPMN 2.0 in Oracle BPM Suite: The future of BPM starts now "The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends." -- Oracle ACE Director and Masons of SOA member Hajo Normann (tags: oracle otn oracleace bpm soa sca) Brain Dirking: AIIM Best Practice Awards to Two Oracle Customers Brian Dirking's great write-up of the AIIM Awards Banquet, at which the Bureau of Indian Affairs and the Charles Town Police Department were among the winners of the 2010 Carl E. Nelson Best Practices Awards. (tags: oracle otn aiim bpm ecm enterprise2.0) Mark Wilcox: Upcoming Directory Services Live Webcast - Improve Time-to-Market and Reduce Cost with Oracle Directory Services Live Webcast: Improve Time-to-Market and Reduce Cost with Oracle Directory Services Event Date: Thursday, May 27, 2010 Event Time: 10:00 AM Pacific Standard Time / 1:00 Eastern Standard Time (tags: oracle otn webcast security identitymanagement) Celine Beck: Introducing AutoVue Document Print Service Celine Beck offers a detailed overview of Oracle AutoVue. (tags: oracle otn enatarch visualization printing) Vikas Jain: What's new in OWSM 11gR1 PS2 (11.1.1.3.0) ? Vikas Jain shares links to resources relevant to the recently releases patch set for Oracle Web Services Manager 11gR1. (tags: oracle otn soa webservices oswm) @theovanarem: Oracle SOA Suite 11g Release 1 Patch Set 2 Theo Van Arem shares links to several resources relevant to the release of the latest patch set for Oracle SOA Suite 11g. (tags: oracle otn soa soasuite middleware) @vambenepe: Analyzing the VMforce announcement "The new thing is that force.com now supports an additional runtime, in addition to Apex. That new runtime uses the Java language, with the constraint that it is used via the Spring framework. Which is familiar territory to many developers. That’s it." -- William Vambenepe (tags: oracle otn cloud paas)

    Read the article

  • Innovation, Adaptability and Agility Emerge As Common Themes at ACORD LOMA Insurance Forum

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance is blogging from the show floor of the ACORD LOMA Insurance Forum this week. Sessions at the ACORD LOMA Insurance Forum this week highlighted the need for insurance companies to think creatively and be innovative with their technology in order to adapt to continuously shifting market dynamics and drive business efficiency and agility.  LOMA President & CEO Robert Kerzner kicked off the day on Tuesday, citing how the recent downtown and recovery has impacted the insurance industry and the ways that companies are doing business.  He encouraged carriers to look for new ways to deliver solutions and offer a better service experience for consumers.  ACORD President & CEO Gregory Maciag reinforced Kerzner's remarks, noting how the industry's approach to technology and development of industry standards has evolved over the association's 40-year history and cited how the continued rise of mobile computing will change the way many carriers are doing business today and in the future. Drawing from his own experiences, popular keynote speaker and Apple Co-Founder Steve Wozniak continued this theme, delving into ways that insurers can unite business with technology.  "iWoz" encouraged insurers to foster an entrepreneurial mindset in a corporate environment to create a culture of creativity and innovation.  He noted that true innovation in business comes from those who have a passion for what they do.  Innovation was also a common theme in several sessions throughout the day with topics ranging from modernization of core systems, automated underwriting, distribution management, CRM and customer communications management.  It was evident that insurers have begun to move past the "old school" processes and systems that constrain agility, implementing new process models and modern technology to become nimble and more adaptive to the market.   Oracle Insurance executives shared a few examples of how insurers are achieving innovation during our Platinum Sponsor session, "Adaptive System Transformation:  Making Agility More Than a Buzzword." Oracle Insurance Senior Vice President and General Manager Don Russo was joined by Chuck Johnston, vice president, global strategy and alliances, and Srini Venkatasantham, vice president of product strategy.  The three shared how Oracle's adaptive solutions for insurance, with a focus on how the key pillars of an adaptive systems - configurable applications, accessible information, extensible content and flexible process - have helped insurers respond rapidly, perform effectively and win more business. Insurers looking to innovate their business with adaptive insurance solutions including policy administration, business intelligence, enterprise document automation, rating and underwriting, claims, CRM and more stopped by the Oracle Insurance booth on the exhibit floor.  It was a premiere destination for many participating in the exhibit hall tours conducted throughout the day. Finally, red was definitely the color of the evening at the Oracle Insurance "Red Hot" customer celebration at the House of Blues. The event provided a great opportunity for our customers to come together and network with the Oracle Insurance team and their peers in the industry.  We look forward to visiting more with of our customers and making new connections today. Helen Pitts is senior product marketing manager for Oracle Insurance. 

    Read the article

  • Silverlight Cream for March 10, 2010 - 2 -- #811

    - by Dave Campbell
    In this Issue: AfricanGeek, Phil Middlemiss, Damon Payne, David Anson, Jesse Liberty, Jeremy Likness, Jobi Joy(-2-), Fredrik Normén, Bobby Diaz, and Mike Taulty(-2-). Shoutouts: Shawn Wildermuth blogged that they posted My "What's New in Silverlight 3" Video from 0reDev Last Fall Shawn Wildermuth also has a post up for his loyal followers: Where to See Me At MIX10 Jonas Follesø has presentation materials up as well: MVVM presentation from NDC2009 on Vimeo Adam Kinney updated his Favorite Tool and Library Downloads for Silverlight From SilverlightCream.com: Styling Silverlight ListBox with Blend 3 In his latest Video Tutorial, AfricanGeek is animating the ListBox control by way of Expression Blend 3. Animating the Silverlight Opacity Mask Phil Middlemiss has written a Behavior that lets you turn a FrameworkElement into an opacity mask for it's parent container... check out his tutorial and grab the code. AddRange for ObservableCollection in Silverlight 3 Damon Payne has a post up discussing the problem with large amounts of data in an ObservableCollection, and how using AddRange is a performance booster. Easily rotate the axis labels of a Silverlight/WPF Toolkit chart David Anson blogged a solution to rotating the axis labels of a Silverlight and WPF chart. Persisting the Configuration (Updated) Jesse Liberty has a good discussion on the continuation of his HyperVideo Platform talking about what all he is needing from the database in the form of configuration information... including the relationships. Animations and View Models: IAnimationDelegate Check out Jeremy Likness' IAnimationDelegate that lets your ViewModel fire and respond to animations without having to know all about them. Button Style - Silverlight Jobi Joy converted a WPF control template into Silverlight... and you'll want to download the XAML he's got for this :) A Simple Accordion banner using ListBox Jobi Joy also has an Image Accordian created in Expression Blend... and it's a 'drop this XAML in your User Control' kinda thing... again, go grab the XAML :) WCF RIA Services Silverlight Business Application – Using ASP.NET SiteMap for Navigation Fredrik Normén has a code-laden post up on RIA Services and the ASP.NET SiteMap. He is using the Silverlight Business app template that comes with WCF RIA Services. A Simple, Selectable Silverlight TextBlock (sort of)... Bobby Diaz shares with us his solution for a Text control that can be copied from in the same manner 'normal' web controls can be. He also includes a link to another post on the same topic. Silverlight 4 Beta Networking. Part 11 - WCF and TCP Mike Taulty has another pair of video tutorials up in his Networking series. This one is on WCF over TCP Silverlight 4 Beta Networking. Part 12 - WCF and Polling HTTP Mike Taulty's 12th networking video tutorial is on WCF with HTTP polling duplex. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • Oracle WebCenter: The Best of the Best

    - by kellsey.ruppel(at)oracle.com
    You may remember that the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter and delivering a Common User Experience Architecture.  Last week, we provided an overview of Oracle WebCenter, and this week, we'll focus on Convergence and how the new release of Oracle WebCenter is the Best of the Best..Our development team has been working very hard to bring all the best capabilities from each of the existing portal products into one modern user experience platform that provides a robust foundation for moving customers into the future.  Each of the development teams still maintain their existing products to support the current customers,  but they've been tasked with converging their unique best of breed features into the new WebCenter release so that it will meet the broadest set of use cases possible. For example, we've taken the fastest and most scalable portlet engine in the industry with Oracle WebLogic Portal, integrated it within WebCenter, and improved performance further, to deliver even more performance for our customers.  In addition, we've focused on extending the reach of all the different user experience resources so that customers can deliver robust capabilities into their existing portals, applications, composite applications, dashboards, mobile applications, really any channel that requires information.  And finally, we've combined a whole set of community and multi-site capabilities leveraging the pioneering capabilities of Plumtree portal directly into the new WebCenter release.  While at the same time we've built and delivered the new WebCenter release, we've also provided new feature releases of all the existing products.  In this way, customers can continue to gain value out of their existing investments while at the same time have the smoothest path to upgrading to the new WebCenter release. With the new WebCenter release, we are delivering a converged platform to address all portal requirements that have been delivered by different point products in our portal portfolio in the past. Additionally, this release delivers the most modern user experience that goes well beyond the experience the other portal products provided. This is because the new WebCenter release has been built from the ground up with modern technologies around rich clients, SOA, and customizations compared with other portal products whose architecture has been adapted to add capabilities like AJAX, personalization, and social computing.The new WebCenter release addresses the broadest set of use cases using single product set and single architecture spanning extranet sites to social communities. It helps customers manage, maintain and develop one technology set, but leverage it throughout their organization whether it's embedded in an application or a new destination for improved customer and employee productivity. Additionally, the new release of WebCenter leverages the best and most performant features of all the existing portfolio products to deliver the fastest and most scalable portal platform.  
Most importantly, it supports the broadest development models spanning from J2EE/Java to HTML/REST to .NET.Keep checking back this week as we provide additional resources and information on how the new release of Oracle WebCenter is the Best of the Best - converging all the best capabilities from each of the existing portal products into one modern user experience platform.

    Read the article

  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday’s blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – NoSQL. What is NoSQL? NoSQL stands for Not Relational SQL or Not Only SQL. Lots of people think that NoSQL means there is no SQL, which is not true – they sound the same but the meanings are totally different. NoSQL does use SQL, but it uses more than SQL to achieve its goal. As per Wikipedia’s NoSQL Database Definition – “A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases.” Why use NoSQL? A traditional relational database usually deals with predictable, structured data. As the world has moved forward with unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the form of SMS messages, wave files, photos and video. It is a bit difficult to manage this data using a traditional relational database. I often see people using BLOB fields to store such data. A BLOB can store the data, but when we have to retrieve or process it, the BLOB is extremely slow at handling unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with support for unstructured data, the other advantages of a NoSQL database are high performance and high availability. Eventual Consistency Additionally, note that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance. Though NoSQL databases do not support ACID, they provide eventual consistency. That means that over a long period of time all updates can be expected to propagate eventually through the system and the data will be consistent. Taxonomy Taxonomy is the practice of classifying things or concepts according to a set of principles. The NoSQL taxonomy covers column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples of each NoSQL category. Column: HBase, Cassandra, Accumulo. Document: MongoDB, Couchbase, Raven. Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m. Graph: Neo4j, Allegro, Virtuoso, Bigdata. As of now there are over 150 NoSQL databases, and you can read everything about them in this single link. Tomorrow In tomorrow’s blog post we will discuss the buzz word – Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
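
    As a tiny illustration of the document-store flavour listed above, here is a sketch using MongoDB through its Python driver (the local connection string is an assumption, and a MongoDB server must be running); note that the two records do not need to share the same fields:

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")   # assumed local server
        posts = client.demo.posts                            # database "demo", collection "posts"

        # Two documents with different shapes live in the same collection without a schema change.
        posts.insert_one({"type": "sms", "from": "+15550100", "body": "hello"})
        posts.insert_one({"type": "photo", "url": "http://example.com/a.jpg",
                          "tags": ["holiday", "beach"]})

        print(posts.count_documents({"type": "sms"}))        # query on whatever structure you care about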

    Read the article

  • Oracle AutoVue Key Highlights from Oracle OpenWorld 2012

    - by Celine Beck
    We closed another successful Oracle OpenWorld for AutoVue. Thanks to everyone who joined us this year. As usual, from customer presentations to evening networking activities, there was enough to keep us busy during the entire event. Here is a summary of some of the key highlights of the conference: Sessions: We had two AutoVue-specific sessions during Oracle OpenWorld this year. The first session was part of the Product Lifecycle Management track and covered how AutoVue can be used to help drive effective decision making and streamline design for manufacturing processes. Attendees had the opportunity to learn from customer speaker GLOBALFOUNDRIES how they have been leveraging Oracle AutoVue within Agile PLM to enable a high degree of collaboration during the exceptionally creative phases of their product development processes, securely, without risking valuable intellectual property. If you are interested, you can download the presentation by visiting launch.oracle.com/?plmopenworld2012. AutoVue was also featured as part of the Utilities track. This session focused on how visualization solutions play a critical role in the effective plant optimization and configuration strategies defined by owners and operators of power generation facilities. Attendees learnt how, integrated with document management systems and enterprise applications like Oracle Primavera and Asset Lifecycle Management, AutoVue improves change management processes; minimizes risks by providing access to accurate engineering drawings which capture and reflect the as-maintained status of assets; and allows customers to drive complex maintenance projects to successful completion. Augmented Business Visualization for Agile PLM: During Oracle OpenWorld, we also showcased an Augmented Business Visualization-based solution for Oracle Agile PLM. An Augmented Business Visualization (ABV) solution is one where your structured data (from Oracle Agile PLM, for instance) and your unstructured data (documents, designs, 3D models, etc.) come together to allow you to make better decisions (check out our blog posts on the topic: Augment the Value of Your Data (or Time to replace the “attach” button) and Context is Everything). Within Agile PLM, the idea is to support more effective decision-making by turning 3D assemblies into color-coded reports, and streamlining business processes like Engineering Change Management by enabling the automatic creation of engineering change requests in Agile PLM directly from documents being viewed in AutoVue. More on this coming soon... probably during the Oracle Value Chain Summit, to be held in San Francisco from Feb. 4-6, 2013! Mark your calendars and stay tuned for more information! And thanks again for joining us at Oracle OpenWorld!

    Read the article

  • Microsoft Technical Computing

    - by Daniel Moth
    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. the .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract: "… As we build the Technical Computing initiative, we will invest in three core areas: 1. Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly. 2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today’s modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud. 3. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. …" Our Parallel Computing Platform team is directly responsible for item #2, and we work very closely with the teams delivering items #1 and #3. At the same time as the exec email, our marketing team unveiled a website with interviews that I invite you to check out: Modeling the World. Comments about this post welcome at the original blog.

    Read the article

  • formula for replicating glTexGen in opengl es 2.0 glsl

    - by visualjc
    I also posted this on the main StackExchange, but this seems like a better place, so forgive me for the double post if it shows up twice. I have been trying for several hours to implement a GLSL replacement for glTexGen with GL_OBJECT_LINEAR for OpenGL ES 2.0. In desktop OpenGL's GLSL there is gl_TextureMatrix, which makes this easier, but that's not available in OpenGL ES 2.0 / OpenGL ES Shading Language 1.0. Several sites have mentioned that this should be "easy" to do in a GLSL vertex shader, but I just can not get it to work. My hunch is that I'm not setting the planes up correctly, or I'm missing something in my understanding. I've pored over the web, but most sites are talking about projected textures; I'm just looking to create UVs based on planar projection. The models are being built in Maya, have 50k polygons, and the modeler is using planar mapping, but Maya will not export the UVs, so I'm trying to figure this out. I've looked at the glTexGen man page information:

        g = p1*xo + p2*yo + p3*zo + p4*wo

    What is g? Is g the value of s in the texture2D call? I've looked at the site http://www.opengl.org/wiki/Mathematics_of_glTexGen and another site explains the same function as:

        coord = P1*X + P2*Y + P3*Z + P4*W

    I don't get how coord (a UV vec2 in my mind) can be equal to a dot product (a scalar value). Same problem I had before with "g". What do I set the plane to be? In my OpenGL C++ 3.0 code, I set it to [0, 0, 1, 0] (basically unit z) and glTexGen works great. I'm still missing something. My vertex shader looks basically like this (WVPMatrix is the World View Projection matrix, POSITION is the model vertex position):

        varying vec4 kOutBaseTCoord;

        void main()
        {
            gl_Position = WVPMatrix * vec4(POSITION, 1.0);

            vec4 sPlane = vec4(1.0, 0.0, 0.0, 0.0);
            vec4 tPlane = vec4(0.0, 1.0, 0.0, 0.0);
            vec4 rPlane = vec4(0.0, 0.0, 0.0, 0.0);
            vec4 qPlane = vec4(0.0, 0.0, 0.0, 0.0);

            kOutBaseTCoord.s = dot(vec4(POSITION, 1.0), sPlane);
            kOutBaseTCoord.t = dot(vec4(POSITION, 1.0), tPlane);
            //kOutBaseTCoord.r = dot(vec4(POSITION, 1.0), rPlane);
            //kOutBaseTCoord.q = dot(vec4(POSITION, 1.0), qPlane);
        }

    The fragment shader:

        precision mediump float;
        uniform sampler2D BaseSampler;
        varying mediump vec4 kOutBaseTCoord;

        void main()
        {
            //gl_FragColor = vec4(kOutBaseTCoord.st, 0.0, 1.0);
            gl_FragColor = texture2D(BaseSampler, kOutBaseTCoord.st);
        }

    I've tried texture2DProj in the fragment shader. Here are some of the other links I've looked up: http://www.gamedev.net/topic/407961-texgen-not-working-with-glsl-with-fixed-pipeline-is-ok/ Thank you in advance.
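
    For readers puzzling over the same man-page formula, here is a small numeric sketch of what GL_OBJECT_LINEAR texgen computes. It is plain Python/NumPy rather than shader code, and the plane and vertex values are invented for illustration; the point it demonstrates is that each texture-coordinate component (s, t, r, q) is its own dot product of the homogeneous object-space position with one plane vector, so "g" in the formula is a single scalar component, not the whole UV pair.

        import numpy as np

        # One plane per texture-coordinate component; the values are illustrative only.
        s_plane = np.array([1.0, 0.0, 0.0, 0.0])   # s follows object-space x
        t_plane = np.array([0.0, 0.0, 1.0, 0.0])   # t follows object-space z

        # Homogeneous object-space vertex position (xo, yo, zo, wo).
        vertex = np.array([2.5, 1.0, -4.0, 1.0])

        # "g" from the man page, evaluated once per component:
        s = float(np.dot(vertex, s_plane))   # p1*xo + p2*yo + p3*zo + p4*wo
        t = float(np.dot(vertex, t_plane))

        print((s, t))   # (2.5, -4.0): the planar UV for this vertex

    Whether those particular planes are right for the asker's model is a separate question; the sketch only shows the arithmetic that the fixed-function texgen path performs per component.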

    Read the article

  • Innovation and the Role of Social Media

    - by Brian Dirking
    A very interesting post by Andy Mulholland of CAP Gemini this week – “The CIO is trapped between the CEO wanting innovation and the CFO needing compliance” – made several good points: “A successful move in one area won’t be recognized and rapidly implemented in other areas to multiply the benefits, or worse unsuccessful ideas will get repeated adding to the cost and time wasted. That’s where the need to really address the combination of social networking, collaboration, knowledge management and business information is required.” Without communicating what works and what doesn’t, the innovations of our organization may be lost, and the failures repeated. That makes sense. If you liked Andy Mulholland’s blog post, you need to hear Howard Beader’s presentation at the Enterprise 2.0 Conference on innovation and the role of social media. (Howard will be speaking in the Market Leaders Session at 1 PM on Wednesday, June 22nd.) Some of the thoughts Howard will share include: • Innovation is more than just ideas, it’s getting ideas to market, and removing the obstacles that stand in the way • Innovation is about parallel processing – you can’t remove the obstacles one by one because you will get to market too late • Innovation can be about product innovation, but it can also be about process innovation This brings us to the second issue Andy raises: "..the need for integration with, and visibility of, processes to understand exactly how the enterprise functions and delivers on its policies…" Andy goes on to talk about this from the perspective of compliance and the CFO’s concerns. And it’s true: innovation can come both as product innovation and as internal process innovation. And process innovation can have as much impact as product innovation. New supply chain models can disrupt an industry overnight. Many people ignore process innovation as a benefit of social business, because it is perceived as a bottom-line rather than a top-line impact. But it can actually impact your top line by changing your entire business model. Oracle WebCenter sits at this crossroads between product innovation and process innovation, enabling you to drive go-to-market innovations through internal social media tools, removing obstacles in parallel, and also providing you deep insight into your processes so you can identify bottlenecks and realize whole new ways of doing business. Learn more about how at the Enterprise 2.0 Conference, where Oracle will be in booth #213 showing Oracle WebCenter and Oracle Fusion Applications.

    Read the article

  • Angry Bird Makers: Developers Love iOS Over Android To Make Money

    - by Gopinath
    These days the web is buzzing with Apple iOS vs Google Android debates. Recently Fortune predicted that Android is going to explode in 2011 and will surpass Apple’s iOS market share. Yes, Android is set to spread its wings across all kinds of devices – smartphones, TVs, set-top boxes, in-car entertainment devices, what not. Think of any device that requires an operating system, and Android can be used. On the other hand, iOS is only available on a select few Apple devices – iPods, iPhones and iPads. When it comes to the count of devices running a specific OS, Android will be far ahead of iOS, but when you consider the quality of devices and an ecosystem where businesses can make money, iOS seems to be the winner. That is what experts and analysts are saying. Here is an excerpt from an interview that Peter Vesterbacka, maker of the popular Angry Birds game, gave to the Tech N Marketing site. He says Apple will be the number one platform for a long time from a developer perspective, they have gotten so many things right. And they know what they are doing and they call the shots. Android is growing, but it’s also growing complexity at the same time. Device fragmentation not the issue, but rather the fragmentation of the ecosystem. So many different shops, so many different models. The carriers messing with the experience again. Open but not really open, a very Google centric ecosystem. And paid content just doesn’t work on Android. Peter says developers prefer iOS over Android, as it’s not very easy to make money on the Android Market. That’s why they released a free, ad-supported version of the Angry Birds game for Android devices. Free is the way to go with Android. Nobody has been successful selling content on Android. We will offer a way to remove the ads by paying for the app, but we don’t expect that to be a huge revenue stream. You can read the full interview here. cc image credit: flickr/johanl This article titled, Angry Bird Makers: Developers Love iOS Over Android To Make Money, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Can a Printer Print White?

    - by Jason Fitzpatrick
    The vast majority of the time we all print on white media: white paper, white cardstock, and other neutral white surfaces. But what about printing white? Can modern printers print white, and if not, why not? Read on as we explore color theory, printer design choices, and why white is the foundation of the printing process. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Image by Coiote O.; available as wallpaper here. The Question SuperUser reader Curious_Kid is, well, curious about printers. He writes: I was reading about different color models when this question hit my mind. Can the CMYK color model generate white? Printers use the CMYK color model. What will happen if I try to print a white colored image (a rabbit) on black paper with my printer? Will I get any image on the paper? Does the CMYK color model have room for white? The Answer SuperUser contributor Darth Android offers some insight into the CMYK process: You will not get anything on the paper with a basic CMYK inkjet or laser printer. CMYK color mixing is subtractive, meaning that it requires the base that is being colored to have all colors (i.e., white), so that it can create color variation through subtraction:

        White - Cyan - Yellow = Green
        White - Yellow - Magenta = Red
        White - Cyan - Magenta = Blue

    White is represented as 0 cyan, 0 yellow, 0 magenta, and 0 black – effectively, 0 ink for a printer that simply has those four cartridges. This works great when you have white media, as “printing no ink” simply leaves the white exposed, but as you can imagine, this doesn’t work for non-white media. If you don’t have a base color to subtract from (i.e., the base is black), then it doesn’t matter what you subtract from it; you still have black. [But], as others are pointing out, there are special printers which can operate in the CMYW color space, or otherwise have a white ink or toner. These can be used to print light colors on top of dark or otherwise non-white media. You might also find my answer to a different question about color spaces helpful or informative. Given that the majority of printer media in the world is white and printing pure white on non-white media is a specialty process, it’s no surprise that home and (most) commercial printers alike have no provision for it. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
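
    To make the subtractive arithmetic concrete, here is a rough sketch of the naive CMYK-to-RGB conversion described in the answer. It uses the textbook approximation (no printer ICC profile), and the paper parameter is an addition of this sketch, standing in for the light the substrate reflects, which is what the inks subtract from.

        # Naive CMYK -> RGB conversion; c, m, y, k are ink fractions in [0, 1].
        # "paper" models the light reflected by the substrate (default: white).
        def cmyk_to_rgb(c, m, y, k, paper=(255, 255, 255)):
            pr, pg, pb = paper
            r = pr * (1 - c) * (1 - k)
            g = pg * (1 - m) * (1 - k)
            b = pb * (1 - y) * (1 - k)
            return round(r), round(g), round(b)

        print(cmyk_to_rgb(0, 0, 0, 0))                   # (255, 255, 255): zero ink reads as white only because the paper is white
        print(cmyk_to_rgb(0, 0, 0, 0, paper=(0, 0, 0)))  # (0, 0, 0): zero ink on black paper is still black
        print(cmyk_to_rgb(1, 0, 1, 0))                   # (0, 255, 0): white minus cyan minus yellow leaves green

    The second call is the whole point of the answer: with nothing to subtract from, no combination of C, M, Y, or K inks can produce white.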

    Read the article
