Search Results

Search found 8397 results on 336 pages for 'implementation'.


  • ITT Corporation Goes Live on Oracle Sales and Marketing Cloud Service (Fusion CRM)!

    - by Richard Lefebvre
    Back in Q2 of FY12, a division of ITT invited Oracle to demo our CRM On Demand product while the group was considering Salesforce.com. Chris Porter, our Oracle Direct sales representative, learned the players and their needs and began to develop relationships. We lost that deal, but not Chris's persistence. A few months passed and Chris called on the ITT Shape Cutting Division's Director of Sales to see how things were going. Chris was told that the plan was for the division to buy more Salesforce.com; in fact, the Director informed Chris that he had just sent his team to Salesforce.com training. During the conversation, Chris mentioned that our new Oracle Sales Cloud Service could run with Outlook. This caused the ITT Sales Director to reconsider the plan to move forward with our competition. Oracle was invited back to demo the Oracle Sales and Marketing Cloud Service (Fusion CRM), and after it concluded, the Director stated, "That just blew your competition away." The deal closed on June 5th, 2012. Our Oracle Platinum Partner, Intelenex, began the implementation with ITT on July 30th. We are happy to report that on September 18th, the ITT Shape Cutting Division successfully went live on Oracle Sales and Marketing Cloud Service (Fusion CRM).

    About: ITT is a leading diversified manufacturer of highly engineered critical components and customized technology solutions for growing industrial end-markets in energy infrastructure, electronics, aerospace, and transportation. Building on its heritage of innovation, ITT partners with its customers to deliver enduring solutions to the key industries that underpin our modern way of life. Founded in 1920, ITT is headquartered in White Plains, NY, with 8,500 employees in more than 30 countries and sales in more than 125 countries. The ITT Shape Cutting Division provides plasma lasers and controls with the Burny, Kaliburn, and AMC brands.

    Oracle Fusion Products: Oracle Sales and Marketing Cloud Service (Fusion CRM), including:
    • Fusion CRM Base
    • Fusion Sales Cloud
    • Fusion Mobile and Desktop Integration
    • Automated Forecasting

    Adoption Model: SaaS

    Partner: Intelenex

    Business Drivers: The ITT Shape Cutting Division wanted to:
    • better enable its sales force with email and mobile CRM capabilities
    • simplify and automate its complex sales processes
    • centrally manage and maintain customer contact information

    Why We Won: ITT was impressed with the feature-rich capabilities of Oracle Sales and Marketing Cloud Service (Fusion CRM), including sales performance management and integration. The company also liked the product's flexibility and scalability for future growth.

    Expected Benefits:
    • Streamlined, accurate forecasting
    • Increased customer manageability
    • Improved sales performance
    • Better visibility into customer information

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system.

    What I'm looking for is a clean, elegant way to:

    • define the function of each dependent variable in the system
    • trigger a re-calculation, whenever a variable changes, of the variables that depend on it

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like the following:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach:

    • the (mis)use of INotifyPropertyChanged seems to me like a code smell
    • the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    • the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#?

    Edit: More context. The model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets and model parameters set by the business.
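    What the question is grasping at is essentially the recalculation engine of a spreadsheet: the variables form a directed acyclic graph, and a change to one node re-evaluates everything downstream of it. Below is a minimal sketch of that idea in C# - the string-keyed variables and the DependencyGraph/Propagate names are hypothetical illustration, not the CalculationManager above:

        using System;
        using System.Collections.Generic;

        // Hypothetical sketch: variables keyed by name, formulas as delegates.
        public class DependencyGraph
        {
            private readonly Dictionary<string, Func<double?>> _formulas =
                new Dictionary<string, Func<double?>>();
            private readonly Dictionary<string, List<string>> _dependents =
                new Dictionary<string, List<string>>();
            private readonly Dictionary<string, double?> _values =
                new Dictionary<string, double?>();

            // Change an input and recalculate everything downstream of it.
            public void SetInput(string name, double? value)
            {
                _values[name] = value;
                Propagate(name);
            }

            // Register a calculated variable and the inputs it depends on.
            public void Define(string name, Func<double?> formula, params string[] inputs)
            {
                _formulas[name] = formula;
                foreach (string input in inputs)
                {
                    List<string> list;
                    if (!_dependents.TryGetValue(input, out list))
                        _dependents[input] = list = new List<string>();
                    list.Add(name);
                }
            }

            public double? Get(string name)
            {
                double? value;
                return _values.TryGetValue(name, out value) ? value : null;
            }

            // Breadth-first walk of the dependents of the changed variable.
            // A production version would evaluate in topological order so that
            // diamond-shaped dependencies are computed exactly once.
            private void Propagate(string changed)
            {
                List<string> direct;
                if (!_dependents.TryGetValue(changed, out direct)) return;
                var queue = new Queue<string>(direct);
                while (queue.Count > 0)
                {
                    string name = queue.Dequeue();
                    _values[name] = _formulas[name]();
                    List<string> next;
                    if (_dependents.TryGetValue(name, out next))
                        foreach (string d in next) queue.Enqueue(d);
                }
            }
        }

    Used like this, a change to Height or Weight re-evaluates BMI automatically:

        var g = new DependencyGraph();
        g.Define("BMI",
                 () => g.Get("Weight") / (g.Get("Height") * g.Get("Height")),
                 "Height", "Weight");
        g.SetInput("Height", 1.80);
        g.SetInput("Weight", 75.0);   // BMI is now re-evaluated automatically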

    Read the article

  • What's the best way to create a static utility class in python? Is using metaclasses code smell?

    - by rsimp
    Ok, so I need to create a bunch of utility classes in Python. Normally I would just use a simple module for this, but I need to be able to inherit in order to share common code between them. The common code needs to reference the state of the module using it, so simple imports wouldn't work well. I don't like singletons, and classes that use the classmethod decorator do not have proper support for Python properties.

    One pattern I see used a lot is creating an internal Python class prefixed with an underscore and creating a single instance which is then explicitly imported or set as the module itself. This is also used by Fabric to create a common environment object (fabric.api.env). I've realized another way to accomplish this would be with metaclasses. For example:

        # util.py
        class MetaFooBase(type):
            @property
            def file_path(cls):
                raise NotImplementedError

            def inherited_method(cls):
                print cls.file_path

        # foo.py
        from util import *
        import env

        class MetaFoo(MetaFooBase):
            @property
            def file_path(cls):
                return env.base_path + "relative/path"

            def another_class_method(cls):
                pass

        class Foo(object):
            __metaclass__ = MetaFoo

        # client.py
        from foo import Foo

        file_path = Foo.file_path

    I like this approach better than the first pattern for a few reasons:

    First, instantiating Foo would be meaningless, as it has no attributes or methods, which ensures this class acts like a true single-interface utility, unlike the first pattern, which relies on the underscore convention to dissuade client code from creating more instances of the internal class.

    Second, sub-classing MetaFoo in a different module wouldn't be as awkward, because I wouldn't be importing a class with an underscore, which inherently goes against its private naming convention.

    Third, this seems to be the closest approximation to a static class that exists in Python, as all the meta code applies only to the class and not to its instances. This is shown by the common convention of using cls instead of self in the class methods. As well, the base class inherits from type instead of object, which would prevent users from trying to use it as a base for other non-static classes. Its implementation as a static class is also apparent when using it by the naming convention Foo, as opposed to foo, which denotes that a static class method is being used.

    As much as I think this is a good fit, I feel that others might feel it's not Pythonic, because it's not a sanctioned use for metaclasses, which should be avoided 99% of the time. I also find most Python devs tend to shy away from metaclasses, which might affect code reuse/maintainability. Is this code considered code smell in the Python community? I ask because I'm creating a PyPI package and would like to do everything I can to increase adoption.

    Read the article

  • A sample Memento pattern: Is it correct?

    - by TheSilverBullet
    Following this query on the memento pattern, I have tried to put my understanding to the test. The memento pattern stands for three things:

    • saving the state of the "memento" object for its successful retrieval
    • carefully saving each valid "state" of the memento
    • encapsulating the saved states from the change inducer so that each state remains unaltered

    Have I achieved these three with my design?

    Problem: This is a zero-player game where the program is initialized with a particular setup of chess pawns - the knight and queen. The program then needs to keep adding sets of pawns (knights and queens) so that each pawn is "safe" for the next one move of every other pawn. The condition is that either both pawns should be placed, or neither of them should be placed. The chessboard with the most non-conflicting knights and queens should be returned.

    Implementation: I have 4 classes for this:

    protected ChessBoard (the memento):

        private int [][] ChessBoard;
        public void ChessBoard();
        protected void SetChessBoard();
        protected void GetChessBoard(int);

    public Pawn - this is not related to the memento; it holds info about the pawns:

        public enum PawnType : int
        {
            Empty = 0,
            Queen = 1,
            Knight = 2,
        }

        // This returns a value that shows whether the pawn can be placed safely
        public bool IsSafeToAddPawn(PawnType);

    public CareTaker - this corresponds to the caretaker of the memento. It holds a two-dimensional integer array that keeps track of all states. The reason for having a 2D array is to keep track of how many states are stored and which state is currently active. An example:

        0 -2
        1 -1
        2  0    <- this is the current state (second index 0)
        3  1    <- this state has been saved, but has been undone

        private int [][] State;
        private ChessBoard [] MChessBoard;

        // This gets the chessboard at the position requested and assigns it to the originator
        public ChessBoard GetChessBoard(int);

        // This overwrites the chessboard at the given position
        public void SetChessBoard(ChessBoard, int);

    public PlayGame (this is the originator):

        private bool status;
        private ChessBoard oChessBoard;

        // This sets the state of the chessboard at the position specified
        public SetChessBoard(ChessBoard, int);

        // This gets the state of the chessboard at the position specified
        public ChessBoard GetChessBoard(int);

        // This function tries to place both pawns and returns the status of this attempt
        public bool PlacePawns(Pawn);
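    For comparison, here is a textbook-minimal sketch of the three memento roles in C# (all names hypothetical): the memento is an opaque snapshot, the originator is the only party that reads it back, and the caretaker stores snapshots without being able to inspect or alter them - which is the encapsulation goal in the third point above.

        using System.Collections.Generic;
        using System.Linq;

        // Memento: an opaque, immutable snapshot of the board.
        public class BoardMemento
        {
            private readonly int[][] _board;

            internal BoardMemento(int[][] board) { _board = Clone(board); }

            // Only the originator (below) reads the snapshot back.
            internal int[][] State { get { return Clone(_board); } }

            private static int[][] Clone(int[][] b)
            {
                return b.Select(row => (int[])row.Clone()).ToArray();
            }
        }

        // Originator: owns the live board, creates and restores snapshots.
        public class Game
        {
            private int[][] _board;

            public Game(int[][] board) { _board = board; }

            public BoardMemento Save() { return new BoardMemento(_board); }

            public void Restore(BoardMemento m) { _board = m.State; }
        }

        // Caretaker: keeps snapshots but never looks inside them.
        public class History
        {
            private readonly Stack<BoardMemento> _undo = new Stack<BoardMemento>();

            public void Push(BoardMemento m) { _undo.Push(m); }

            public BoardMemento Pop() { return _undo.Pop(); }
        }

    A design can be checked against this shape: if the caretaker can reach inside the saved ChessBoard and change it, the third requirement is weakened.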

    Read the article

  • Why would more CPU cores on virtual machine slow compile times?

    - by Sid
    [edit #2] If anyone from VMware can hit me up with a copy of VMware Fusion, I'd be more than happy to do the same as a VirtualBox vs. VMware comparison. Somehow I suspect the VMware hypervisor will be better tuned for hyperthreading (see my answer too).

    I'm seeing something curious. As I increase the number of cores on my Windows 7 x64 virtual machine, the overall compile time increases instead of decreasing. Compiling is usually very well suited to parallel processing, as in the middle part (post dependency mapping) you can simply call a compiler instance on each of your .c/.cpp/.cs/whatever files to build partial objects for the linker to take over. So I would have imagined that compiling would actually scale very well with the number of cores. But what I'm seeing is:

        8 cores: 1.89 sec
        4 cores: 1.33 sec
        2 cores: 1.24 sec
        1 core:  1.15 sec

    Is this simply a design artifact of a particular vendor's hypervisor implementation (type 2: VirtualBox in my case), or something more pervasive across more VMs that keeps hypervisor implementations simpler? With so many factors, I seem to be able to make arguments both for and against this behavior - so if someone knows more about this than me, I'd be curious to read your answer.

    Thanks, Sid

    [edit: addressing comments]

    @MartinBeckett: Cold compiles were discarded.
    @MonsterTruck: Couldn't find an open source project to compile directly. It would be great, but I can't screw up my dev env right now.
    @Mr Lister, @philosodad: I have 8 hardware threads and am using VirtualBox, so it should be a 1:1 mapping without emulation.
    @Thorbjorn: I have 6.5 GB for the VM and a smallish VS2012 project - it's quite unlikely that I'm swapping in/out, thrashing the page file.
    @All: If someone can point to an open source VS2010/VS2012 project, that might be a better community reference than my (proprietary) VS2012 project. Orchard and DNN seem to need environment tweaking to compile in VS2012. I really would like to see if someone with VMware Fusion also sees this (for VMware vs. VirtualBox compartmentalization).

    Test details:

    • Hardware: MacBook Pro Retina
    • CPU: Core i7 @ 2.3 GHz (quad core, hyperthreaded = 8 cores in Windows Task Manager)
    • Memory: 16 GB
    • Disk: 256 GB SSD
    • Host OS: Mac OS X 10.8
    • VM type: VirtualBox 4.1.18 (type 2 hypervisor)
    • Guest OS: Windows 7 x64 SP1
    • Compiler: VS2012 compiling a solution with 3 C# Azure projects
    • Compile times measured by a VS2012 plugin called 'VSCommands'
    • All tests run 5 times, first 2 runs discarded, last 3 averaged

    Read the article

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, as with every commit we have to run new procs. I have used some basic ORMs in the past, and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else).

    The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files - the classes work something like this:

        class Foo
        {
            public bool LoadFoo()
            {
                bool blnResult = false;

                if (this.FooID == 0)
                {
                    throw new Exception("FooID must be set before calling this method.");
                }

                DataSet ds = // ... call to Sproc

                if (ds.Tables[0].Rows.Count > 0)
                {
                    foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString();
                    // other properties set
                    blnResult = true;
                }

                return blnResult;
            }
        }

        // Consumer
        Foo foo = new Foo();
        foo.FooID = 1234;
        foo.LoadFoo();
        // do stuff with foo...

    There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures, and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application.

    Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code because "it works", so we cannot change the existing classes to use an ORM. But I don't know how often we add brand-new modules instead of adding to/fixing current modules, so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head the only benefits I can think of are having cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones would still be using the DataSets) and less hassle having to remember which procedure scripts have been run and which need to be run. But that's it, and I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to be concerned about.
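    For the pitch itself, a short before/after contrast is often more persuasive than an abstract argument. Here is a hedged sketch of the same lookup through an ORM in the style of Entity Framework - FooContext and its convention-based mapping are assumptions for illustration, not a drop-in for the existing schema:

        using System;
        using System.Data.Entity;   // Entity Framework 4.1+ (assumed available)

        public class Foo
        {
            public int FooID { get; set; }      // picked up as the key by convention
            public string FooName { get; set; }
        }

        // Hypothetical context; maps to the existing Foo table by convention.
        public class FooContext : DbContext
        {
            public DbSet<Foo> Foos { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                using (var db = new FooContext())
                {
                    // One typed entity back - no DataSet indexing, no magic strings.
                    Foo foo = db.Foos.Find(1234);
                    if (foo != null)
                    {
                        Console.WriteLine(foo.FooName);
                    }
                }
            }
        }

    The selling points are visible in the snippet itself: typed entities instead of DataSet indexing, no per-commit proc scripts to replay locally, and the query surface living in source control with the rest of the code.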

    Read the article

  • Changing Silverlight application themes at runtime

    We have received a lot of questions about how the application theme can be changed at run time. The most important thing to note here is that each time the application theme is changed, all the controls should be re-drawn. Without going into too much detail, we could explain the application themes as a mechanism to replace the content of the Generic.xaml file in every loaded Telerik assembly at runtime. This does not affect the controls that already have a default style applied, hence the need to create new instances. Because in Silverlight applications the RootVisual cannot be changed at run time, we need a way to reset the application UI. The following code is in App.xaml.cs:

        private void Application_Startup(object sender, StartupEventArgs e)
        {
            // Before:
            // this.RootVisual = new MainPage();

            this.RootVisual = new Grid();
            this.ResetRootVisual();
        }

        public void ResetRootVisual()
        {
            var rootVisual = Application.Current.RootVisual as Grid;
            rootVisual.Children.Clear();
            rootVisual.Children.Add(new MainPage());
        }

    In Application_Startup(), instead of creating a new MainPage UserControl instance as RootVisual, we create a new Grid panel that will contain the MainPage UserControl. In the ResetRootVisual() method we create the instance of MainPage and add it to the RootVisual panel.

    Then we have to create a method in the code-behind which will set StyleManager.ApplicationTheme and then call the ResetRootVisual() method:

        private void ChangeApplicationTheme(Theme theme)
        {
            StyleManager.ApplicationTheme = theme;
            (Application.Current as App).ResetRootVisual();
        }

    Here you can find an example which illustrates the described implementation of a Silverlight theme. For more information please refer to Telerik's online demos for Silverlight, the demos for WPF, and the help documentation for WPF and Silverlight.

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images: the background surface and the foreground display object's bitmap.

    The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender toolkit.

    This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I should mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updating, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendering. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient - which is why I am trying to do it through a blendShader, so that Flash determines when it needs rendering automatically.

    Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendering?

    The limitation is mentioned with no explanation in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html

    It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, use add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but every so often. (Edit: the tree could specify how many frames within the main game loop to wait before running its tick function again.) Every theoretical implementation I have seen of behaviour trees talks of the tree search being carried out every game update - which seems necessary, because a leaf node (e.g. a behaviour, like 'return to base') needs to be constantly checked to see if it is still running, has failed, or has completed.

    Can anyone suggest how I might start implementing a tree that isn't run every tick, or point me in the direction of good material specific to this case (I am struggling to find anything)?

    My thoughts so far (see the sketch after this list):

    • Action leaf nodes (when they start) must only push some kind of action object onto a list for an entity, rather than directly calling any code that makes the entity do something. The list of actions for the entity would be run every frame (update any that need to run, pop any that have completed from the list).
    • The return state from a given action must be fed back into the tree, so that when we run the tree iteration again (and reach the same action leaf node - so the tree has so far determined that we ought to still be trying this action), we know whether the action has completed, is still running, etc.
    • If my actual action code is running from an action list on an entity, then I possibly need to cancel previously running actions in the list - I am thinking that I can just delete the entire stack of queued-up actions. I've seen the idea of ActionLists which block lower-priority actions when a higher-priority one is added, but this seems like very similar logic to behaviour trees, and I don't want to be duplicating behaviour.

    This leaves me with some questions:

    1) How would I feed the action return state back into the tree? It's obvious I need to store some information relating to 'currently executing actions' on the entity, and check that in the tree tick, but I can't imagine how.

    2) Does having a separate behaviour tree (for deciding behaviour) and action list (for carrying out actual queued-up actions) sound like a reasonable approach?

    3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting AI search time when you have a lot of AI entities to process.

    (Edit) I am also thinking about storing a single instance of a given behaviour tree in memory and providing it by reference to any entity that uses it. So any information about which action was last selected for execution on an entity must be stored in a data context relative to the entity (which the tree can check). (I am probably answering my own questions as I go!)

    I hope I have expressed my questions adequately! Thanks in advance for any help :)
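    One way to make the "tree decides, action list executes" split concrete is the following minimal C# sketch, in which the action list runs every frame but the tree itself is only searched every N frames. All names here are hypothetical illustration, not an established library:

        using System.Collections.Generic;

        public enum Status { Running, Succeeded, Failed }

        // Long-running work lives here and is advanced every frame.
        public interface IAction
        {
            Status Update(float dt);
        }

        public class Entity
        {
            public readonly List<IAction> Actions = new List<IAction>();

            // Feedback channel: the tree reads this on its next tick to learn
            // how its previous decision turned out (question 1 above).
            public Status LastActionStatus = Status.Succeeded;
        }

        public class BehaviourTreeRunner
        {
            private readonly int _tickInterval;   // frames between tree searches
            private int _frame;

            public BehaviourTreeRunner(int tickInterval)
            {
                _tickInterval = tickInterval;
            }

            public void Update(Entity e, float dt)
            {
                // The action list runs every frame regardless of the tree.
                for (int i = e.Actions.Count - 1; i >= 0; i--)
                {
                    Status s = e.Actions[i].Update(dt);
                    if (s != Status.Running)
                    {
                        e.LastActionStatus = s;
                        e.Actions.RemoveAt(i);   // pop completed/failed actions
                    }
                }

                // The tree itself is only searched every _tickInterval frames.
                _frame++;
                if (_frame % _tickInterval == 0)
                {
                    TickTree(e);
                }
            }

            private void TickTree(Entity e)
            {
                // Walk the tree here. Leaf nodes consult e.LastActionStatus for
                // the decision made last tick, and when the selected behaviour
                // changes they clear e.Actions (cancelling the stale queue) and
                // push the new IAction objects onto it.
            }
        }

    The entity's LastActionStatus field is the feedback channel asked about in question 1: leaves read it on the next tree tick instead of polling the running action directly, which also supports sharing one tree instance across entities, with per-entity state kept in the entity (or a per-entity context object).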

    Read the article

  • What kind of position matches my skills, experience and interests? [closed]

    - by Ryan
    I work in a large firm and my current job covers a variety of different duties. Due to several factors (hours, pay cut, limited career growth) I am seriously considering finding a new job. I have worked for the company nearly 4 years, and almost 2 years ago I transitioned into more of a business analyst role (previously I was working in a client-facing role for our audit group).

    In this role I have overseen all aspects of the development of a large-scale re-platforming of our firm's key tool for analyzing investment portfolios. I gathered requirements, wrote specs, designed the UI and functionality, worked closely with developers (onshore and offshore) to see to it that the implementation was correct, managed schedules, and was the lead tester. This is a large-scale system used by thousands of people around the world. I've also written Excel macros, reports in SQL, given trainings, written technical manuals, interfaced with senior managers and partners, etc.

    I've been on a couple of interviews sporadically, most of which were aimed at higher management consulting type positions, dealing with strategy, overall process management, project management, etc. What really interests me is the technical stuff and overseeing a project from beginning to end (although I would rather not have to do so many of the tasks on my own). I genuinely like a lot of what I do, but the company culture and attitude towards overworking people, combined with my recent pay cut (my overtime was cut due to a promotion to a higher level), has led me to want to seek work elsewhere.

    The problem is - what type of work could I realistically do? I feel like traditional business analysis is too much business and not enough tech stuff, and I've really taken a shine lately to beefing up my programming abilities and creating small programs to automate things around work. I also feel that my actual years of experience as a business analyst (figure 1.5 years realistically) put me at a junior level doing a lot of grunt requirements gathering, when the work that I have been doing with my current company is more in line with what a Program Manager does (depending on your definition, I guess).

    So in reality, when I'm job hunting I get a bit perplexed, because I feel like the traditional BA stuff wouldn't really suit me, and even if it did, it's usually something along the lines of 5-10 years of experience for the type of work that is similar to what I've done (and I've also found most BA jobs to be contract-only, which at the moment I'm not too keen on). Program Manager is something that interests me, but again I feel like the experience is lacking because that's a much more senior position.

    Am I in some kind of career no-man's land? Any idea what would best suit me given my experience and abilities, as well as my interests? I plan to keep learning programming on the side, but I don't expect to get a job as a straight programmer given my relative inexperience with programming.

    Read the article

  • WebCenter Customer Spotlight: Azul Brazilian Airlines

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) is the third-largest airline in Brazil, serving 42 destinations with a fleet of 49 aircraft and employing 4,500 crew members. The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets, and for the company's marketing team to more effectively conduct its campaigns. To this end, Azul implemented Oracle WebCenter Sites, succeeding in gathering all of the site's key information onto a single platform. Azul can now complete the Web site content updating process - which used to take approximately 48 hours - in less than five minutes.

    Company Overview
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) has established itself as the third-largest airline in Brazil, based on a business model that combines low prices with a high level of service. Azul serves 42 destinations with a fleet of 49 aircraft. It operates 350 daily flights with a team of 4,500 crew members. Last year, the company transported 15 million passengers, achieving a 10% share of the Brazilian market, according to the Agência Nacional de Aviação Civil (ANAC, or the National Civil Aviation Agency).

    Business Challenges
    The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets, and for the company's marketing team to more effectively conduct its campaigns:
    • Provide customers with an innovative Web site with a simple process for purchasing flight tickets
    • Bring dynamism to the Web site's content updating process to provide autonomy to the airline's strategic departments, such as marketing and product development
    • Facilitate integration among the site's different application providers, such as ticket availability and payment processing, on which ticket sales depend

    Solution Deployed
    Azul worked with the Oracle partner TQI to implement Oracle WebCenter Sites, succeeding in gathering all of the site's key information onto a single platform. Previously, at least three servers and corporate information environments had directed data to the portal. The single Oracle-based platform now facilitates site updates, which are daily and constant.

    Business Results
    • Gained development freedom in all processes - from implementation to content editing
    • Gathered all of the Web site's key information onto a single platform, facilitating its daily and constant updating, whereas the information was previously spread among at least three IT environments and had to go through a complex process to be made available online to customers
    • Reduced the time needed to update banners and other Web site content from an average of 48 hours to less than five minutes
    • Simplified the flight ticket sales process thanks to tool flexibility that enabled the company to improve Web site usability

    "Oracle WebCenter Sites provides an easy-to-use platform that enables our marketing department to spend less time updating content and more time on innovative activities. Previously, it would take 48 hours to update content on our Web site; now it takes less than five minutes. We have shown the market that we are innovators, enabling customer convenience through an improved flight ticket purchase process." - Kleber Linhares, Information Technology and E-Commerce Director, Azul Linhas Aéreas Brasileiras

    Additional Information
    • Azul Brazilian Airlines Case Study
    • Oracle WebCenter Sites
    • Oracle WebCenter Sites Satellite Server

    Read the article

  • SOA Summit - Oracle Session Replay

    - by Bruce Tierney
    If you think you missed the most recent Integration Developer News (IDN) "SOA Summit" 2013... good news, you didn't. At least not the replay of the Oracle session titled "Three Solutions for Simplifying Cloud/On-Premises Integration".

    As you will see in the replay below, this session introduces three common reasons for integration complexity:

    • Disparate toolkits
    • Lack of API management
    • Rigid, brittle infrastructure

    and then the three solutions to these challenges:

    • Unify cloud and on-premises integration
    • Enable multi-channel development with API management
    • Plan for the unexpected - future readiness

    The last solution, on future readiness, describes how you can transition from being reactive to new trends, such as the Internet of Things (IoT), by modifying your integration strategy to enable business agility, and how to recognize trends through Fast Data event processing ahead of your competition.

    Oracle SOA Suite customer SFpark's (San Francisco Metropolitan Transit Authority) implementation with API Management is covered, as shown in the screenshot to the right. This case study covers the core areas of API Management for partners to build their own applications by leveraging parking availability and real-time pricing, as well as mobile enablement of the data integrated by SOA Suite underneath. Download the free SFpark app from the Apple and Android app stores to check it out.

    When looking into the future, the discussion starts with a historical look to better prepare for what comes next. As shown in the image below, one of the next frontiers after mobile and cloud integration is a deeper level of direct "enterprise to customer" interaction. Much of this relates to the Internet of Things. Examples of IoT from the perspective of SOA and integration are also covered in the session. For example, early adopter Turkcell and its tracking of mobile phone users as they move from point A to B to C is shown in the image to the right. As you look into more "smart services" such as location-based services, how "future ready" is your application infrastructure?

    Check out the replay by clicking the video image below to learn about these three challenges and solutions, including how to "future ready" your application infrastructure.

    Read the article

  • How to interview a natural scientist for a dev position?

    - by Silas
    I have already done some interviews for my company, mostly of computer scientists for dev positions, but also some testers and project managers. Now I have to fill a vacancy in our research group within the R&D department. (Side note: "research" means that we try to solve problems in our professional domain/market niche using software, in research projects together with universities, other companies, research centres, and end-user organisations. It's not computer science research; we're not going to solve the P=NP problem.)

    Now we have invited a guy holding an MSc in chemistry (with a lot of physics in his CV, too), who never had any computer science lessons. I already talked with him for about half an hour at a local university's career days, and there's no doubt the guy is smart. Also, his marks are excellent and he graduated with distinction. For his BSc he needed to teach himself programming in Mathematica, and he told me believably that he liked programming a lot. He also solved some physical chemistry problem that I probably don't understand using his own software, implemented in Mathematica, for his MSc thesis. It includes a GUI and a notable size of 8,000 LoC.

    He seems to be very attracted by what we're doing in our research group, and to be honest it's quite difficult for an SME like us to get good people. I am also very interested in hiring him, since he could assist me in writing project proposals, reports, doing presentations, and so on. He would probably fit our team, too.

    The only question left is: how can I check whether he will pick up the programming skills he needs to do software implementation in our projects, since this will be a significant part of the job? Of course I will ask him what it is that fascinates him about programming. I'll also ask how he proceeded to write his natural science software and how he structured it. I'll ask about how he managed to obtain the skills and information about software development he needed. But is there something more I could ask? Something more concrete, perhaps? Should I ask him to explain his Mathematica solution?

    To be clear: I'm not looking for knowledge of a particular language or technology stack. We're a .NET shop in product development, but I want to have a free choice for our research projects. So I'm interested in the meta-competence of being able to learn whatever is actually needed. I hope this question is answerable and not open-ended, since I would really like to know if there is a standard way to check for the ability to pick up further programming skills on the job. If something is not clear to you, please leave me some comments and let me improve my question.

    Read the article

  • Why do "Joke" programming languages exist? [closed]

    - by ThePlan
    First of all, please be aware this post contains some abusive language, but I hope it will not bother anyone. I apologize for the bad language, but that's what the name is.

    As I've been doing documentation on existing programming languages, attempting to make a complete list of them, I stumbled across terrible programming languages, which were clearly not made for actual use and implementation due to their insane difficulty. Languages such as Brainfu*k, LOLCODE, or Whitespace are joke languages because they have no real use.

    For example, here is a "Hello world" program written in Brainfu*k, taken from Wikipedia. The following program prints "Hello World!" and a newline to the screen:

        +++++ +++++             initialize counter (cell #0) to 10
        [                       use loop to set the next four cells to 70/100/30/10
            > +++++ ++              add  7 to cell #1
            > +++++ +++++           add 10 to cell #2
            > +++                   add  3 to cell #3
            > +                     add  1 to cell #4
            <<<< -                  decrement counter (cell #0)
        ]
        > ++ .                  print 'H'
        > + .                   print 'e'
        +++++ ++ .              print 'l'
        .                       print 'l'
        +++ .                   print 'o'
        > ++ .                  print ' '
        << +++++ +++++ +++++ .  print 'W'
        > .                     print 'o'
        +++ .                   print 'r'
        ----- - .               print 'l'
        ----- --- .             print 'd'
        > + .                   print '!'
        > .                     print '\n'

    Or another example, taken from the LOLCODE language:

        HAI
        CAN HAS STDIO?
        PLZ OPEN FILE "LOLCATS.TXT"?
            AWSUM THX
                VISIBLE FILE
            O NOES
                INVISIBLE "ERROR!"
        KTHXBYE

    These languages are very difficult to learn/read/work with. My question is: why do they exist? What is the purpose of them? Also, is there an official "name" for these types of languages?

    Read the article

  • Dadaism and Agility

    - by alexhildyard
    We all have our little bugbears, and something that has given me particular pause over the years is the place of agility in the software development life cycle. While I have seen it used successfully on both small and enterprise-level projects, I have also seen many instances in which long-standing technical debt has originated under its watch. Ironically, the problem in such cases seems to me not that the practitioners in question have failed to follow due process (Test, Develop, Refactor - a common "what" of Agile), but basically that they have missed the point (the "why" of Agile). It's probably a sign of my age that I'm much more interested in the "why" than the "what", since I feel that the latter falls out naturally from the former, but that this is not a reciprocal relationship.

    Consider Dadaism, precursor to the Surrealist movement in the early part of the twentieth century. Anyone could stand up and proclaim he or she was Dada; anyone could write cut-ups, or pull words out of a hat, or produce gibberish on duelling typewriters under the inspiration of Dada. And all that took place at such performances was a manifestation of Dada, and all the artefacts that resulted were also Dada. Hence one commentator's enigmatic observation that 'when one speaks of Dada, then one speaks of Dada. But when one does not speak of Dada, one still speaks of Dada.'

    What is Dada? Literally, Dada is what you say it is. But that's also missing the point. Dada is about erecting a framework within which utterances like this are valid; Dada is about preparing a stage for itself. Dadaism exemplifies the purity of a process-driven ideology - in fact an ideology that is almost pure process, with nothing extraneous in the way of formal method - and while perhaps Agile delivery should not embrace the liberties of Dadaism too literally, some of the similarities nevertheless are salutary.

    Agile - like Dada - is an attitude; it is about *being* agile; it is not really about doing a specific set of things that are somehow *part* of being Agile. It is an abstract base rather than an implementation, a characteristic rather than a factor. It is the pragmatic response to the need for change in the face of partial information, ephemeral requirements, and a healthy dose of systematic uncertainty. In practice this will usually mean repeatedly making the smallest useful changes to a system, recognising that systems evolve and that all change carries risk. It will usually mean that instead of investing effort in future-proofing a system against a known technology roadmap, one instead invests one's energies in the daily repetition and incremental development of the processes best designed to accommodate change quickly. But though it may mean these things in practice, it isn't actually *about* either of these things; it's about the mindset, the attitude that conceives of such responses as sensible solutions given the larger and ultimately unclassifiable thing that constitutes the development lifecycle of a specific project.

    Read the article

  • Current State EA: Focus on the Integration!!!

    - by Eric A. Stephens
    A recent project has me at the front end of a large implementation effort covering multiple software components. In addition to the challenge of integrating 15-20 separate and new software components, there is the challenge of integrating the portfolio into an existing environment. Like other clients I've worked with and other environments I've worked in over many years, this is typical: the applications are undocumented and under-patched, leading to a mystery for any architect leading change.

    We can boil down most architecture development methodologies (ADMs) into first understanding the current/baseline state and then envisioning one or more future states. Many pundits emphasize the need to focus on the future/target states. I agree, since enterprise architecture (EA) is about where you are going and not so much where you have been. But to be effective in the future, I contend some focused time needs to be spent on the current state. And specifically on the integration.

    Integration is always the difficult part of a project (I might put it more coarsely at a cocktail party). While I don't have a case study, my anecdotal experience suggests poorly integrated application portfolios tend to cost more to operate and create entropy when trying to respond to new changes and opportunities. In the aforementioned project, I was able to get one of our EAs assigned to focus on just integration almost immediately. While we're still early in the process, this EA is uncovering all sorts of information that will greatly assist our future-state planning for this solution. This information is driving early decision-making that we anticipate will accelerate our efforts moving forward.

    Read the article

  • Eloqua Sales Awareness Training for Partners - London, November 28-29

    - by Richard Lefebvre
    We are pleased to invite you to the Oracle Eloqua Sales Awareness Training for Partners - London, November 28-29, free of charge. COME LEARN WHAT ALL THE BUZZ IS ABOUT!

    WHEN SALES & MARKETING BOND, REVENUE GROWS. It's not one thing that makes a company successful with marketing automation - it's the combination of the best implementation, flexible long-term support, and access to a community of marketing innovation that ensures you have the assistance you need every step of the way. Our customers choose Oracle|Eloqua because they know that when sales and marketing work together, leads are called upon, quotas are crushed, and revenues climb. Learn how Oracle|Eloqua can help you put marketing to work for you!

    COVERED IN THIS TRAINING:
    • Eloqua and the Customer Experience Strategy
    • How Modern Marketing Works
    • Introduction to Eloqua - Whiteboard POV
    • Go To Market Playbook
    • Competitive Landscape
    • Integration Options
    • Service Offerings

    With case studies, workshops, and product demos to help you on your way to a new world of marketing expertise.

    LOGISTICS: If you are ready to join us on this journey, now's the time to let us know! Tell us a little about your experience - complete this form to help us know you better (please ignore if your firm has already completed it). Complete this form to RSVP by Thursday, November 21, 2013. Find out more about Oracle|Eloqua, join the Oracle Eloqua Marketing Cloud Service Knowledge Zone, and keep up on all the news!

    QUESTIONS? Contact [email protected]

    Please note that similar events will take place in other cities (Brussels, Istanbul, and more to come) later in 2014.

    Read the article

  • NHibernate Session Load vs Get when using Table per Hierarchy. Always use ISession.Get<T> for TPH to work.

    - by Rohit Gupta
    Originally posted on: http://geekswithblogs.net/rgupta/archive/2014/06/01/nhibernate-session-load-vs-get-when-using-table-per-hierarchy.aspxNHibernate ISession has two methods on it : Load and Get. Load allows the entity to be loaded lazily, meaning the actual call to the database is made only when properties on the entity being loaded is first accessed. Additionally, if the entity has already been loaded into NHibernate Cache, then the entity is loaded directly from the cache instead of querying the underlying database. ISession.Get<T> instead makes the call to the database, every time it is invoked. With this background, it is obvious that we would prefer ISession.Load<T> over ISession.Get<T> most of the times for performance reasons to avoid making the expensive call to the database. let us consider the impact of using ISession.Load<T> when we are using the Table per Hierarchy implementation of NHibernate. Thus we have base class/ table Animal, there is a derived class named Snake with the Discriminator column being Type which in this case is “Snake”. If we load This Snake entity using the Repository for Animal, we would have a entity loaded, as shown below: public T GetByKey(object key, bool lazy = false) { if (lazy) return CurrentSession.Load<T>(key); return CurrentSession.Get<T>(key); } var tRepo = new NHibernateReadWriteRepository<TPHAnimal>(); var animal = tRepo.GetByKey(new Guid("602DAB56-D1BD-4ECC-B4BB-1C14BF87F47B"), true); var snake = animal as Snake; snake is null As you can see that the animal entity retrieved from the database cannot be cast to Snake even though the entity is actually a snake. The reason being ISession.Load prevents the entity to be cast to Snake and will throw the following exception: System.InvalidCastException :  Message=Unable to cast object of type 'TPHAnimalProxy' to type 'NHibernateChecker.Model.Snake'. Thus we can see that if we lazy load the entity using ISession.Load<TPHAnimal> then we get a TPHAnimalProxy and not a snake. =============================================================== However if do not lazy load the same cast works perfectly fine, this is since we are loading the entity from database and the entity being loaded is not a proxy. Thus the following code does not throw any exceptions, infact the snake variable is not null: var tRepo = new NHibernateReadWriteRepository<TPHAnimal>(); var animal = tRepo.GetByKey(new Guid("602DAB56-D1BD-4ECC-B4BB-1C14BF87F47B"), false); var snake = animal as Snake; if (snake == null) { var snake22 = (Snake) animal; }

    Read the article

  • When to use typedef?

    - by futlib
    I'm a bit confused about if and when I should use typedef in C++. I feel it's a balancing act between readability and clarity. Here's a code sample without any typedefs:

        int sum(std::vector<int>::const_iterator first,
                std::vector<int>::const_iterator last)
        {
            static std::map<std::tuple<std::vector<int>::const_iterator,
                                       std::vector<int>::const_iterator>,
                            int> lookup_table;

            std::map<std::tuple<std::vector<int>::const_iterator,
                                std::vector<int>::const_iterator>,
                     int>::iterator lookup_it = lookup_table.find(lookup_key);

            if (lookup_it != lookup_table.end())
                return lookup_it->second;
            ...
        }

    Pretty ugly IMO. So I'll add some typedefs within the function to make it look nicer:

        int sum(std::vector<int>::const_iterator first,
                std::vector<int>::const_iterator last)
        {
            typedef std::tuple<std::vector<int>::const_iterator,
                               std::vector<int>::const_iterator> Lookup_key;
            typedef std::map<Lookup_key, int> Lookup_table;

            static Lookup_table lookup_table;
            Lookup_table::iterator lookup_it = lookup_table.find(lookup_key);

            if (lookup_it != lookup_table.end())
                return lookup_it->second;
            ...
        }

    The code is still a bit clumsy, but I get rid of most nightmare material. But there are still the int vector iterators; this variant gets rid of those:

        typedef std::vector<int>::const_iterator Input_iterator;

        int sum(Input_iterator first, Input_iterator last)
        {
            typedef std::tuple<Input_iterator, Input_iterator> Lookup_key;
            typedef std::map<Lookup_key, int> Lookup_table;

            static Lookup_table lookup_table;
            Lookup_table::iterator lookup_it = lookup_table.find(lookup_key);

            if (lookup_it != lookup_table.end())
                return lookup_it->second;
            ...
        }

    This looks clean, but is it still readable? When should I use a typedef? As soon as I have a nightmare type? As soon as it occurs more than once? Where should I put them? Should I use them in function signatures or keep them to the implementation?

    Read the article

  • Maximizing the Value of Software

    - by David Dorf
    A few years ago we decided to increase our investment in documenting retail processes and architectures. There were several goals, but the main two were to help retailers maximize the value they derive from our software and to help system integrators implement our software faster. The sale is only part of our success metric - it's actually more important that the customer realize the benefits of the software. That's when we actually celebrate.

    This week many of our customers are gathered in Chicago to discuss their successes during our annual Crosstalk conference. That provides the perfect forum to announce the release of the Oracle Retail Reference Library. The RRL is available for free to Oracle Retail customers and partners. It contains thousands of hours of work and represents years of experience in the retail industry. The Retail Reference Library is composed of three offerings:

    Retail Reference Model
    We've been sharing the RRM for several years now, with lots of accolades. The RRM is a set of business process diagrams at varying levels of granularity. This release marks the debut of Visio documents, which should make it easier for retailers to adopt and edit the diagrams. The processes represent an approximation of the Oracle Retail software, but at higher levels they are pretty generic and therefore usable with other software as well. Using these processes, the business and IT are better able to communicate the expectations of the software. They can be used to guide customization when necessary, and help identify areas for optimization in the organization.

    Retail Reference Architecture
    When embarking on a software implementation project, it can be daunting to start from a blank sheet of paper. So we offer the RRA, a comprehensive set of documents that describe the retail enterprise in terms of logical architecture, physical deployments, and systems integration. These documents and diagrams describe how all the systems typically found in a retailer's enterprise work together. They serve as a way to jump-start implementations using best practices we've captured over the years.

    Retail Semantic Glossary
    Have you ever seen two people argue over something because they're using misaligned terminology? It's a huge waste, and it happens all the time. The Retail Semantic Glossary is a simple application that allows retailers to define terms and metrics in a centralized database. This initial version comes with limited content, with the goal of adding more over subsequent releases. This is the single source for defining key performance indicators, metrics, algorithms, and terms so that the retail organization speaks a consistent language.

    These three offerings are downloaded from MyOracleSupport separately and linked together using the start page above. Everything is navigated using a Web browser. See the Oracle Retail Documentation blog for more details.

    Read the article

  • Oracle PeopleSoft PeopleTools 8.53 Release Value Proposition (RVP) published

    - by Greg Kelly
    The Oracle PeopleSoft PeopleTools 8.53 Release Value Proposition (RVP) can be found at: https://supporthtml.oracle.com/epmos/faces/ui/km/DocumentDisplay.jspx?id=1473194.1

    The PeopleSoft PeopleTools 8.53 release continues Oracle's commitment to protect and extend the value of your PeopleSoft implementation, provide additional technology options and enhancements that reduce ongoing operating costs, and provide the applications user a dramatically improved experience. Across the PeopleSoft product development organization we have defined three design principles: Simplicity, Productivity, and Total Cost of Ownership. These development principles have directly influenced the PeopleTools product direction during the past few releases. The scope of the PeopleTools 8.53 release again builds additional functionality into the product as a result of direct customer input, industry analysis, and internal feature design. New features, bug fixes, and new certifications found in PeopleTools 8.53 combine to offer customers an improved application user experience, page interaction, and cost-effectiveness.

    Key PeopleTools 8.53 features include:
    • PeopleSoft Styles and User Interaction Model
    • PeopleSoft Data Migration Workbench
    • PeopleSoft Update Manager
    • Secure by Default Initiative

    Be sure to check out the PeopleSoft Update Manager. Many other things are also happening in this time frame:
    • See the posting on the PeopleSoft Interaction Hub: https://blogs.oracle.com/peopletools/entry/introducing_the_peoplesoft_interaction_hub
    • The application 9.2 RVPs will also be published over the next few months
    • If you haven't seen it, check out John Webb's posting on the PeopleSoft Information Portal: https://blogs.oracle.com/peoplesoft/entry/peoplesoft_information_find_it_quickly

    Read the article

  • Generating tileable terrain using Perlin Noise [duplicate]

    - by terrorcell
    This question already has an answer here: How do you generate tileable Perlin noise? (9 answers)

    I'm having trouble figuring out the solution to this particular algorithm. I'm using the Perlin noise implementation from: https://code.google.com/p/mikeralib/source/browse/trunk/Mikera/src/main/java/mikera/math/PerlinNoise.java

    Here's what I have so far:

        for (Chunk chunk : chunks) {
            PerlinNoise noise = new PerlinNoise();
            for (int y = 0; y < CHUNK_SIZE_HEIGHT; ++y) {
                for (int x = 0; x < CHUNK_SIZE_WIDTH; ++x) {
                    int index = get1DIndex(y, CHUNK_SIZE_WIDTH, x);
                    float val = 0;
                    for (int i = 2; i <= 32; i *= i) {
                        double n = noise.tileableNoise2(i * x / (float)CHUNK_SIZE_WIDTH,
                                                        i * y / (float)CHUNK_SIZE_HEIGHT,
                                                        CHUNK_SIZE_WIDTH, CHUNK_SIZE_HEIGHT);
                        val += n / i;
                    }
                    // chunk tile at [index] gets set to the colour 'val'
                }
            }
        }

    Which produces something like this: each chunk is made up of CHUNK_SIZE number of tiles, and each tile has a TILE_SIZE_WIDTH/HEIGHT. I think it has something to do with the inner-most for loop and the x/y coordinates given to the noise function, but I could be wrong.

    Solved:

        PerlinNoise noise = new PerlinNoise();
        for (Chunk chunk : chunks) {
            for (int y = 0; y < CHUNK_SIZE_HEIGHT; ++y) {
                for (int x = 0; x < CHUNK_SIZE_WIDTH; ++x) {
                    int index = get1DIndex(y, CHUNK_SIZE_WIDTH, x);
                    float val = 0;
                    float xx = x * TILE_SIZE_WIDTH + chunk.x;
                    float yy = y * TILE_SIZE_HEIGHT + chunk.h;
                    int w = CHUNK_SIZE_WIDTH * TILE_SIZE_WIDTH;
                    int h = CHUNK_SIZE_HEIGHT * TILE_SIZE_HEIGHT;
                    for (int i = 2; i <= 32; i *= i) {
                        double n = noise.tileableNoise2(i * xx / (float)w,
                                                        i * yy / (float)h,
                                                        w, h);
                        val += n / i;
                    }
                    // chunk tile at [index] gets set to the colour 'val'
                }
            }
        }

    Read the article

  • Arçelik A.S. Uses Advanced Analytics to Improve Product Development

    - by Sylvie MacKenzie, PMP
    "Oracle’s Primavera P6 Enterprise Project Portfolio Management’s advanced analytics gives us better insight into the product development process by helping us to identify potential roadblocks.” – Iffet Iyigun Meydanli, Innovation and System Development Manager, R&D Center, Arçelik A.S. Founded in 1955, Arçelik A.S. is now the leading household appliance manufacturer in Turkey, and the third-largest household appliance company in Europe. It operates 14 production facilities in five countries (Turkey, Romania, Russia, China, and South Africa), with international sales and marketing offices in 20 countries. Additionally, the company manages 10 brands (Arçelik, Beko, Grundig, Blomberg, Elektrabregenz, Arctic, Leisure, Flavel, Defy, and Altus). The company has a household presence in more than 100 countries, including China and the United States. Arçelik’s Beko brand is among the top-10 household appliance brands in world, as a market leader for refrigerators, freezers, and washing machines in the United Kingdom. Arçelik implemented Oracle’s Primavera P6 Enterprise Project Portfolio Management for improved management of its design and manufacturing projects. With the solution, Arelik has improved its research and development (R&D) with the ability to evaluate technology risks when planning its projects. Also, it is now more easy to make plans for several locations, monitor all resources, and plan for future projects.  Challenges Improve monitoring of R&D resources?including human resources and critical laboratory equipment?to optimize management of the company’s R&D project portfolio Establish a transparent project platform to enable better product and process planning, gain insight into product performance, and facilitate advanced analytics that support R&D and overall business decisions Identify potential roadblocks for better risk management Solutions Worked with Oracle Partner PRM to implement Oracle’s Primavera P6 Enterprise Project Portfolio Management to manage the entire household-appliance, R&D project portfolio lifecycle, enabling managers and project leaders to better track and monitor resources and deliverables in real time Improved risk analysis and evaluation abilities for R&D projects Supported long-term planning needs Used advanced reporting features to capture data needed for budgeting and other project details, including employee performance evaluations Improved monitoring abilities and insight into the overall performance of products postproduction Enabled flexible, fast, and customized reporting with the P6 dashboard on a centralized platform to meet custom reporting needs for project leaders and support on-time and on-budget deliverables Integrated with other corporate departments, such as accounts payable, to upload project invoice data into the Primavera solution and the company’s e-mail system, so that project leaders will be alerted about milestones and other project related information Partner“Oracle Partner PRM provided us with a quick, reliable, and solution-focused approach to its support,” said Iffet Iyigun Meydanli, innovation and system development manager, R&D Center, Arçelik A.S. “The company’s service covered the entire spectrum of our needs, including implementation, training, configuration, problem solving, and integration.”

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties.
    CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common requirement for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which creates the need to orchestrate transaction processing across multiple service calls. CloudTran can handle these “multi-client” transactions at speed with no loss of ACID properties.
    Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services, but it introduces new architectural considerations for maintaining scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG: highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly.
    For those interested in evaluating the CloudTran product and IMDGs, see http://www.CloudTran.com/downloadAPI.php for more information, or send your questions to [email protected].
    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
    Technorati Tags: Coherence, cloudtran, cache, WebLogic Community, Oracle, OPN, Jürgen Kress
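    To make the LLAPI pattern described above concrete, here is a minimal, hedged sketch. The actual CloudTran annotation and configuration are not documented in this excerpt, so @GridTransaction below is a hypothetical stand-in for whatever annotation the CloudTran Spring/AspectJ aspect intercepts; only the Coherence calls (CacheFactory.getCache, NamedCache.put/remove) are standard Coherence API, and the cache, class, and method names are illustrative:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Placeholder annotation: stands in for the undocumented LLAPI annotation
    // that the aspect would intercept to begin and commit a transaction.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface GridTransaction {}

    public class OrderService {

        // Plain Coherence put/remove logic. Per the article, CloudTran wraps
        // a method like this so the two cache updates commit atomically and
        // are replicated asynchronously to the back-end datastore.
        @GridTransaction
        public void shipOrder(String orderId, Object order) {
            NamedCache pending = CacheFactory.getCache("pending-orders");
            NamedCache shipped = CacheFactory.getCache("shipped-orders");

            pending.remove(orderId);
            shipped.put(orderId, order);
        }
    }

    The design point, as the excerpt describes it, is that application code keeps using ordinary cache operations while the surrounding aspect starts the transaction, can join it across multiple service calls on the node holding the data, and commits or rolls back on method exit.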

    Read the article

  • Cocos2d: Adding a CCSequence to a CCArray

    - by Axort
    I have a problem with an action performed by a sprite. I have one CCSequence in a CCArray, and a scheduled method (called every 5 seconds) that makes the sprite run the action. The action is performed correctly only the first time (the first 5 seconds); after that, the action does whatever it wants lol. Here is the code:
    In .h:

    @interface PowerUpLayer : CCLayer
    {
        PowerUp *powerUp;
        CCArray *trajectories;
    }
    @property (nonatomic, retain) CCArray *trajectories;

    In .mm:

    @implementation PowerUpLayer
    @synthesize trajectories;

    -(id)init
    {
        if((self = [super init]))
        {
            [self createTrajectories];
            self.isTouchEnabled = YES;
            [self schedule:@selector(spawn:) interval:5];
        }
        return self;
    }

    -(void)createTrajectories
    {
        self.trajectories = [CCArray arrayWithCapacity:1];

        // Wave trajectory
        ccBezierConfig firstWave, secondWave;
        firstWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width + 30, [[CCDirector sharedDirector] winSize].height / 2); //powerUp.sprite.position.x, powerUp.sprite.position.y);
        firstWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width - ([[CCDirector sharedDirector] winSize].width / 4), 0);
        firstWave.endPosition = CGPointMake([[CCDirector sharedDirector] winSize].width / 2, [[CCDirector sharedDirector] winSize].height / 2);

        secondWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width / 2, [[CCDirector sharedDirector] winSize].height / 2);
        secondWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width / 4, [[CCDirector sharedDirector] winSize].height);
        secondWave.endPosition = CGPointMake(-30, [[CCDirector sharedDirector] winSize].height / 2);

        id bezierWave1 = [CCBezierTo actionWithDuration:1 bezier:firstWave];
        id bezierWave2 = [CCBezierTo actionWithDuration:1 bezier:secondWave];
        id waveTrajectory = [CCSequence actions:bezierWave1, bezierWave2,
                             [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil];
        [self.trajectories addObject:waveTrajectory];
        //[powerUp.sprite runAction:bezierForward];
        // [CCMoveBy actionWithDuration:3 position:CGPointMake(-[[CCDirector sharedDirector] winSize].width - powerUp.sprite.contentSize.width, 0)]
        //[powerUp.sprite runAction:[CCSequence actions:bezierWave1, bezierWave2, [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil]];
    }

    -(void)setInvisible:(id)sender
    {
        if(powerUp != nil)
        {
            [self removeChild:sender cleanup:YES];
            powerUp = nil;
        }
    }

    This is the scheduled method:

    -(void)spawn:(ccTime)dt
    {
        if(powerUp == nil)
        {
            powerUp = [[PowerUp alloc] initWithType:0];
            powerUp.sprite.position = CGPointMake([[CCDirector sharedDirector] winSize].width + powerUp.sprite.contentSize.width, [[CCDirector sharedDirector] winSize].height / 2);
            [self addChild:powerUp.sprite z:-1];
            [powerUp.sprite runAction:((CCSequence *)[self.trajectories objectAtIndex:0])];
        }
    }

    I don't know what is happening; I never modify the content of the CCSequence after the first time. Thanks!

    Read the article

< Previous Page | 223 224 225 226 227 228 229 230 231 232 233 234  | Next Page >