Search Results

Search found 7166 results on 287 pages for 'platform'.

Page 7/287 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • How cross-platform is the .NET Framework, really?

    - by Ivan
    What normally needs to be done to run a WinForms application on a Mac or Linux machine? a. Just copy and run (assuming the Framework is installed). b. Rebuild. c. Cosmetic source code modifications. d. Heavy source code modifications and forms redesign. Assume the application is developed as 100% managed C# 3 code using Visual C# Express or Visual Studio 2008 targeting .NET Framework 3.5, without any third-party components/libraries, without wrapping unmanaged code or any low-level hacks (only the standard, Microsoft-documented .NET Framework C# API is used). Or the same conditions, but with the C# 4 language, .NET Framework 4 and Visual Studio 2010.

    Read the article

  • Storing a file in the user's directory in cross-platform Java

    - by dalyons
    I have a Java application that runs on Mac and Windows, directly off a CD/DVD with no installation. Now, I need to store a file containing per-user data (think favourites etc.) somewhere on the local file system, so that it can be written to. So, where do you think is a good location for this file? I am thinking something like <USER_DOCUMENTS_AND_SETTINGS>/application data/myapp/favourites.db for Windows, and <USER_HOME_DIR>/.myapp/favourites.db for Mac/*nix. Thoughts? And does anyone know the best way to determine these paths within Java?
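
    A minimal sketch of one way to resolve those locations from the os.name and user.home system properties (plain Java SE, no libraries; the "myapp" folder name and the APPDATA fallback are illustrative assumptions, not part of the original question):

    import java.io.File;

    public class UserDataDir {
        // Sketch: pick a writable per-user data directory for "myapp".
        public static File resolve() {
            String os = System.getProperty("os.name").toLowerCase();
            String home = System.getProperty("user.home");
            File dir;
            if (os.contains("win")) {
                // Windows: prefer the APPDATA environment variable when it is set.
                String appData = System.getenv("APPDATA");
                dir = new File(appData != null ? appData : home, "myapp");
            } else if (os.contains("mac")) {
                dir = new File(home, "Library/Application Support/myapp");
            } else {
                // Linux and other Unix-likes: a dot-directory under the home directory.
                dir = new File(home, ".myapp");
            }
            dir.mkdirs(); // create it on first run
            return dir;
        }

        public static void main(String[] args) {
            System.out.println(new File(resolve(), "favourites.db"));
        }
    }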

    Read the article

  • Cross-platform editing for LaTeX documents?

    - by pufferfish
    What solutions are there for working on a LaTeX document on both Windows and Linux? It's a large document, and I will be working daily on both platforms so compatibility is essential if it's two different pieces of software. Bonus points for a solution that includes easy previewing.

    Read the article

  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails are sexy, there are some serious problems with it. Grails' controllers: 1) are implemented awfully. They don't seem to be able to extend from super classes at runtime. I tried this to add base actions and helper methods, and it seems to cause Grails to blow up. 2) are based on an obsolete request-parameters model (rather than form-backing objects, which are much nicer). 3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code. 4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies compared with the basic parameter model. 5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell was it not trivial to do in Grails? 6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code into two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects. Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write, and aren't guaranteed to work when deployed, that I think it discourages fast TDD test cycles. Most things seem to screw up Grails while it's running, like adding a taglib, or anything really. The server restart problem wasn't solved at all. I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks that I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate. This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the Controller layer. Suggestions on what I can do?

    Read the article

  • Would there be a market for this idea (cross platform VM for iPhone OS)

    - by Tzury Bar Yochay
    For a long time I have wondered whether the following idea is worth a nickel or just a waste of time and energy. I am willing to start a project which will provide a kind of VM for all iPxxx apps - so that an app developed once for iPxxx can run on a MacBook, iMac, Linux, Android and Windows (desktop and mobile). You get the idea, right? I want to do to the current iPhone SDK the same thing that Mono did to Microsoft .NET, and perhaps a more complete implementation. I tend to believe that if, overnight, all apps on the App Store became available on the Android Market as well, that would be a mini revolution. Think about running iPad apps on every tablet that will come out to the market in the future. Wouldn't it be fantastic for all the developers, who from now on could write once and sell everywhere? The main question I ask myself repeatedly is: "Is this legal?" - I mean, say I have done this, would Apple's lawyers start sending me all kinds of nasty emails? I am willing to hear your opinion about this idea, as well as whether some of you are willing and able to join forces and start this open source project.

    Read the article

  • Creating many native GUI frontends for a cross-platform application

    - by Hugh Young
    I've been away from GUI programming for quite some time, so please pardon my ignorance. I would like to attempt the following: write a Mac OS X app but still be able to port to Win/Linux (i.e. a C++ core with an Obj-C GUI); avoid Qt/other toolkits on OS X (i.e. talk to Cocoa directly - I feel that many Qt apps I use stick out like sore thumbs compared to the rest of my system); and, not as important, avoid Visual Studio if it means I can have the freedom to use newer C++ features on Windows if they help create better code. I believe this configuration might get me what I'm looking for: core C++ static library; OS X GUI (Cocoa); Windows GUI (Qt+MinGW?) OR (no new C++ features, Visual Studio + Managed C++/C#/????); Linux GUI (Qt). Once again, sorry for my ignorance, but is this possible? Is this sane? Are there any real-world open source examples that accomplish something like this?

    Read the article

  • CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

    - by Wenyu Duan
    In today's era, Cloud Computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver to achieve growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing complete lifecycle services for E&C projects. A CISDI group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info, and Mr. Xin Xiao, Head of CISDI Info's R&D, joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled “E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products”. CISDI group plans to expand through three phases in the construction of its cloud computing platform: first, it will relocate its existing technologies to Oracle systems, along with establishing a private cloud for CISDI; second, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally, it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. “CISDI Cloud” will become the growth engine for the organization to expand its global reach through online services and achieve the strategic objective of being the preferred choice of E&C companies worldwide. The new cloud computing platform is designed to provide access to the shared computing resource pool in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut down deployment and maintenance costs. The CISDI delegation highlighted these points as the key reasons why the group decided on a strategic collaboration with Oracle for building this world-class industrial cloud: Oracle's strategy of Open, Complete and Integrated; Oracle being the only company that can provide engineered systems, with a complete product chain of hardware and software; and Exadata, Exalogic and EM 12c providing a solid foundation for "CISDI Cloud". The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud. The CISDI group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or were looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing and profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat, and I have used the SolrNet C# library to interface to the Java platform. Also, I am not going to talk about Solr indexing configuration - but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, applying an XSLT transform to the file to make it conform to Solr, and using a simple HTTP POST to send it to the search engine for indexing.
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this: As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache allowing us to store regularly accessed information, and the response object is the object on which we will return the result of our search. Inside this method is simply where we are going to inject our logic for our third-party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here. I would highly recommend, however, looking at the SolrNet wiki as they have some great explanations of how the API works. What you will find, however, is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box, the CommerceQueryOperation you will receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contract.Messages.CommerceSearchCriteria, and within it detail the properties you will require to submit as parameters to the Solr search API. My example looks like this: [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")] public class CommerceCatalogSolrSearch : CommerceSearchCriteria { private Dictionary<string, string> _facetQueries;   public CommerceCatalogSolrSearch() { _facetQueries = new Dictionary<String, String>();   }     public Dictionary<String, String> FacetQueries { get { return _facetQueries; } set { _facetQueries = value; } }   public String SearchPhrase { get; set; } public int PageIndex { get; set; } public int PageSize { get; set; } public IEnumerable<String> Facets { get; set; }   public string Sort { get; set; }   public new int FirstItemIndex { get { return (PageIndex-1)*PageSize; } }   public int LastItemIndex { get { return FirstItemIndex + PageSize; } } }  To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria class you have just created and expose the properties you want set.
My message builder looks like this: public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder { private CommerceCatalogSolrSearch _solrSearch;   public CommerceCatalogSolrSearchBuilder() { _solrSearch = new CommerceCatalogSolrSearch(); }   public String SearchPhrase { get { return _solrSearch.SearchPhrase; } set { _solrSearch.SearchPhrase = value; } }   public int PageIndex { get { return _solrSearch.PageIndex; } set { _solrSearch.PageIndex = value; } }   public int PageSize { get { return _solrSearch.PageSize; } set { _solrSearch.PageSize = value; } }   public Dictionary<String,String> FacetQueries { get { return _solrSearch.FacetQueries; } set { _solrSearch.FacetQueries = value; } }   public String[] Facets { get { return _solrSearch.Facets.ToArray(); } set { _solrSearch.Facets = value; } } public override CommerceSearchCriteria ToSearchCriteria() { return _solrSearch; } }  Once you have these two classes in place you can now safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch criteria you have just created, e.g. public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation) { var searchCriteria = operation as CommerceQueryOperation; if (searchCriteria == null) throw new Exception("No search criteria present");   var local = searchCriteria.SearchCriteria as CommerceCatalogSolrSearch; if (local == null) throw new Exception("Unexpected Search Criteria in Operation");   return local; }  Now that you have all of your search parameters present, you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. Translators are another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of the object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method of the interface you will get a single Translate method which has a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate you map the objects using your MetadataDefinitions.xml file.
Once complete, your translator would look something like the following: public class SolrEntityTranslator : IToCommerceEntityTranslator { #region IToCommerceEntityTranslator Members   public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn) { if (source.GetType().Equals(typeof (SearchProduct))) { var searchResult = (SearchProduct) source;   destinationCommerceEntity.Id = searchResult.ProductId; destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title); destinationCommerceEntity.ModelName = "Product";   } } #endregion }  Once you have a translator in place you can then safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this: foreach (SearchProduct result in matchingProducts) { var destinationEntity = new CommerceEntity(_returnModelName);   Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties); response.CommerceEntities.Add(destinationEntity); }  In Solr I actually have two objects being returned - a product, and a collection of facets - so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the Translator helper class separately. When all of this is pieced together you have successfully completed the extensibility point coding. You will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly, add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file. Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third-party search platform and return CommerceEntities for your search results. If you require data to be enriched, or logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search component first). Now, to call your code you simply request it as per any other CommerceQuery operation, but taking into account that you may receive multiple types of CommerceEntity in return: public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount) { var products = new List<Product>(); var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();   query.SearchCriteria.PageIndex = recordIndex; query.SearchCriteria.PageSize = recordsPerPage; query.SearchCriteria.SearchPhrase = searchPhrase; query.SearchCriteria.FacetQueries = facetQueries;     totalItemCount = 0; CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest()); var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;   // No results. Return the empty list
if (queryResponse != null && queryResponse.CommerceEntities.Count == 0) return new KeyValuePair<FacetCollection, List<Product>>();   totalItemCount = (int)queryResponse.TotalItemCount;   // Prepare a multi-operation to retrieve the product variants var multiOperation = new CommerceMultiOperation();     // Add products to results foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product")) { var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition); productQuery.SearchCriteria.Model.Id = product.Id; productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;   var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);   productQuery.RelatedOperations.Add(variantQuery);   multiOperation.Add(productQuery); }   CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest()); foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses) { if (queryOpResponse.CommerceEntities.Count() > 0) products.Add(queryOpResponse.CommerceEntities[0]); }   // Get facet collection FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();     return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products); }    ...And that is it - simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third-party search platform, whilst still maintaining a unified API in the remainder of your code. This logic stands for any extensibility within Commerce Server which requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!

    Read the article

  • A Primer on Migrating Oracle Applications to a New Platform

    - by Nick Quarmby
    In Support we field a lot of questions about the migration of Oracle Applications to different platforms. This article describes the techniques available for migrating an Oracle Applications environment to a new platform and discusses some of the common questions that arise during migration. This subject has been discussed frequently in previous blog articles, but there still seems to be a gap regarding the type of questions we are frequently asked in Service Requests. Some of the questions we see are quite abstract: customers simply want to get a grip on how to approach a migration. Others want to know if a particular architecture is viable. Other customers ask about mixing different platforms within a single Oracle Applications environment. Just to clarify, throughout this article the term 'platform' refers specifically to operating systems and not to the underlying hardware. For a clear definition of 'platform' in the context of Oracle Applications Support, see Terri's very timely article: Oracle E-Business Suite Platform Smörgåsbord. The migration process is very similar for both 11i and R12, so this article only mentions specific differences where relevant.

    Read the article

  • Cross-Platform Mobile Development With Mono for Android and MonoTouch

    - by Wallym
    Many years ago, in fact pre-Java, I remember a hallway discussion about the desire to write a single application that could easily run across various platforms. At the time, we were only worried about writing applications on Windows 3.1 and Mac OS 7.x. There were many discussions about windows, user interface concepts, and specifically a rather long discussion as to whether Mac users would accept a Mac application that didn't have balloon help. Thankfully, the marketplace answered this question for us, with the Windows API winning the battle. A similar set of questions is currently going on in the mobile world. Unfortunately, at this point in time there is no winning API and none currently in sight. What's a developer to do? Here are some questions that developers have (and there are many more): How can mobile developers target Android and the iPhone with the same code? How can .NET developers share their code across Android, iPhone and other platforms? How can developers give applications the look and feel of the specific platform and still allow as much code as possible to be shared? Mobile devices share many common features, such as cameras, accelerometers, and address books. How can we take advantage of them in a platform-independent way and still give the users the look of every other application running on their platform? In this article, we'll look at some of the solutions available to developers for these cross-platform and code-sharing questions across Mono for Android, MonoTouch and the .NET Framework.

    Read the article

  • Parallel Computing Platform Developer Lab

    - by Daniel Moth
    This is an exciting announcement that I must share: "Microsoft Developer & Platform Evangelism, in collaboration with the Microsoft Parallel Computing Platform product team, is hosting a developer lab at the Platform Adoption Center on April 12-15, 2010.  This event is for Microsoft Partners and Customers seeking to incorporate either .NET Framework 4 or Visual C++ 2010 parallelism features into their new or existing applications, and to gain expertise with new Visual Studio 2010 tools including the Parallel Tasks and Parallel Stacks debugger toolwindows, and the Concurrency Visualizer in the profiler. Opportunities for attendees include: Gain expert design assistance with your Parallel Computing Platform based solution. Develop a solution prototype in collaboration with Microsoft Software Engineers. Attend topical presentations and “chalk-talk” sessions. Your team will be assigned private, secure offices for confidential collaboration activities. The event has limited capacity, thus enrollment is based on an application process.   Please download and complete the application form then return it to the event management team per instructions included within the form.  Applications will be evaluated based upon the technical solution scenario along with indicated project readiness timelines.  Microsoft event management team members may contact you directly for additional clarification and discussion of your project scenario during the nomination process." Comments about this post welcome at the original blog.

    Read the article

  • Cross-Platform Migration using Heterogeneous Data Guard

    - by Roy F. Swonger
    Most people think of Data Guard as a disaster recovery solution, and it certainly excels in that role. However, did you know that you can also use Data Guard for platform migration under some conditions? While you would normally have your primary and standby Data Guard systems running on the same OS and hardware platform, there are some heterogeneous combinations of primary and standby systems that are supported by Data Guard Physical Standby. One example of heterogeneous Data Guard support is the ability to go between Linux and Windows on many processor architectures. Another is the support for environments that are running HP-UX on both PA-RISC and Itanium hardware. Brand new in 11.2.0.2 is the ability to have both SPARC Solaris and IBM AIX on Power Systems in the same Data Guard environment. See My Oracle Support note 413484.1 for all the details about supported platform combinations. So, why mention this in an upgrade blog? Simple: much of the time required for a platform migration is usually spent copying files from one system to another. If you are moving between systems that are supported by heterogeneous Data Guard, then you can reduce that migration downtime to a matter of minutes. This can be a big win when downtime is at a premium (and isn't downtime always at a premium?). In addition, you get the benefit of being able to keep the old and new environments synchronized until you are sure the migration is successful! A great case study of using Data Guard for a technology refresh is located on this OTN page. The case study showing CERN's methodology isn't highlighted as a link on the overview page, but it is clickable. As always, make sure you are fully versed on the details and restrictions by reading the available documentation and MOS notes. Happy migrating!

    Read the article

  • Creating an expandable, cross-platform compatible program "core".

    - by Thomas Clayson
    Hi there. Basically the brief is relatively simple. We need to create a program core. An engine that will power all sorts of programs with a large number of distinct potential applications and deployments. The core will be an analytics and algorithmic processor which will essentially take user-specific input and output scenarios based on the information it gets, whilst recording this information for reporting. It needs to be cross platform compatible. Something that can have platform specific layers put on top which can interface with the core. It also needs to be able to be expandable, for instance, modular with developers being able to write "add-ons" or "extensions" which can alter the function of the end program and can use the core to its full extent. (For instance, a good example of what I'm looking to create is a browser. It has its main core, the web-kit engine, for instance, and then on top of this is has a platform-specific GUI and can also have add-ons and extensions which can change the behavior of the program.) Our problem is that the extensions need to interface directly with the main core and expand/alter that functionality rather than the platform specific "layer". So, given that I have no experience in this whatsoever (I have a PHP background and recently objective-c), where should I start, and is there any knowledge/wisdom you can impart on me please? Thanks for all the help and advice you can give me. :) If you need any more explanation just ask. At the moment its in the very early stages of development, so we're just researching all possible routes of development. Thanks a lot

    Read the article

  • WebLogic 12c and Mobile Platform sales kits

    - by JuergenKress
    At our WebLogic Community Workspace (WebLogic Community membership required) you can find the latest sales plays to update your sales team. Kits include an overview presentation to train your sales teams, cheat sheets for your pocket and customer PPT presentations: WebLogic 12c FY15 sales resources FY15 CAF Sales Opportunities Webcast - PPT WebLogic Platform-as-a-Service | Customer Presentation Cheat Sheet WebLogic Coherence Best for Oracle Database - Customer Presentation | Cheat Sheet Capture New Java Projects - Customer Presentation | Cheat Sheet Upsell EM for WebLogic - Customer Presentation | Cheat Sheet WebLogic for ODA - Customer Presentation | Cheat Sheet Mobile Platform 12c FY15 sales resources FY15 Oracle Mobile Platform Sales Opportunities Webcast | PPT Oracle Mobile Strategy (CVC Deck) - Customer Presentation Develop New Mobile Apps - Customer Presentation | Cheat Sheet Mobilize Enterprise Apps - Customer Presentation | Cheat Sheet Mobile Security - Customer Presentation | Cheat Sheet Download: FY15 Mobile Sales Play Content - ZIP (61Mb) Please use these documents in the spirit of our joint partnership. Please do NOT publish any WebLogic 12.1.3 and Mobile Platform details before general availability. For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Collision detection - player gets stuck in platform when jumping

    - by Sun
    So I'm having some problems with my collision detection in my platformer. Take the image below as an example. When I'm running right I am unable to go through the platform, but when I hold my right key and jump, I end up going through the object as shown in the image. Below is the code I'm using:
    if (shapePlatform.intersects(player.getCollisionShape())) {
        Vector2f vectorSide = new Vector2f(
            shapePlatform.getCenter()[0] - player.getCollisionShape().getCenter()[0],
            shapePlatform.getCenter()[1] - player.getCollisionShape().getCenter()[1]);
        player.setVerticleSpeed(0f);
        player.setJumping(false);
        if (vectorSide.x > 0 && !(vectorSide.y > 0)) {
            player.getPosition().set(player.getPosition().x - 3, player.getPosition().y);
        } else if (vectorSide.y > 0) {
            player.getPosition().set(player.getPosition().x, player.getPosition().y);
        } else if (vectorSide.x < 0 && !(vectorSide.y > 0)) {
            player.getPosition().set(player.getPosition().x + 3, player.getPosition().y);
        }
    }
    I'm basically getting the difference between the centre of the player and the centre of the colliding platform to determine which side the player is colliding with. When my player jumps and walks right on the platform he goes right through. The same can also be observed when I jump on the actual platform. Should I be resetting the player's y in this situation?
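
    One way to approach it (a hedged sketch, not a drop-in fix): resolve the collision on the axis of least penetration, and only stop vertical movement when the player actually lands on top of the platform. This assumes the same Slick2D-style objects as the code above - getCollisionShape() returns a Shape exposing getX()/getMaxX()/getCenterX() and friends, and getPosition() returns a mutable Vector2f:
    Shape playerShape = player.getCollisionShape();
    if (shapePlatform.intersects(playerShape)) {
        // Penetration depth on each axis.
        float overlapX = Math.min(playerShape.getMaxX(), shapePlatform.getMaxX())
                       - Math.max(playerShape.getX(), shapePlatform.getX());
        float overlapY = Math.min(playerShape.getMaxY(), shapePlatform.getMaxY())
                       - Math.max(playerShape.getY(), shapePlatform.getY());
        if (overlapY < overlapX) {
            // Vertical collision: push out along y only.
            if (playerShape.getCenterY() < shapePlatform.getCenterY()) {
                player.getPosition().y -= overlapY;   // snap onto the top of the platform
                player.setVerticleSpeed(0f);
                player.setJumping(false);
            } else {
                player.getPosition().y += overlapY;   // bumped the underside
            }
        } else {
            // Horizontal collision: push out along x and leave the vertical speed alone.
            player.getPosition().x += (playerShape.getCenterX() < shapePlatform.getCenterX())
                    ? -overlapX : overlapX;
        }
    }
    Snapping the y position back to the platform's top edge, rather than cancelling vertical speed on every intersection, is what prevents the jump-plus-move case from tunnelling through.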

    Read the article

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers providing EJB services. Some of the portlets use our own home-grown MVC-pattern and some are written in JSF. Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJB's on the back-end servers. It was written in NetBeans Platform and uses the WebSphere application client library to establish communication with the EJB's. The really painful bit was getting the client to use secure JAAS/SSL communications. But, after that was resolved, we've found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to: Enormous performance advantage (CORBA vs. HTTP, cut out the Portal Server middle man) Development is simplified and faster due to use of NetBeans' visual designer and Swing's generally robust architecture The debug cycle is shortened by not having to deploy your client application to a test server No mishmash of technologies as with web-based development (Struts, JSF, JQuery, HTML, JSTL etc., etc.) After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: Rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, then you'd be crazy not to consider NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

    Read the article

  • Roger Jennings’ Cloud Computing with the Windows Azure Platform

    - by guybarrette
    Writing and publishing a book about a technology early in its infancy is cruel. You're subjected to many product changes, and your book might be outdated the day it reaches the book stores. I bought Roger Jennings' “Cloud Computing with the Windows Azure Platform” book knowing that it was published in October 2009 and that many changes occurred to the Azure platform in 2009. Right off the bat and from a technology point of view, some chapters are now outdated, but don't reject this book because of that. In the first few chapters, Jennings does a great job of explaining Cloud Computing and the Azure platform from a business point of view, something that few Azure articles and blogs do right now. You may want to wait for the second edition and read Jennings' outstanding Azure-focused blog in the meantime.

    Read the article

  • Mike Neuenschwander on the Identity Platform

    - by Naresh Persaud
    If you are in London on March 22nd, check out the Identity Platform Event. Mike is deeply passionate about the platform. I caught up with Mike recently for an interview to discuss his perspective on the Oracle Identity Platform. Identity Management is not a department-level initiative. To unlock the business potential of Identity Management, we have to think organizationally and holistically. To learn more about how to take a strategic approach to Identity Management, visit one of our physical events globally. Here are some of the listings and registrations worldwide: North America, Asia Pacific, Europe.

    Read the article

  • Parallel Computing Platform Developer Lab

    This is an exciting announcement that I must share: "Microsoft Developer & Platform Evangelism, in collaboration with the Microsoft Parallel Computing Platform product team, is hosting a developer lab at the Platform Adoption Center on April 12-15, 2010.  This event is for Microsoft Partners and Customers seeking to incorporate either .NET Framework 4 or Visual C++ 2010 parallelism features into their new or existing applications, and to gain expertise with new Visual Studio 2010 tools...

    Read the article

  • Changing Platform

    - by Liam McLennan
    From time to time a developer makes a break from their platform of choice (.NET, Java, VB, Access, COBOL) and moves to perceived greener pastures. Zed Shaw did it, jumping from Ruby to Python, and Mike Gunderloy went from .NET to Rails. But it can be difficult to change platform. My clients don’t come to me looking for a software developer; they come looking for a .NET developer. This is a tragic side effect of big software companies’ marketing. If your village is under attack by bandits, would you turn away the first seven samurai who offered to help because you didn’t like their swords? What matters is how effectively they can defend your village. You should not tell your carpenter what sort of hammer to use, and you should not tell your software developer what platform to use.

    Read the article

  • Creuna Platform

    - by csharp-source.net
    Creuna Platform is an open source web application framework based on Microsoft .NET and fully written in C#. The aim of Creuna Platform is to make life easier for system developers by providing a highly competent component toolkit that increases the productivity and quality of a system. The framework contains components for data access, configuration handling, messaging and a broad range of utility classes, controls and services. The framework also has several components for the EPiServer CMS. Creuna Platform is licensed under the GNU Affero General Public License, version 3.

    Read the article

  • The Business Case for a Platform Approach

    - by Naresh Persaud
    Most customers have assembled a collection of Identity Management products over time as they have reacted to industry regulations, compliance mandates and security threats, typically selecting best-of-breed products. The resulting infrastructure is a patchwork of systems that has served short-term IDM goals, but is overly complex, hard to manage and cannot scale to meet the needs of the future social/mobile enterprise. The solution is to rethink Identity Management as a platform rather than individual products. Aberdeen Research has shown that taking a vendor-integrated platform approach to Identity Management can reduce cost, make your IT organization more responsive to the needs of a changing business environment, and reduce audit deficiencies. View the slide show below to see how companies like Agilent, Cisco, ING Bank and Toyota have all built the business case and embraced the Oracle Identity Management Platform approach.

    Read the article

  • Announcing a new Free Windows Azure Platform Trial offer

    - by Eric Nelson
    We now have a  truly useful Windows Azure Platform trial. Which makes me very happy as I was a vocal critic of the original trial offer. Simply put, the small number of compute hours it included made it useless for many potential early adopters. This is now fixed. The new Introductory Special now includes a generous 750 hours of compute – enough to run a web role 24/7. Enjoy! Related Links Full announcement If you are an ISV then there is a better offer for you via Microsoft Platform Ready and Cloud Essentials and keep an eye on our events for ISVs as we will be doing Windows Azure Platform technical briefings starting March 31st.

    Read the article
