Search Results

Search found 1281 results on 52 pages for 'joes 2 pros'.

Page 10/52

  • Oracle thin driver vs. OCI driver. Pros and Cons?

    - by Zwei Steinen
    Hi, when you develop a Java application that talks to Oracle DBs, there are two options, right? One is the Oracle thin driver, and the other is the OCI driver, which requires its own client installation (please correct me if I'm misunderstanding). Now, what are the pros and cons? Obviously the thin driver sounds much better in terms of installation, but is there anything OCI can do that the thin driver can't? The development environment is Tomcat 6 + Spring 3.0 + JPA (Hibernate) + Apache DBCP. Thanks in advance.
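    For reference (a rough sketch only; the host, port and service/alias names below are placeholders), the two drivers differ mainly in their connection URLs and prerequisites:

        jdbc:oracle:thin:@//dbhost:1521/myservice    (pure Java, no Oracle client install needed)
        jdbc:oracle:oci:@mytnsalias                  (requires a local Oracle client installation and TNS configuration)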

    Read the article

  • Linux Lightweight Distro and X Windows for Development

    - by Fernando Barrocal
    Hey all... I want to build a lightweight Linux configuration to use for development. The first idea is to use it inside a virtual machine under Windows, or on old laptops with 1 GB RAM tops. Maybe even a distributable environment for developers. So the whole idea is to use a LAMP server, a Java application server (Tomcat or Jetty) and X Windows (any window manager, from FVWM to Enlightenment), Eclipse, maybe jEdit and of course Firefox. Edit: I am changing this post to compile a possible list of distros and window managers that can be used to configure a really lightweight development environment. I am using personal experiences on this matter as a base. Info about the distros can easily be found on their sites, so please focus on personal use of those systems. Distros Ubuntu / Xubuntu Pros: Personal experience on old systems or low-RAM environments - @Schroeder, @SCdF Several suggestions based on personal knowledge - @Kyle, @Peter Hoffmann Gentoo Pros: Not targeted at desktop users - @paan Doesn't come with a huge amount of applications - @paan Slackware Pros: Suggested as giving the best performance with a careful install/configuration - @Ryan Damn Small Linux Pros: Main focus is the lightweight factor - 50 MB LiveCD - @Ryan Debian Pros: Very versatile, can be configured for both heavy and lightweight computers - @Ryan APT as package manager - @Kyle Based on compatibility and usability - @Kyle -- Feel free to add pros and cons to this, so we can compile a good reference. -- X Windows suggestions keep coming back to XFCE. If there are others to add here, open a section for it like the distro one :)

    Read the article

  • When does implementing MVVM not make sense

    - by Kelly Sommers
    I am a big fan of various patterns and enjoy learning new ones all the time; however, I think with all the evangelism around popular patterns and anti-patterns, sometimes this causes blind adoption. I think most things have individual pros and cons and it's important to spell out what the cons are and when it doesn't make sense to make a particular choice. The pros are constantly advocated. "It depends" I think applies most times, but the industry does a poor job of communicating what it depends ON. Also, many patterns surfaced by inheriting values from previous patterns or have derivatives, each of which brings another set of pros and cons to the table. The sooner we are aware of the trade-offs of the decisions we make in software architecture, the sooner we make better decisions. This is my first challenge to the community. Even if you are a big fan of said pattern, I challenge you to discover the cons and when you shouldn't use it. Define when MVVM (Model-View-ViewModel) may not make sense in a particular piece of software and based on what reasons. MVVM has a set of pros and cons. Let's try to define them. GO! :)
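    To ground the discussion, here is a minimal, hypothetical C# sketch of the baseline ceremony MVVM asks for, before commands, validation or view bindings are even added; for a trivial, throwaway screen this overhead alone can be the con that outweighs the pros:

        using System.ComponentModel;

        // Hypothetical ViewModel: one bindable property already needs a backing
        // field, change notification and an event-raising helper.
        public class PersonViewModel : INotifyPropertyChanged
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    OnPropertyChanged("Name");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }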

    Read the article

  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams managed certain areas. For example, there would be a small team for every one of these functions: UI Framework code Business/application logic Database I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum) and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better. Devs Do It All Pros 1. Developers may be more well-rounded 2. Developers know more of the system Cons 1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in that area 2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none) Devs Specialize Pros 1. Developers can create policies and procedures for their area of expertise and more easily enforce them 2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be 3. Other developers don't cross boundaries and degrade another area Cons 1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.) It's easy to say how wonderful agile is, and that we should do it all, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, hacked-together UI code, etc. Let's face it, some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.

    Read the article

  • SQLAuthority News – 5th Anniversary Giveaways

    - by pinaldave
    Please read my 5th Anniversary post and my quick note on the history of the database. I am sure that we all have friends and we value friendship more than anything. In fact, the complete model of Facebook is built on friends. If you have lots of friends, you must be a lucky person. Having a lot of friends is indeed a good thing. I consider all you blog readers my friends, so now I want to do something for you. What is it? Well, send me details about how many of your friends like my page and you will have a chance to win lots of learning materials for yourself and your friends. Here are the exciting prizes awaiting the lucky winner: Combo set of 5 Joes 2 Pros Books – 1 for YOU and 1 for a Friend. This is a gift worth USD 444 (each set USD 222). It contains all five Joes 2 Pros books (Vol1, Vol2, Vol3, Vol4, Vol5) + 1 Learning DVD. [Amazon] | [Flipkart] If you submitted an entry but didn’t win the combo set of 5 Joes 2 Pros books, you could still win my SQL Server Wait Stats book as a consolation prize! I will pick the next 5 participants who have the highest number of friends who “liked” the Facebook page, http://facebook.com/SQLAuth. Instead of sending one copy, I will send you 2 copies so you can share one copy with a friend of yours. Well, it is important to share our learning and love with friends, isn’t it? Note: Just take a screenshot of http://facebook.com/SQLAuth using the Print Screen function and send it by Nov 7th to pinal ‘at’ sqlauthority.com. There are no special freebies for early birds, so take your time and see if you can increase your friends’ like count by Nov 7th. Guess – What is in it? It is quite possible you are not a Facebook or Twitter user. In that case you can still win a surprise from me. You have 2 days to guess what is in this box. If you guess correctly and you are one of the first 5 people with the correct answer – you will get what is in this box for free. Please note that you have only 48 hours to guess. Please give me your guess by commenting on this blog post. Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Milestone, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Architecture driven by users, or by actions/content?

    - by hugerth
    I have a question about designing an MVC app architecture. Let's say our application has three main categories of views (items of type 1, items of type 2...). And we have three (or more in the future) types of users - Admins, let's say Moderators, and typical Users. And in the future there might be more of them. Admins have full access to the app, Moderators can visit only two of the three types of items, and Users can visit only the basic type of items. Should I divide my controllers/views/whatever like this: Items "A", Items "B", Items "C", then make them 100% finished and at the end add access privileges? Pros: DRY option Cons: Conditional expressions in views Or another option: Items "A" / Admin, Items "A" / Moderator, Items "B" / Admin ...? Pros: Divided parts of the application for each specific user (is that a pro?) Cons: A lot of repeated code I don't have great experience in planning such things, so it would be nice if you could give me some tips or links to learn something about it.
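    One common middle ground (a hypothetical ASP.NET MVC-style sketch; the role names and actions are placeholders) is to keep a single set of controllers and views per item type and push the access rules into authorization attributes, so the views stay mostly free of role conditionals:

        using System.Web.Mvc;

        // Single controller per item type; access is declared per action.
        [Authorize(Roles = "Admin,Moderator,User")]
        public class ItemsAController : Controller
        {
            public ActionResult Index()
            {
                return View();                 // same view for every role
            }

            [Authorize(Roles = "Admin,Moderator")]
            public ActionResult Review()
            {
                return View();                 // only moderating roles reach this action
            }

            [Authorize(Roles = "Admin")]
            public ActionResult Configure()
            {
                return View();
            }
        }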

    Read the article

  • What is a good platform for building a game framework targeting both web and native languages?

    - by fuzzyTew
    I would like to develop (or find, if one is already in development) a framework with support for accelerated graphics and sound built on a system flexible enough to compile to the following: native ppc/x86/x86_64/arm binaries or a language which compiles to them javascript actionscript bytecode or a language which compiles to it (actionscript 3, haxe) optionally java I imagine, for example, creating an API where I can open windows and make OpenGL-like calls and the framework maps this in a relatively efficient manner to either WebGL with a canvas object, 3d graphics in Flash, OpenGL ES 2 with EGL, or desktop OpenGL in an X11, Windows, or Cocoa window. I have so far looked into these avenues: Building the game library in haXe Pros: Targets exist for php, javascript, actionscript bytecode, c++ High level, object oriented language Cons: No support for finally{} blocks or destructors, making resource cleanup difficult C++ target does not allow room for producing highly optimized libraries -- the foreign function interface requires all primitive types be boxed in a wrapper object, as if writing bindings for a scripting language; these feel unideal for real-time graphics and audio, especially exporting low-level functions. Doesn't seem quite yet mature Using the C preprocessor to create a translator, writing programs entirely with macros Pros: CPP is widespread and simple to use Cons: This is an arduous task and probably the wrong tool for the job CPP implementations differ widely in support for features (e.g. xcode cpp has no variadic macros despite claiming C99 compliance) There is little-to-no room for optimization in this route Using llvm's support for multiple backends to target c/c++ to web languages Pros: Can code in c/c++ LLVM is a very mature highly optimizing compiler performing e.g. global inlining Targets exist for actionscript (alchemy) and javascript (emscripten) Cons: Actionscript target is closed source, unmaintained, and buggy. Javascript targets do not use features of HTML5 for appropriate optimization (e.g. linear memory with typed arrays) and are immature An LLVM target must convert from low-level bytecode, so high-level constructs are lost and bloated unreadable code is created from translating individual instructions, which may be more difficult for an unprepared JIT to optimize. "jump" instructions cause problems for languages with no "goto" statements. Using libclang to write a translator from C/C++ to web languages Pros: A beautiful parsing library providing easy access to the code structure Can code in C/C++ Has sponsored developer effort from Apple Cons: Incomplete; current feature set targets IDEs. Basic operators are unexposed and must be manually parsed from the returned AST element to be identified. Translating code prior to compilation may forgo optimizations assumed in c/c++ such as inlining. Creating new code generators for clang to translate into web languages Pros: Can code in C/C++ as libclang Cons: There is no API; code structure is unstable A much larger job than using libclang; the innards of clang are complex Building the game library in Common Lisp Pros: Flexible, ancient, well-developed language Extensive introspection should ease writing translators Translators exist for at least javascript Cons: Unfamiliar language No standardized library functions, widely varying implementations Which of these avenues should I pursue? Do you know of any others, or any systems that might be useful? Does a general project like this exist somewhere already? 
Thank you for any input.

    Read the article

  • stop background of iphone webapp from responding to swipes

    - by JoeS
    I'm making a mobile version of my website, and trying to make it feel as native as possible on the iphone. However, any background areas of the page respond to swiping gestures such that you can shift the page partway off the screen. Specifically, if the user touches and swipes left, for example, the content shifts off the edge of the screen and one can see a gray background 'behind' the page. How can this be prevented? I'd like to have the page itself be 320x480 with scroll-swiping disabled (except on list elements that I choose). I have added the following meta tags to the top of the page: <meta name="viewport" content="width=320; height=480; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;"/> <meta name="apple-mobile-web-app-capable" content="yes" /> <meta name="apple-mobile-web-app-status-bar-style" content="black" /> I've also tried the following as the event handler for the touchstart, touchmove, and touchend events of the body element: function cancelTouchEvent(e) { if (!e) var e = window.event; e.cancelBubble = true; if (e.stopPropagation) e.stopPropagation(); if (e.preventDefault) e.preventDefault(); return false; } It doesn't stop the swiping behavior, but does prevent clicks on all links on the page... Any help is much appreciated. Thanks!

    Read the article

  • .Net 4.0 Memory-Mapped Files versus RDBMS Storage

    - by Harry
    I'm interested in people's thoughts on storing data in a traditional SQL-based database versus utilising a memory-mapped file such as the one in the new .Net 4.0 runtime. The data in question would be arrays of simple structures. Obvious pros and cons: SQL Database Pros Ad-hoc query support SQL management tools Schema changes (adding more columns and setting default values) Memory-Mapped Pros Lighter overhead? (this is an assumption on my part) Shareable between processes/threads Any others? Is it worth it for the performance gains?
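    For anyone weighing the options, here is a minimal sketch of the .Net 4.0 memory-mapped file API (the file name, map name and struct are made up for illustration); a fixed-size array of simple structs is written and read back through a view accessor:

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct Sample                 // a simple blittable structure, as in the question
        {
            public int Id;
            public double Value;
        }

        class Program
        {
            static void Main()
            {
                const int count = 1000;
                long capacity = count * Marshal.SizeOf(typeof(Sample));

                // Backing the map with a file keeps the data across process restarts;
                // the map name lets other processes open the same region.
                using (var mmf = MemoryMappedFile.CreateFromFile(
                           "samples.dat", FileMode.Create, "SampleMap", capacity))
                using (var accessor = mmf.CreateViewAccessor())
                {
                    var s = new Sample { Id = 42, Value = 3.14 };
                    accessor.Write(0, ref s);           // struct written at byte offset 0

                    Sample readBack;
                    accessor.Read(0, out readBack);     // and read straight back
                    Console.WriteLine(readBack.Value);
                }
            }
        }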

    Read the article

  • In ASP.NET MVC (3.0/Razor), do you prefer multiple views, or conditionals within views? Why?

    - by Chad
    For my new web app, I'm debating between using multiple views or conditionals within views. An example scenario would be showing different info to users who are authenticated vs non-authenticated. This could be handled a couple of ways. In the controller, check IsAuthenticated and return a view based on that In the view, check IsAuthenticated and show blocks of info based on that Pros of multiple views: Smaller, less complicated views - next to no logic in the view Pros of single views: fewer view files to maintain The obvious cons are the opposites of the pros: more files to maintain or more complicated view files. Which do you prefer? Why? Any pros/cons I haven't outlined here? Update: Assume each view uses a layout page and partial views to abstract the obviously repetitive code.
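    As a rough sketch (the action and view names here are invented), the multiple-view option pushes the decision into the controller so that each Razor view stays free of logic; the single-view option would instead wrap the corresponding blocks in an @if (Request.IsAuthenticated) check inside one view file:

        public ActionResult Index()
        {
            // One action, two dedicated views: no IsAuthenticated checks in the markup.
            return Request.IsAuthenticated
                ? View("IndexAuthenticated")
                : View("IndexAnonymous");
        }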

    Read the article

  • SQLAuthority News – Book Signing Event – SQLPASS 2011 Event Log

    - by pinaldave
    I have been dreaming of writing a book for a really long time, and I finally got the chance – in fact, two chances!  I recently wrote two books: SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008 [Amazon] | [Flipkart] | [Kindle] and SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues [Amazon] | [Flipkart] | [Kindle].  I had a lot of fun writing these two books, even though sometimes I had to sacrifice some family time and time for other personal development to write them. The good side of writing a book is when the effort put into it is recognized by readers and by kind organizations like expressor studio. Book Signing Event Book writing is a complex process.  Even after you spend months, maybe years, writing the material you still have to go through the editing and fact checking processes.  And, once the book is out there, there is no way to take back all the copies to change mistakes or add something you forgot.  Most of the time it is a one-way street. Book Signing Event Just like every author, I had a dream that after the books were written, they would be loved by people and gain acceptance by an audience. My first book, SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008, is extremely popular because it helps lots of people learn various fundamental topics. My second book covers beginning to learn SQL Server Wait Stats, which is a relatively new subject. This book has had very good acceptance in the community. Book Signing Event Helping my community is my primary focus, so I was happy to see this year’s SQLPASS tag line: ‘This is a Community.‘ At the event, the expressor studio guys came up with a very novel idea. They had previously used my books and had found them very useful. They got 100 copies of the book and decided to give them away to community folks. They invited me and my co-author Rick Morelan to hold a book signing event. We did a book signing on Thursday between 1 pm and 2 pm. Book Signing Event This event was one of the best events for me. This was my first book signing event outside of India. I reached the book signing location around 20 minutes before the scheduled time and what I saw was a big line for the book signing event. I felt very honored looking at the crowd and all the people around the event location. I felt very humbled when I saw some of my very close friends standing in the line to get my signature. It was really heartwarming to see so many enthusiasts waiting for more than an hour to get my signature. While standing in line I had the chance to have a conversation with every single person who showed up for the signature. I made sure that I repeated every single name and wrote it in every book with my signature. There is a saying that if we write a name once we will remember it forever. I want to remember all of you who saw me at the book signing. Your comments were wonderful, your feedback was amazing and you were all very supportive. Book Signing Event I have made a note of every conversation I had with all of you when I was signing the books. Once again, I just want to express my thanks for coming to my book signing event. The whole experience was very humbling. On top of it, I want to thank the expressor studio people who made it possible and who organized the whole signing event. I am so thankful to them for facilitating the whole experience, which is going to be hard to beat by any future experience.
My books Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Microsoft BI Conference 2011 in Lisbon

    - by AlbertoFerrari
    Anyone interested in BI from Portugal or Spain should not miss the Microsoft BI Conference 2011 in Lisbon: one full day (March 25, 2011) with three tracks on Business Intelligence: Decision Makers, BI pros, Intro to BI. I am going to present two sessions on PowerPivot: one is a nice deep dive into DAX for BI pros, the other is about self-service BI for decision makers. Titles and the complete agenda will be published in the coming days, but I suggest saving the date. The full event is free and it...(read more)

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
    Currently, the Azure PaaS does not offer a distributed\resilient task scheduling service. If you do want to host a task scheduling product\solution off-premise (and ideally use Azure), what are your options? PaaS Option 1: Worker Roles Use a worker role to schedule and execute actions at specific time periods. There are a few frameworks available to assist with this: http://azuretoolkit.codeplex.com https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It’s a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily). I found the Azure Toolkit option the simplest to implement. Step 1: Create a domain entity implementing IJob for each job to schedule. In this sample, I asynchronously call a WCF service method.

        namespace Acme.WorkerRole.Jobs
        {
            using AzureToolkit;
            using ScheduledTasksService;

            public class UploadEmployeesJob : IJob
            {
                public void Run()
                {
                    // Call Tasks Service
                    var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
                    client.UploadEmployees();
                    client.Close();
                }
            }
        }

    Step 2: In the worker role Run method, add the jobs to the toolkit engine.

        namespace Acme.WorkerRole
        {
            using AzureToolkit.Engine;
            using Jobs;

            public class WorkerRole : WorkerRoleEntryPoint
            {
                public override void Run()
                {
                    var engine = new CloudEngine();

                    // Add scheduled jobs (using CronJob syntax - see http://www.adminschoice.com/crontab-quick-reference).

                    // 1. Upload Employees job - 8.00 PM every weekday (Mon-Fri)
                    engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; });
                    // 2. Purge Data job - 10 AM every Saturday
                    engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; });
                    // 3. Process Exceptions job - Every 5 minutes
                    engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; });

                    engine.Run();
                    base.Run();
                }
            }
        }

    Pros: The Azure Toolkit option is simple to implement. Cons: For the AzureToolkit option, you are limited to a single worker role; otherwise, the jobs will be executed multiple times, once for each worker role instance. You are also paying for a continuously running worker role, even if it just processes a single job once a week. If you only have a few scheduled tasks to run, calling asynchronous services hosted in different web roles, an extra small worker role is likely to be sufficient; however, even an extra small worker role still costs $14.40/month (03/09/2012). Option 2: Use a Scheduled Task on an Azure Web Role calling a console app Set up a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here: http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/ http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/ http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx Pros: Fairly easy to implement. Cons: Supportability - I RDC’ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started.
    I also tried deleting the task and rebooting, and the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern. Scalability - multiple instances would trigger multiple tasks. You can only have one instance for the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance, but if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: If we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console app and task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it). SaaS Option 3: Azure Marketplace I thought that someone might be offering this type of service via the Azure Marketplace. At the point of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/ Pros: none. Cons: Nobody currently offers this on the Azure Marketplace. Option 4: Online Job Scheduling Service Provider There are plenty of online providers that offer this type of service on a pay-as-you-go approach. Some of these are free for small usage. Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron Pros: No bespoke development for the scheduler. Cons: Reliance on a third party. IaaS Option 5: Set up Scheduling Software on Azure IaaS VMs One of the job scheduling software offerings could be installed and configured on Azure VMs. A list of software options is available here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software Pros: An enterprise distributed\resilient task scheduling service. Cons: VM setup and maintenance; software licence costs. Option 6: VM Gallery At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options. Pros: none. Cons: No current VM template. Summary For my current project, which had a small handful of tasks to schedule and a limited project budget, I chose option 1 (a worker role using the Azure Toolkit to schedule tasks). If I were building an enterprise-scale solution for the future, options 4 and 5 are currently worthy of consideration. Hopefully, Microsoft will include task scheduling in the future as part of their PaaS offerings.

    Read the article

  • Need to find a fast/multi-user database program

    - by user65961
    Our company is currently utilizing Excel and has been encountering a series of issues; for starters, we have multiple users sharing this application. We utilize it to write our schedules for our employees and generate staffing levels. Could someone please tell me the pros and cons of this program, and offer suggestions for another database that allows multiple users to share it (along with its pros and cons)? We need something that will hold massive amounts of data and allow sharing and protection capabilities.

    Read the article

  • PowerShell the SQL Server Way

    Although Windows PowerShell has been available to IT professionals going on seven years, there are still many IT pros who are just now deciding to see what the fuss is all about. Depending on your job, you might find PowerShell an invaluable tool. Microsoft's plan is that PowerShell will be the management tool for all of its servers and platforms. For most IT pros, it's not a matter of if you'll be using PowerShell, only a matter of when.

    Read the article

  • MySQL: Functional Partitioning

    This article covers common methods of functional partitioning and common considerations for database setup and capacity. Company DBAs, database developers, engineers and architects should weigh the pros and cons of any method of sharding or partitioning, since compromises will have to be made given the pros and cons of a given system setup.
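    As a generic illustration only (not taken from the article; the shard names are invented), one common routing approach once data has been functionally partitioned is a stable hash/modulo lookup that maps an entity key to the database that owns it:

        using System;

        static class ShardRouter
        {
            // Placeholder connection strings, one per shard.
            private static readonly string[] Shards =
            {
                "Server=db-shard-0;Database=app;",
                "Server=db-shard-1;Database=app;",
                "Server=db-shard-2;Database=app;"
            };

            public static string ConnectionStringFor(long customerId)
            {
                // Stable modulo routing: the same key always lands on the same shard.
                int shard = (int)(Math.Abs(customerId) % Shards.Length);
                return Shards[shard];
            }
        }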

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; however, the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns. Pattern: Pull from Cloud The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. Pros: On-premise assets not exposed to the Internet, firewall issues avoided by only initiating outbound connections Cons: Polling mechanisms may affect performance, may not satisfy near real-time requirements Pattern: Open Firewall Ports The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A door mail slot cut-out allows the postman to drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs, simpler administration once the service is provisioned Cons: Needs firewall ports to be opened up for new services, may not suffice for batch integration requiring direct database access Pattern: Virtual Private Networking The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys to a neighbor or property manager who receives the packages and then drops them inside your home.
    Pros: Individual firewall ports don't need to be opened, more suited for high scalability needs, can support large-volume data integration, easier management of one connection vs a multitude of open ports Cons: VPN setup, specific hardware support, requires the cloud provider to support virtual private computing Pattern: Reverse Proxy / API Gateway The on-premise system uses reverse proxy "API gateway" software in the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations in the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox on your home premises outside your door. The post office delivers the parcels to your mailbox, from where you can securely retrieve them. Pros: Very secure, very flexible Cons: Introduces a new software component, needs DMZ deployment and management Pattern: On-Premise Agent (Tunneling) A lightweight "agent" software component sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull- or push-based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH Tunneling etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door; however, you still take precautions with chain locks and package inspections. Pros: Lightweight software, IT doesn't need to set up anything Cons: May bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT-managed software Conclusion The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, synchronous vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases, the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
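    As a rough illustration of the simplest pattern above (Pull from Cloud), a hypothetical C# polling loop is sketched below; the endpoint URL, token and schedule are placeholders, and a real implementation would use the cloud vendor's API (e.g. RightNow Object Query Language or SOAP) rather than a plain HTTP GET:

        using System;
        using System.Net;
        using System.Threading;

        class CloudPoller
        {
            static void Main()
            {
                // Placeholder endpoint exposed by the SaaS application.
                var endpoint = "https://example-saas.invalid/events?since=lastRun";

                while (true)
                {
                    try
                    {
                        using (var client = new WebClient())
                        {
                            client.Headers[HttpRequestHeader.Authorization] = "Bearer <token>";
                            string batch = client.DownloadString(endpoint);

                            // Hand the batch to the on-premise system (ESB, database, etc.).
                            Console.WriteLine("Pulled {0} bytes at {1}", batch.Length, DateTime.UtcNow);
                        }
                    }
                    catch (WebException ex)
                    {
                        // Only outbound connections are made, so no firewall ports are opened;
                        // transient failures are simply retried on the next cycle.
                        Console.WriteLine("Poll failed: " + ex.Message);
                    }

                    Thread.Sleep(TimeSpan.FromMinutes(60));   // hourly schedule, as in the example
                }
            }
        }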

    Read the article

  • Quartz 2D or OpenGL ES? Pros and cons in the long term, possibility of migration to other platforms.

    - by fspirit
    Hi all! I'm having a hard time deciding whether to go with Quartz 2D or OpenGL for an iPad game. It will be 2D mostly, but effect-intense (simultaneous lighting effects for 10-30 objects, 10-20 simultaneous animations on the screen). So far, assuming I'm equally dumb in both technologies and have to learn them from the ground up, I came to this list. (I've read several topics here on SO with names like "Quartz or OpenGL", but I'm still left with some questions.) Quartz: Better time-to-market, because of ready-to-use abstractions like UIView, UIImageView, CoreAnimation abstractions Open GL ES: Closer to hardware, thus performance is better. An app implemented with OpenGL ES can be migrated more easily to Android, MeeGo, Windows Phone, etc. My questions are: How much time will it take to rewrite a Quartz 2D app to use OpenGL? Let's say it took me 2 man-months to write the Quartz app; how much time will I need to rewrite it? (Please, just some subjective opinions; I'll try to summarize them somehow.) Regarding the ease of migration to other platforms when using OpenGL, is it really so? Or will the effort of migrating a Quartz app from iPhone OS to Android not be so much bigger, compared to migrating an OpenGL app? (Ease of migration is quite an important criterion.) Regarding OpenGL, should I go with OpenGL 1.1 or 2.0, concerning migration? (Android supports 2.0 through the NDK, but I don't know whether using the NDK will increase or decrease migration efforts.)

    Read the article

  • SQL – Download NuoDB and Qualify for FREE Amazon Gift Cards

    - by Pinal Dave
    July has been a fantastic month, and Team NuoDB has really appreciated the active participation of the SQLAuthority.com reader base. Earlier we had launched two contests with NuoDB and both of them were very much appreciated by readers. There are constant demands for more contests, and team NuoDB is very excited to support more of them. Here are the details of the contests run earlier: What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards Based on the earlier successful contests, the kind folks at NuoDB decided that they will support one more round of giveaways for SQLAuthority.com contests. However, please note that this month’s contest will end in the next 48 hours. You have to take part before July 31st, 2013 11:59:00 PM PST. Here is the quick contest: You just have to go and download NuoDB. The first 10 people who download NuoDB will get USD 10 Amazon gift cards. Everyone else will be entered into a lucky draw for USD 50 Amazon gift cards. Winners will be announced in the next 24 hours. To be eligible for this contest, please download NuoDB before July 31st, 2013 11:59:00 PM PST. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment over here with a note about how many different platforms NuoDB supports. Here are a few of the blog posts I wrote earlier on that subject: Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB Part 5 - NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL – What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards

    - by Pinal Dave
    We had a great contest earlier last week - What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit. It received quite a few responses. Just like any other contest, not everyone was a winner. The kind folks at NuoDB decided to give another chance to everyone who did not win in the last contest. This means that if you missed taking part in the earlier contest, or if you took part and did not win, you still have one more chance to win an Amazon Gift Card. Here is the quick contest: You just have to go and download NuoDB. The first 10 people who download NuoDB will each get a USD 10 card. Everyone else will be entered into a lucky draw for USD 50 Amazon gift cards. Winners will be announced in the next 24 hours. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment over here with your experience with NuoDB and the latest version of the product. Here are a few of the blog posts I wrote earlier on that subject: Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB Part 5 - NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Load-balancer options

    - by toolkit
    I am looking at a number of possible options for load-balancing. So far, I am constrained to the following options: DNS server load-balancer, balancing to a cluster of Tomcat servers, with Terracotta for session replication. Pros - don't have to buy new kit. Cons - DNS LB can keep directing to a broken server. Hardware load-balancer, direct to a cluster of Tomcat servers. Pros - could have a second box for failover LB. Cons - expense. Apache server load-balancer. Pros - Apache's LB polls for broken servers. Cons - the Apache server is a single point of failure, plus the need to buy another server. Are there any other options I should consider? Thanks. Update: Thanks for all the answers so far; +1's all round. Not accepting an answer yet, to keep more ideas coming.

    Read the article

  • Drawbacks of installing Linux on a USB stick?

    - by Znarkus
    I am setting up a router/NAS/HTTP/whatever server based on an ION mini-ITX board. I've installed Ubuntu Server on an old 160 GB drive, but it generates a lot more heat and vibrates more than my other, new drive (storage). It just doesn't fit the concept, and worse: it takes up a SATA port. As SSDs are crazy expensive, I'm thinking of buying an extra 4 GB USB stick and running it in RAID 0. From my point of view, these are the pros/cons: Pros Low power consumption No vibrations No heat Smaller Get to buy a new, larger USB stick (:D) Cons Shorter lifetime Slower RAID 0 More work maintaining/installing? I think the pros outweigh the cons. Shorter lifetime and RAID 0 are countered by regular backups of the configs/settings. Slower is partially countered by RAID 0, and I don't know about the last one. What do you think? Experience? Another solution?

    Read the article
