Search Results

Search found 25836 results on 1034 pages for 'solution evangelist'.


  • Oracle Exalogic Customer Momentum @ OOW'12

    - by Sanjeev Sharma
    [Adapted from here] At Oracle Open World 2012, I sat down with some of the Oracle Exalogic early adopters to discuss the business benefits these businesses were realizing by embracing the engineered systems approach to data-center modernization and application consolidation. Below is an overview of the four businesses that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year.

    Company: Netshoes
    About: Leading online retailer of sporting goods in Latin America.
    Challenges: Rapid business growth resulted in frequent outages and poor response times of the online storefront; the conventional ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX; poor performance and unavailability of the online storefront resulted in revenue loss from purchase abandonment.
    Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic.
    Business Impact: Reduced abandonment rates, resulting in a two-digit increase in online conversion rates that translated directly into revenue uplift.

    Company: Claro
    About: Leading communications services provider in Latin America.
    Challenges: Support business growth over the next 3-5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk.
    Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and Java applications onto Oracle Exalogic and Oracle Exadata.
    Business Impact: Improved partner SLAs 7x while improving throughput 5x and response time 35x for Java applications.

    Company: UL
    About: Leading safety testing and certification organization in the world.
    Challenges: Transition from a non-profit to a profit-oriented enterprise and grow from $1B to $5B in annual revenues in the next 5 years; undertake a massive business transformation by aligning change strategy with execution.
    Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle Exadata.
    Business Impact: Reduced financial and operating risk in re-architecting IT services to support new business capabilities serving 87,000 manufacturers.

    Company: Ingersoll Rand
    About: Leading manufacturer of industrial, climate, residential and security solutions.
    Challenges: Business continuity risks due to complexity in enforcing consistent operational and financial controls; reactive business decisions reduced the ability to offer differentiation and compete.
    Solution: Consolidated Oracle E-Business Suite on Oracle Exalogic and Oracle Exadata.
    Business Impact: Service differentiation with faster order provisioning and a shorter lead-to-cash cycle, translating into higher customer satisfaction and quicker cash conversion.

    Check out the winners of the Oracle Fusion Middleware Innovation Awards in other categories here.

    Read the article

  • SharePoint 2010 deployment problem after added a new server to existing farm

    - by mrt
    I have a SharePoint 2010 farm with one server. I'm developing some features in a SharePoint farm solution (not a sandboxed solution, because there are some user-rights problems). All feature scopes are set to "Site". I can deploy the solution to SharePoint with no problem. I added a new web front-end server to my existing farm. Then, when I try to deploy my solution, VS2010 shows this error: Error occurred in deployment step 'Activate Features': Feature with Id 'xxx' is not installed in this farm, and cannot be added to this scope. I log in to the development server with an AD administrator account. The administrator account is in the site collection admins on the target web application. The farm account is in the local Administrators group. Is there a solution for this error?

    Read the article

  • Can domain "masking" be set up in BIND/cPanel?

    - by ServerAdminGuy45
    I am supporting a client; let's say he has the domain "acme.com". He registered with GoDaddy and set the name servers to point to his crappy HostGator shared account. He uses cPanel on the HostGator account to set up his subdomains. Is it possible to set up some kind of domain masking so that when someone connects to "application.acme.com", it really forwards to "cloud-solution-provider.com"? I mean the actual domain "cloud-solution-provider.com", because it resolves to different IPs based upon geolocation. For this reason I can't just set application.acme.com to point to the IP that cloud-solution-provider.com resolves to. I want the ability for a user to RDP to "application.acme.com" and be sent to the desktop served by "cloud-solution-provider.com", whatever that IP may be. Perhaps I can have GoDaddy be the nameserver? I have a feeling this would break HostGator, since there is a website at acme.com and shop.acme.com.
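    What I picture, purely for illustration (the record name is made up and I'm not sure this is the right mechanism), is something like a CNAME in the acme.com zone that delegates resolution of the subdomain to the provider's own geo-aware DNS:

        ; fragment of the acme.com zone file (illustrative only)
        application   IN   CNAME   cloud-solution-provider.com.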

    Read the article

  • Is Oracle Smartscan really effective?

    - by Vimvq1987
    Oracle is marketing Exadata to our company, and they're focusing on Smartscan, which is advertised as 10x more effective than traditional solutions. I don't really understand (and somewhat don't really believe) this solution, because they assume that data is put on separate storage and no indexes are used, so "1TB of data was scanned and only 2MB is returned". So I want to ask you: is Smartscan really effective? Is it a true solution that greatly improves performance, or just a buzzword? Thank you so much for sharing your experience.

    Read the article

  • Is it possible to use 3G internet for a TCP/IP game server?

    - by Amit Ofer
    I'm working on a turn-based multiplayer Android game with a friend. I started working on the game server and client using socket programming. I found a few tutorials on how to implement a basic chat on Android, and I started extending that example to suit my needs. Basically the game is really simple, and the communication only includes sending a few strings from the client to the server every turn and sending the calculated scores back to all the clients after each turn. The idea is that one of the players creates the game and thus initializes the server, and each other player connects to this client using its IP address. I tried this solution and it seems to work great when all the players are using the same Wi-Fi connection, or by using router port forwarding. The problem is when trying to use a 3G connection for the server; I guess the issue is that a 3G IP address isn't public and you can't use port forwarding there (correct me if I'm wrong). Is there a way to overcome this issue, or is the only option to limit my game to Wi-Fi, or to think of a different solution than the standard socket programming one (e.g. a web server)? What do you think would be the best approach here? A rough sketch of the plain-socket setup I mean is below. Thanks.
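    Just to make the setup concrete, here is a stripped-down sketch of what the hosting player's side would look like in plain Java (the port, the two-player assumption and computeScores() are made up for illustration):

        import java.io.*;
        import java.net.*;
        import java.util.*;

        // Minimal sketch of the turn-based server the hosting player would run.
        public class TurnServer {
            public static void main(String[] args) throws IOException {
                final int port = 6000;        // assumed port, would be configurable
                final int playerCount = 2;    // assumed two remote players for simplicity
                List<Socket> players = new ArrayList<Socket>();
                ServerSocket server = new ServerSocket(port);
                while (players.size() < playerCount) {
                    players.add(server.accept());      // each client connects to the host's IP
                }
                List<BufferedReader> readers = new ArrayList<BufferedReader>();
                List<PrintWriter> writers = new ArrayList<PrintWriter>();
                for (Socket s : players) {
                    readers.add(new BufferedReader(new InputStreamReader(s.getInputStream())));
                    writers.add(new PrintWriter(s.getOutputStream(), true));
                }
                while (true) {
                    List<String> moves = new ArrayList<String>();
                    for (BufferedReader r : readers) {
                        moves.add(r.readLine());       // one short string per player per turn
                    }
                    String scores = computeScores(moves);
                    for (PrintWriter w : writers) {
                        w.println(scores);             // broadcast the calculated scores
                    }
                }
            }

            // Placeholder for the real score calculation.
            private static String computeScores(List<String> moves) {
                return "scores:" + moves.toString();
            }
        }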

    Read the article

  • Is using a dedicated thread just for sending GPU commands a good idea?

    - by tigrou
    The most basic game loop looks like this:

        while(1)
        {
            update();
            draw();
            swapbuffers();
        }

    This is very simple but has a problem: some drawing commands can block, and the CPU will wait while it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU:

        // first thread
        while(1)
        {
            update();
            render();  // use the game state to generate all needed triangles and commands
                       // for the GPU and put them in a buffer; no command is sent to the
                       // GPU yet (two buffers will be used, see below)
            pulse();   // signal the other thread that data is ready
        }

        // second thread
        while(1)
        {
            wait();              // wait for the first thread's data to arrive
            send_data_to_gpu();  // send prepared commands from the buffer to the graphics card
            swapbuffers();
        }

    Also, two buffers would be used, so one buffer could be filled with GPU commands while the other is being processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially compared to a simpler one (e.g. single-threaded with triple buffering enabled)?

    Read the article

  • BEAST / CRIME / BREACH attacks and stopping them

    - by user2143356
    I have read so much on all this, but I'm not entirely sure I understand what has gone on. Also, is this one, two or three problems? It looks to me like three, but it's all very confusing:

    BEAST
    CRIME
    BREACH

    It seems the solution may be simply not to use compression with HTTPS traffic (or is that just for one of them?). I use GZIP compression. Is that okay, or is that part of the problem? I also use Ubuntu 12.04 LTS. Also, is non-HTTPS traffic okay? So after reading all the theory, I just want the solution. I think this may be it, but can someone please confirm I have understood everything, so that I am not likely to suffer from these attacks: SOLUTION: Use GZIP compression on HTTP traffic, but don't use any compression on HTTPS traffic.

    Read the article

  • SPARC SuperCluster Papers

    - by user12616590
    Oracle has been publishing white papers that describe uses and characteristics of the SPARC SuperCluster product. Here are just a few:

    A Technical Overview of the Oracle SPARC SuperCluster T4-4
    SPARC SuperCluster T4-4 is a high performance, multi-purpose engineered system that has been designed, tested and integrated to run a wide array of enterprise applications. It is well suited for multi-tier enterprise applications with Web, database and application components. This 20-page paper discusses the components and technical characteristics of this product.

    SPARC SuperCluster T4-4 Platform Security Principles and Capabilities
    The security capabilities designed into the SPARC SuperCluster, and architectural, deployment, and operational best practices for taking advantage of them.

    Consolidating Oracle E-Business Suite on Oracle’s SPARC SuperCluster
    This Oracle Optimized Solution describes the implementation and use of SPARC SuperCluster as a consolidation platform for E-Business Suite in 30 pages.

    Oracle Optimized Solution for Oracle PeopleSoft Human Capital Management on SPARC SuperCluster
    The Oracle Optimized Solution for PeopleSoft Human Capital Management on SPARC SuperCluster is the industry's only proven, tested, applications-to-disk solution that maintains excellence managing absences, optimizing collaborative activities, streamlining knowledge and honing processes; 31 pages.

    I hope you find some of those papers useful.

    Read the article

  • How can I synchronise my Outlook Calendar with Google Calendar (preferably using a free/open source tool)?

    - by Kuf
    How can I synchronise my desktop Outlook calendar with my Google Calendar (Outlook - Google)? I saw the question Free tool for Synchronizing Google Contacts and Calendar with Outlook, but the solution that was suggested there is no longer available - Google Sync End of Life. There are tools that require a payment, like SyncMyCal, gSyncit and OggSync, but I am looking for a free / open source solution. One can download Google Sync, but when trying to use it there's an error:
    For now, I use OggSync to synchronise, but as freeware it only allows synchronising manually, not automatically, so I have to remember to synchronise after every change. I checked Mozilla Sunbird, but I couldn't find any relevant posts on how to synchronise Outlook and Google using it. Just to be clear: I'm not looking for software; I am looking for a solution. What can I do if sometimes software is the solution?

    Read the article

  • ArrayList in Java [on hold]

    - by JNL
    I was implementing a program to remove the duplicates from two character arrays. I implemented these two solutions; Solution 1 worked fine, but Solution 2 gives me an UnsupportedOperationException. I am wondering why that is. The two solutions are given below:

        public void getDifference(Character[] inp1, Character[] inp2) {
            // Solution 1:
            // ******************************************************************************
            List<Character> list1 = new ArrayList<Character>(Arrays.asList(inp1));
            List<Character> list2 = new ArrayList<Character>(Arrays.asList(inp2));
            list1.removeAll(list2);
            System.out.println(list1);
            System.out.println("******************************************************************************");

            // Solution 2:
            Character a[] = {'f', 'x', 'l', 'b', 'y'};
            Character b[] = {'x', 'b', 'd'};
            List<Character> al1 = new ArrayList<Character>();
            List<Character> al2 = new ArrayList<Character>();
            al1 = (Arrays.asList(a));
            System.out.println(al1);
            al2 = (Arrays.asList(b));
            System.out.println(al2);
            al1.removeAll(al2); // retainAll(al2);
            System.out.println(al1);
        }

    Read the article

  • OS X: Setup for file storage in a medium business

    - by Franatique
    In our office every machine runs OS X. In search of an ideal storage and sharing solution, we decided to let OS X Server handle all account information and auth requests, while a 7TB QNAP provides NFS shares. All shares are published as mounts in the company-wide LDAP. As it turns out, handling permissions in this situation is very clumsy (e.g. inheriting permissions on newly created files). Unfortunately, using NFSv4 in combination with ACLs did not solve the problem. As a possible solution, I set up an iSCSI connection between the QNAP and the machine running OS X Server, which in turn serves the LUN as an AFP share. Permission handling works like a charm for this setup, although I am a bit concerned about its performance. As we are a fast-growing company, we expect the solution to serve at least 100 clients working with files of approximately 100MB and above. Are there any known drawbacks of this solution?

    Read the article

  • Linux SFTP and many local user accounts, limits with mount --bind?

    - by user123428
    I am in the process of building a solution to let many developers (possibly hundreds) work on their files via SFTP, each one jailed in their home directory. For our particular needs, we have a Samba mount point that contains all of the users' home directories. I have started developing the following solution and hit some walls:

    - I have configured an Ubuntu Lucid server as the SFTP server.
    - In order to jail each user in their home directory (without allowing them to browse a directory up and see all the other users' folders), I am using mount --bind and not a symbolic link (also, some FTP clients don't really work with symlinks).
    - The user accounts are local Unix accounts on the SFTP server (not using a directory service or anything) that have an empty home folder created on the local machine; I then use mount --bind to bind the empty folder to the actual user's home directory on the Samba share.

    With this solution I am hitting a couple of problems. In the case of a server reboot, all the bind mounts are lost because they are not written in fstab, and I have read somewhere that the maximum number of entries in fstab is 400 (which does not really help us). I have thought of writing something that stores the mounts in a text file as a backup and, on server reboot, runs a script that re-mounts them. I am just really unsure about this whole process and was wondering if anyone has any insight on possibly a better solution for SFTP (not FTP)?
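    For reference, the kind of per-user bind-mount entry I am talking about would look something like this in fstab (paths are made up for illustration):

        # bind the user's folder on the Samba-backed path onto the empty local home
        /srv/samba/homes/alice   /home/alice   none   bind   0   0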

    Read the article

  • Website project takes a long time to load in VS.NET 2008

    - by rm
    After getting a new computer (a lot faster than the one I used before), my Solutions take A LONG TIME (3-4 minutes) to load up in VS.NET 2008. I only have two projects in the solution: a DB project and a Website project (from IIS). If I remove the Website project from the Solution, it loads up instantly; when I add that same Website project to the open Solution, it also loads up instantly. The only time it's slow is when I open the solution referencing my Website project. I had the exact same setup (as far as VS is concerned) on my old box, and never had this problem. Any ideas?

    Read the article

  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
    Hi,

    This program I'm doing is about a social network, which means there are users and their profiles. The profile structure is UserProfile. Now, there are various possible Graph implementations and I don't think I'm using the best one. I have a Graph structure and, inside it, a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever else is needed), a pointer to the next Edge and a pointer to the Vertex owner.

    I have 2 sample files with data to process (in CSV style) and insert into the Graph. The first one is the user data (one user per line); the second one is the user relations (for the graph). The first file is quickly inserted into the graph because I always insert at the head and there are only ~18000 users. The second file takes ages, even though I still insert the edges at the head. The file has about ~520000 lines of user relations and takes between 13-15 mins to insert into the Graph. I made a quick test and reading the data is pretty quick, instantaneous really. The problem is the insertion.

    This problem exists because I have a Graph implemented with linked lists for the vertices. Every time I need to insert a relation, I need to look up 2 vertices so I can link them together. That is the problem: doing this for ~520000 relations takes a while. How should I solve this?

    Solution 1) Some people recommended that I implement the Graph (the vertices part) as an array instead of a linked list. This way I have direct access to every vertex and the insertion time will probably drop considerably. But I don't like the idea of allocating an array with [18000] elements. How practical is this? My sample data has ~18000, but what if I need much less or much more? The linked list approach has that flexibility; I can have whatever size I want as long as there's memory for it. But the array doesn't, so how am I going to handle such a situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity, and using an array is good for time complexity but bad for space complexity. Any thoughts about this solution?

    Solution 2) This project also demands that I have some sort of data structure that allows quick lookup based on a name index and an ID index. For this I decided to use Hash Tables. My tables are implemented with separate chaining as collision resolution, and when a load factor of 0.70 is reached, I recreate the table. I base the next table size on this: http://planetmath.org/encyclopedia/GoodHashTablePrimes.html. Currently, both Hash Tables hold a pointer to the UserProfile instead of duplicating the user profile itself. Duplicating would be stupid; changing data would require 3 changes and it's really dumb to do it that way. So I just save the pointer to the UserProfile. The same user profile pointer is also saved as the value in each Graph Vertex.

    So, I have 3 data structures, one Graph and two Hash Tables, and every single one of them points to the same exact UserProfile. The Graph structure will serve the purpose of finding the shortest path and things like that, while the Hash Tables serve as quick indexes by name and ID. What I'm thinking of doing to solve my Graph problem is, instead of having the Hash Table values point to the UserProfile, to point them to the corresponding Vertex. It's still a pointer; no more and no less space is used, I just change what I point to. Like this, I can easily and quickly look up each Vertex I need and link them together. This will insert the ~520000 relations pretty quickly. I thought of this solution because I already have the Hash Tables and I need to have them, so why not take advantage of them for indexing the Graph vertices instead of the user profile? It's basically the same thing; I can still access the UserProfile pretty quickly, just go to the Vertex and then to the UserProfile. But do you see any cons of this second solution compared to the first one? Or only pros that outweigh the pros and cons of the first solution?

    Other Solution) If you have any other solution, I'm all ears, but please explain the pros and cons of that solution over the previous 2. I really don't have much time to waste with this right now; I need to move on with this project, so, if I'm going to make such a change, I need to understand exactly what to change and whether that's really the way to go. Hopefully no one fell asleep reading this and closed the browser; sorry for the big testament, but I really need to decide what to do about this and I really need to make a change.

    P.S.: When answering my proposed solutions, please enumerate them as I did, so I know exactly which one you are talking about and so I don't confuse myself more than I already am.
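    To make Solution 2 more concrete, here is a rough sketch of the indexing idea in Java-like code (my real implementation is in C with pointers; the names here are made up purely for illustration):

        import java.util.*;

        // Sketch of Solution 2: the indexes map straight to graph vertices,
        // so linking two users is two O(1) lookups instead of two list scans.
        class UserProfile {
            int id;
            String name;
        }

        class Vertex {
            UserProfile profile;                                // same profile the indexes reference
            List<Vertex> neighbours = new ArrayList<Vertex>();  // adjacency list (the edges)
            Vertex(UserProfile p) { profile = p; }
        }

        class SocialGraph {
            private final Map<Integer, Vertex> byId = new HashMap<Integer, Vertex>();
            private final Map<String, Vertex> byName = new HashMap<String, Vertex>();

            void addUser(UserProfile p) {
                Vertex v = new Vertex(p);
                byId.put(p.id, v);
                byName.put(p.name, v);
            }

            // One relation line from the CSV becomes two constant-time lookups
            // plus two cheap list inserts.
            void addRelation(int idA, int idB) {
                Vertex a = byId.get(idA);
                Vertex b = byId.get(idB);
                a.neighbours.add(b);
                b.neighbours.add(a);
            }

            UserProfile findByName(String name) {
                Vertex v = byName.get(name);
                return v == null ? null : v.profile;            // profile is still one hop away
            }
        }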

    Read the article

  • Converting projects to use Automatic NuGet restore

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/11/converting-projects-to-use-automatic-nuget-restore.aspxDownload tool In version 2.7 of NuGet automatic nuget restore was introduced, meaning you no longer need to distort your msbuild project files with nuget target information.   Visual Studio and TFS 2013 build have this enabled by default.  However, if your project was created before this was introduced, and/or if you have used the “Enable NuGet Package Restore” afterwards, you now have a series of unwanted things in your projects, and a series of project files that have been modified – and – you no longer neither want nor need this !  You might also get into some unwanted issues due to these modifications.  This is a MSBuild modification that was needed only before NuGet 2.7 ! So: DON’T USE THIS FUNCTION !!! There is an issue https://nuget.codeplex.com/workitem/4019 on this on the NuGet project site to get this function removed, renamed or at least moved farther away from the top level (please help vote it up!).  The response seems to be that it WILL BE removed, around version 3.0. This function does nothing you need after the introduction of NuGet 2.7.  What is also unfortunate is the naming of it – it implies that it is needed, it is not, and what is worse, there is no corresponding function to remove what it does ! So to fix this use the tool named IFix, that will fix this issue for you   - all free of course, and the code is open source.  Also report issues there:  https://github.com/OsirisTerje/IFix    IFix information DOWNLOAD HERE This command line tool installs using an MSI, and add itself to the system path.  If you work in a team, you will probably need to use the  tool multiple times.  Anyone in the team may at any time use the “Enable NuGet Package Restore” function and mess up your project again.  The IFix program can be run either in a  check modus, where it does not write anything back – it only checks if you have any issues, or in a Fix mode, where it will also perform the necessary fixes for you. The IFix program is used like this: IFix <command> [-c/--check] [-f/--fix]  [-v/--verbose] The command in this case is “nugetrestore”.  It will do a check from the location where it is being called, and run through all subfolders from that location. So  “IFix nugetrestore  --check” , will do the check ,  and “IFix nugetrestore  --fix”  will perform the changes, for all files and folders below the current working directory. (Note that --check  can be replaced with only –c, and --fix with –f, and so on. ) BEWARE: When you run the fix option, all solutions to be affected must be closed in Visual Studio ! So, if you just want to DO it, then: IFix nugetrestore --check to see if you have issues then IFix nugetrestore  --fix to fix them. How does it work IFix nugetrestore  checks and optionally fixes four issues that the older enabling of nuget restore did.  The issues are related to the MSBuild projess, and are: Deleting the nuget.targets file. Deleting the nuget.exe that is located under the .nuget folder Removing all references to nuget.targets in the solution file Removing all properties and target imports of nuget.targets inside the csproj files. IFix fixes these issues in the same sequence. The first step, removing the nuget.targets file is the most critical one, and all instances of the nuget.targets file within the scope of a solution has to be removed, and in addition it has to be done with the solution closed in Visual Studio.  
If Visual Studio finds a nuget.targets file, the csproj files will be automatically messed up again. This means the removal process above might need to be done multiple times, specially when you’re working with a team, and that solution context menu still has the “Enable NuGet Package Restore” function.  Someone on the team might inadvertently do this at any time. It can be a good idea to add this check to a checkin policy – if you run TFS standard version control, but that will have no effect if you use TFS Git version control of course. So, better be prepared to run the IFix check from time to time. Or, even better, install IFix on your build servers, and add a call to IFix nugetrestore --check in the TFS Build script.    How does it look As a first example I have run the IFix program from the top of a set of git repositories, so it spans multiple repositories with multiple solutions. The result from the check option is as follows: We see the four red lines, there is one for each of the four checks we talked about in the previous section. The fact that they are red, means we have that particular issue. The first section (above the first red text line) is the nuget targets section.  Notice  No.1, it says it has found no paths to copy.  What IFix does here is to check if there are any defined paths to other nuget galleries.  If there are, then those are copied over to the nuget.config file, where is where it should be in version 2.7 and above.   No.2 says it has found the particular nuget.targets file,  No.3  states it HAS found some other nuget galleries defines in the targets file, which then it would like to copy to the config.file. No.4 is the section for nuget.exe files, and list those it has found, and which it would like to delete. No 5 states it has found a reference to nuget.targets in the solution file.  This reference comes from the fact that the .nuget folder is a solution folder, and the items within are described in the solution file. It then checks the csproj files, and as can be seen from the last red line, it ha found issues in 96 out of 198 csproj files.  There are two possible issues in a csproj files.  No.6 is the first one, and the most common and most important one, an “Import project” section.  This is the section that calls the nuget.targets files.  No.7 is another issue, which seems to sometimes be there, sometimes not, it is a RestorePackages property, which also should go away. Now, if we run the IFix nugetrestore –fix command, and then the check again after that, the result is: All green !
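    For reference, the fragments that the old "Enable NuGet Package Restore" function leaves behind in a .csproj file, and which IFix removes again, typically look something like the following (simplified; the exact condition attributes and casing vary between NuGet versions):

        <!-- property switched on by the old MSBuild-integrated restore -->
        <RestorePackages>true</RestorePackages>

        <!-- import of the per-solution nuget.targets file -->
        <Import Project="$(SolutionDir)\.nuget\NuGet.targets" Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />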

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part,  I was sharing with you how a broad-based transformation makes cloud more than a technology initiative, I will describe in this section how it requires people (organizational) and process changes as well, and these changes are as critical as is the choice of right tools and technology. People: Most IT organizations have a fairly complex organizational structure. There are different groups, managing different pieces of the puzzle, and yet, they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrators, DBAs and middleware admin worlds – resulting in long delays and constant finger pointing.  Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated.  Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption,  it also risks making the IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.   Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging on to outdated manual approval processes .  While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only this causes delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies, quotas that the administrators can define upfront . For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources. Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it  enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs.  Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have.  Many organizations are just now waking up to the criticality of automation and it still often gets relegated to back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical.  For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications: Broad and Deep Solution It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about.  Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. 
While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise. Run and Manage Efficiently Your Mission Critical Applications It should not only be able to run your mission critical applications, it should do so better than before.  For enterprises, applications and data are the critical business assets  As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly a "enterprise cloud".  Also, be wary of  vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adopt to your applications - and not the other way around.  Automated, Integrated Set of Cloud Management Capabilities At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic  processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution.  An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.  At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises.  And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as in 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year.  Our cloud solution portfolio is also the broadest and most deep in the industry  - covering public, private, hybrid, Infrastructure, platform and applications clouds. It is no coincidence therefore that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry.  And to a large part, this has been made possible thanks to our years on investment in creating cloud enabling technologies. I will dedicated the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they mapped into our vision of Enterprise Cloud. Stay Tuned.

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } }     Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); }   The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTest folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScripUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    Read the article

  • SSAS: Using fake dimension and scopes for dynamic ranges

    - by DigiMortal
    In one of my BI projects I needed to find count of objects in income range. Usual solution with range dimension was useless because range where object belongs changes in time. These ranges depend on calculation that is done over incomes measure so I had really no option to use some classic solution. Thanks to SSAS forums I got my problem solved and here is the solution. The problem – how to create dynamic ranges? I have two dimensions in SSAS cube: one for invoices related to objects rent and the other for objects. There is measure that sums invoice totals and two calculations. One of these calculations performs some computations based on object income and some other object attributes. Second calculation uses first one to define income ranges where object belongs. What I need is query that returns me how much objects there are in each group. I cannot use dimension for range because on one date object may belong to one range and two days later to another income range. By example, if object is not rented out for two days it makes no money and it’s income stays the same as before. If object is rented out after two days it makes some income and this income may move it to another income range. Solution – fake dimension and scopes Thanks to Gerhard Brueckl from pmOne I got everything work fine after some struggling with BI Studio. The original discussion he pointed out can be found from SSAS official forums thread Create a banding dimension that groups by a calculated measure. Solution was pretty simple by nature – we have to define fake dimension for our range and use scopes to assign values for object count measure. Object count measure is primitive – it just counts objects and that’s it. We will use it to find out how many objects belong to one or another range. We also need table for fake ranges and we have to fill it with ranges used in ranges calculation. After creating the table and filling it with ranges we can add fake range dimension to our cube. Let’s see now how to solve the problem step-by-step. Solving the problem Suppose you have ranges calculation defined like this: CASE WHEN [Measures].[ComplexCalc] < 0 THEN 'Below 0'WHEN [Measures].[ComplexCalc] >=0 AND  [Measures].[ComplexCalc] <=50 THEN '0 - 50'...END Let’s create now new table to our analysis database and name it as FakeIncomeRange. Here is the definition for table: CREATE TABLE [FakeIncomeRange] (     [range_id] [int] IDENTITY(1,1) NOT NULL,     [range_name] [nvarchar](50) NOT NULL,     CONSTRAINT [pk_fake_income_range] PRIMARY KEY CLUSTERED      (         [range_id] ASC     ) ) Don’t forget to fill this table with range labels you are using in ranges calculation. To use ranges from table we have to add this table to our data source view and create new dimension. We cannot bind this table to other tables but we have to leave it like it is. Our dimension has two attributes: ID and Name. The next thing to create is calculation that returns objects count. This calculation is also fake because we override it’s values for all ranges later. Objects count measure can be defined as calculation like this: COUNT([Object].[Object].[Object].members) Now comes the most crucial part of our solution – defining the scopes. Based on data used in this posting we have to define scope for each of our ranges. Here is the example for first range. 
    SCOPE([FakeIncomeRange].[Name].&[Below 0], [Measures].[ObjectCount])
        This = COUNT(
            FILTER(
                [Object].[Object].[Object].members,
                [Measures].[ComplexCalc] < 0
            )
        )
    END SCOPE

    To get these scopes defined in the cube we need a separate MDX script block for each part given here. Take a look at the screenshot to get a better idea of what I mean. That example is taken from SQL Server Books Online to avoid any conflicts with NDA. :) From the previous example, the MDX script blocks are:
    - the line starting with SCOPE
    - the block for This =
    - the line with END SCOPE

    And now it is time to deploy and process our cube. Although you may see examples with semicolons at the end of statements, you don't need them: the Visual Studio BI tools generate a separate command from each script block, so you don't need to worry about it.
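    Following the same pattern, a scope for the next range from the CASE expression above might look like the sketch below. The member name '0 - 50' and the boundary values simply mirror the example calculation; adjust them to your own range labels.

    // scope for the '0 - 50' band, following the same pattern as above
    SCOPE([FakeIncomeRange].[Name].&[0 - 50], [Measures].[ObjectCount])
        This = COUNT(
            FILTER(
                [Object].[Object].[Object].members,
                [Measures].[ComplexCalc] >= 0 AND [Measures].[ComplexCalc] <= 50
            )
        )
    END SCOPE

    The remaining ranges are handled the same way: one scope per row in the FakeIncomeRange table, each filtering on the matching ComplexCalc condition.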

    Read the article

  • Useful Tips for BizTalk 2006 to BizTalk 2009 Porting

    - by Arvind Chaudhary
    BizTalk projects require some manual intervention in order to upgrade them. Execute the following steps to port a BizTalk solution / project:

    1. Open the project’s solution file (.sln) using a text editor – Notepad++ is recommended.

    2. Remove all the contents (shown in red in the original post) between, but not including, the following elements:

       GlobalSection(ProjectConfigurationPlatforms) = postSolution
           {5C48CB6B-AE6F-4288-A8EE-46E352BB730C}.Debug|.NET.ActiveCfg = Debug|Any CPU
           {5C48CB6B-AE6F-4288-A8EE-46E352BB730C}.Debug|.NET.Build.0 = Debug|Any CPU
           {5C48CB6B-AE6F-4288-A8EE-46E352BB730C}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
           {5C48CB6B-AE6F-4288-A8EE-46E352BB730C}.Debug|Any CPU.Build.0 = Debug|Any CPU
           …
       EndGlobalSection

       You should see the following once you have removed the contents:

       GlobalSection(ProjectConfigurationPlatforms) = postSolution
       EndGlobalSection

       Note: there should not be any entries left between these two elements.

    3. For each BizTalk project (.btproj) in the solution (.sln), find and replace the following in the .btproj file (a small automation sketch follows these steps):
       - ‘Name = “Debug”’ with ‘Name = “Development”’
       - ‘Name = “Release”’ with ‘Name = “Deployment”’
       - "bin\Debug" with "bin\Development"
       - "bin\Release" with "bin\Deployment"

    4. Save the file.
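    Purely as an illustration of how mechanical step 3 is (this sketch is not part of the original tip, and any scripting tool would do the same job), a small Java program that applies those four replacements to every .btproj file under a folder might look like this:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Illustrative helper: walks a folder, rewrites each .btproj in place using the
    // find/replace pairs from step 3. Assumes the files are UTF-8 encoded.
    public class BtprojPorter {
        public static void main(String[] args) throws IOException {
            Path root = Paths.get(args.length > 0 ? args[0] : ".");
            try (Stream<Path> files = Files.walk(root)) {
                List<Path> projects = files
                    .filter(p -> p.toString().endsWith(".btproj"))
                    .collect(Collectors.toList());
                for (Path p : projects) {
                    String text = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
                    text = text.replace("Name = \"Debug\"", "Name = \"Development\"")
                               .replace("Name = \"Release\"", "Name = \"Deployment\"")
                               .replace("bin\\Debug", "bin\\Development")
                               .replace("bin\\Release", "bin\\Deployment");
                    Files.write(p, text.getBytes(StandardCharsets.UTF_8));
                    System.out.println("Updated " + p);
                }
            }
        }
    }

    The .sln clean-up in step 2 is still easiest to do by hand, since you need to judge which configuration lines belong to the BizTalk projects.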

    Read the article

  • User roles in GWT applications

    - by csaffi
    Hi everybody, I'm wondering if you could suggest a way to implement "user roles" in GWT applications. I would like to implement a GWT application where users log in and are assigned "roles". Based on their role, they would be able to see and use different application areas. Here are two possible solutions I have thought of: 1) Make an RPC call to the server during onModuleLoad; this RPC call would generate the necessary Widgets and/or place them on a panel and then return this panel to the client end. 2) Make an RPC call on login that retrieves the user's roles from the server and inspects them to see what the user can do. What do you think about these? Thank you very much in advance for your help!
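    For illustration, here is a minimal client-side sketch of option 2 using GWT-RPC. All of the names (RoleService, getRoles, the "admin" role string, the labels) are placeholders invented for this sketch, and the server-side servlet that actually looks up the roles is omitted.

    import java.util.List;

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.core.client.GWT;
    import com.google.gwt.user.client.Window;
    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
    import com.google.gwt.user.client.ui.Label;
    import com.google.gwt.user.client.ui.RootPanel;

    // Hypothetical GWT-RPC service that returns the logged-in user's roles.
    @RemoteServiceRelativePath("roles")
    interface RoleService extends RemoteService {
        List<String> getRoles();
    }

    interface RoleServiceAsync {
        void getRoles(AsyncCallback<List<String>> callback);
    }

    public class RoleAwareApp implements EntryPoint {

        private final RoleServiceAsync roleService = GWT.create(RoleService.class);

        public void onModuleLoad() {
            // Ask the server for the current user's roles before building the UI.
            roleService.getRoles(new AsyncCallback<List<String>>() {
                public void onFailure(Throwable caught) {
                    Window.alert("Could not load roles: " + caught.getMessage());
                }

                public void onSuccess(List<String> roles) {
                    // Only show the areas this user is allowed to use.
                    if (roles.contains("admin")) {
                        RootPanel.get().add(new Label("Admin area"));
                    }
                    RootPanel.get().add(new Label("Common area"));
                }
            });
        }
    }

    The important design point is that the client only renders what the server reports; the real authorization check still has to happen on the server side, since anything shipped to the browser can be inspected by the user.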

    Read the article

  • Cut Caseload Costs, Speed Service Delivery For Social Services

    - by michael.seback
    Lower Caseload Costs, Speedier Service Delivery with New Oracle Social Services Solution

    Oracle has just introduced a new solution for social services agencies that's designed to help case workers address the challenges of rising workloads and growing demands by citizens for additional services. In the past, IT departments developed custom software in an effort to meet program outcomes. "Because this capability is out of the box with the Oracle solution, there's less complexity for organizations and an overall lower total cost of ownership," says Kimberly Ellison-Taylor, Oracle's executive director of health and human services. "Self service brings costs down to just pennies per interaction and makes it possible for clients to receive government services more quickly," Ellison-Taylor says.

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of the Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected for the data integration category for their innovative use of Oracle’s data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award-winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to Oracle Data Integrator’s integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems including Oracle Database, HP NonStop and Microsoft SQL Server into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI’s integration with GoldenGate, Raymond James now sees a 9 second median latency (from source commit to ODS target commit). The ODS solution delivers high quality, accurate data for consuming applications such as Raymond James’ next generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle’s data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users and for applications (for example, for cache hydration mechanisms). One cool innovation example among many in this project is that, using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes. And these processes have hardly ever failed; the integration processes fix errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan’s photo with the Innovation Award above.) This year's other winner in the data integration category, Morrison Supermarkets, is the UK’s 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform.
The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that also supports operational decision-making requirements from a wide range of Oracle-based ERP applications such as E-Business Suite, PeopleSoft, and Oracle Retail Suite. They use GoldenGate’s log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 minutes). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday’s blog, the right data integration platform can transform the business. Here is another example: the point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012, beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle’s data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • Simultaneously calling multiple methods on a WCF service from silverlight

    - by ola karlsson
    A while back I had to debug some performance issues in an existing Silverlight app. As the problem / solution was a bit obscure and finding info about it was quite tricky, I thought I’d share; maybe it can help the next person with this problem.

    The App

    On start, the app would do a number of calls to different methods on a WCF service to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This resulted in quite a long loading time for the whole UI, which was set up so it wouldn’t let the user interact with anything until all the service calls had finished. First I broke out the longer-running service call from the others, then removed the constraint that it had to be loaded for the UI in general to become responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section kept loading independently.

    The Problem

    However, this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn’t activate until the long-running call returned. So now I did what I should have done to start with: I got Fiddler out and had a look at what was really happening. What I found was that, once the call to the long-running service method was placed, all subsequent calls were waiting for that one to return before executing. Not having really worked with WCF previously or knowing much about it in general, I was stumped… I knew of the issues where Silverlight is restricted by the browser’s networking features with regard to the number of simultaneous connections, etc. However, that just didn’t seem to be the issue here; you could clearly see in Fiddler that there were numerous calls, but they just weren’t returning. I thought the problem might be in the WCF service, but the calls were really not that complicated, and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario: I hit the search engines. I did a whole bunch of searching on things like “multiple simultaneous WCF calls from Silverlight” and “Calling long running WCF services from Silverlight”, etc. This, however, pretty much got me nowhere; I found a whole heap of resources on how to do WCF calls from Silverlight, but most of them were very basic and of no use whatsoever.

    The fog is clearing

    It wasn’t until I came across the term “WCF blocking calls” and started incorporating it in my searches that I started to get somewhere. Those searches quite quickly brought me to the following thread in the Silverlight forum, “Long-running WCF call blocking subsequent calls”, which discussed the exact problem I was facing and, best of all, one of the guys there had the solution! The short answer is in the forum post, and the guy answering has also done a more extensive blog post about it called “Silverlight, WCF, and ASP.Net Configuration Gotchas”, which covers it very well. So come on, what’s the solution?! I heard you ask, unless you’ve already gone to the links and looked it up ;)

    The Solution

    Well, it turns out that the issue is founded in a mix of Silverlight, ASP.NET and WCF. Basically, if you’re doing multiple calls to a single WCF web service and you have ASP.NET session state enabled, the calls will be executed sequentially by the service, hence any long-running calls will block subsequent ones. So why is ASP.NET session state affecting us? We’re working in Silverlight, right? Well, as mentioned earlier, by default Silverlight uses the browser’s networking stack when doing service calls, hence to the WCF service the call looks like it might as well be coming from a normal ASP.NET application. To get around this, we look to a feature introduced in Silverlight 3, namely the Client HTTP Stack.

    The Client HTTP Stack to the rescue

    By using the following syntax (for example in our App.xaml.cs, Application_Startup method) WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp); we can set our Silverlight application to use the Client HTTP Stack, which incidentally solves our problem! By using Silverlight’s own networking stack, rather than that of the browser, we get around the ASP.NET - WCF session state issue. The above code specifies that all calls to addresses starting with “http://” should go through the client stack; this can actually be set more granularly, and you can specify that it be used only for certain domains, etc.

    Summary

    The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and maybe making it a bit easier for the next poor guy with this issue to find the solution. Until next time, Ola
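    To make that concrete, here is a rough sketch of where that one line might live, based on the standard Silverlight App.xaml.cs template (the namespace and MainPage below are template placeholders, not from the app described above):

    using System.Net;
    using System.Net.Browser;
    using System.Windows;

    namespace SampleSilverlightApp   // placeholder namespace
    {
        public partial class App : Application
        {
            public App()
            {
                this.Startup += this.Application_Startup;
                InitializeComponent();
            }

            private void Application_Startup(object sender, StartupEventArgs e)
            {
                // Route all http:// requests through Silverlight's client HTTP stack
                // instead of the browser's stack, so the WCF calls are no longer
                // serialized by the ASP.NET session behaviour described above.
                WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

                this.RootVisual = new MainPage();   // MainPage comes from the project template
            }
        }
    }

    If you only want specific services to go through the client stack, you can register a narrower prefix (for example the base address of your WCF service) instead of "http://".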

    Read the article

  • Autocad on linux ubuntu 11.10!

    - by gabriel
    I have been trying for 3 years now to install AutoCAD, 3ds Max and Revit Architecture on Ubuntu with the help of Wine! Every year I am very optimistic because I see that the new Wine versions have already improved. So now I am starting again on a clean Ubuntu install, trying to install AutoCAD 2013 with Wine 1.4. I am not trying to get an answer only for myself; I want the whole Ubuntu community to try this so that finally we can achieve it! Winetricks already offers .NET Framework 4 to install, the lack of which is the reason I have not been able to run AutoCAD in the past. So, I would like to completely remove my Windows 7 partition from my PC and move to a Linux machine without losing the powerful architectural programs. I know all about Blender and such, so I just want you to help find a solution for this, because I know there is a solution! Maybe I will have to learn all the C++ or Python etc. stuff. But I am sure that a solution can come with the help of all of us! Any suggestion about this problem would be very nice and helpful. Thanks in advance! Gabriel

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >