Search Results

Search found 27143 results on 1086 pages for 'include path'.


  • Java Embedded Releases

    - by Tori Wieldt
    Oracle today announced a new product in its Java Platform, Micro Edition (Java ME) product portfolio: Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for resource-constrained, connected, embedded systems. Oracle is also releasing Oracle Java Wireless Client 3.2 and Oracle Java ME Software Development Kit (SDK) 3.2, and announced Oracle Java Embedded Suite 7.0 for larger embedded devices, providing a middleware stack for embedded systems. Small is the new big!

    Introducing Oracle Java ME Embedded 3.2
    Oracle Java ME Embedded 3.2 is designed and optimized to meet the unique requirements of small embedded, low-power devices such as microcontrollers and other resource-constrained hardware without screens or user interfaces. These include:
    - On-the-fly application downloads and updates
    - Remote operation, often in challenging environments
    - The ability to add new capabilities without impacting the existing functions
    - Support for hardware with as little as 130 kB RAM and 350 kB ROM

    Oracle Java Wireless Client 3.2
    Oracle Java Wireless Client 3.2 is built around an optimized Java ME implementation that delivers a feature-rich application environment for mass-market mobile devices. This new release:
    - Leverages standard JSRs, Oracle optimizations/APIs and a flexible porting layer for device-specific customizations, which are tuned to device/chipset requirements
    - Supports advanced tooling functions, such as memory and network monitoring and on-device tooling
    - Offers new support for dual SIM functionality, which is highly useful for mass-market devices supported by multiple carriers with multiple phone connections

    Oracle Java ME SDK 3.2
    Oracle Java ME SDK 3.2 provides a complete development environment for both Oracle Java ME Embedded 3.2 and Oracle Java Wireless Client 3.2. Available for download from OTN, the latest version includes:
    - Small embedded device support
    - In-field and remote administration and debugging
    - Java ME SDK plug-ins for Eclipse and the NetBeans Integrated Development Environment (IDE), enabling more application development environments for Java ME developers
    - A new device skin creator that developers can use to generate their custom device skins for testing their applications

    Oracle Java Embedded Suite 7.0
    The Oracle Java Embedded Suite is a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to and from other embedded devices. The Oracle Java Embedded Suite is a complete device-to-data-center solution subset for embedded systems.

    See Java ME and Java Embedded in Action
    Java ME and Java Embedded technologies will be showcased for developers at JavaOne 2012 in over 60 conference sessions and BOFs, as well as in the JavaOne Exhibition Hall. For business decision makers, the new event Java Embedded @ JavaOne is where you can learn more about Java Embedded technologies and solutions.

    Read the article

  • We need you! Sign up now to give Oracle your feedback on future product design trends at OpenWorld 2012

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience

    Get the most from your Oracle OpenWorld 2012 experience and participate in a usability feedback session, where your expertise will help Oracle develop unbeatable products and solutions. Sign up to attend a one-hour session during Oracle OpenWorld. You’ll learn about Oracle’s future design trends -- including mobile applications and social networking -- and how these trends will affect your users down the road.

    (Photo: A street scene from Oracle OpenWorld 2011.)

    Oracle’s usability experts will guide you through practical learning sessions on the user experience of various business applications, middleware, and more. All user feedback sessions will be conducted October 1–3 at the InterContinental San Francisco Hotel on Howard Street, just a few steps away from the Moscone Center. To best match you with a user feedback activity, we will ask you about your role at your company. Our user feedback opportunities include focus groups, surveys, and one-on-one sessions with usability engineers.

    What do you get out of it? Customer and partner participants in the past have been surprised to learn how tuned in Oracle is to the work that their applications users do every day. Oracle’s User Experience team members are trained to listen carefully, ask specific questions, interpret your answers, and work with designers to create products and solutions that suit your needs. Our goal is to help make you and your users more productive and efficient. Learn about Oracle’s process, and take advantage of the chance to give your specific feedback to the designers who create the enterprise applications of your future. See for yourself how Oracle collects feedback and measures its designs for turning them into code.

    Seats are limited for Oracle’s user feedback sessions, so sign up now by sending an e-mail to [email protected] with the subject line: Sign Me Up for an Oracle OpenWorld 2012 UX Session. For more information about customer feedback sessions and what you can learn from them, please visit the Usable Apps website.

    When: Monday-Wednesday during OpenWorld 2012, Oct. 1-3
    Where: The InterContinental San Francisco Hotel
    How to sign up: RSVP now by sending an email to [email protected] with the subject line “Sign me up for an OOW 2012 UX Session.”
    Learn more: Visit the Usable Apps website at Get Involved.

    Read the article

  • How do you handle objects that need custom behavior, and need to exist as an entity in the database?

    - by Scott Whitlock
    For a simple example, assume your application sends out notifications to users when various events happen. So in the database I might have the following tables:

    TABLE Event
        EventId    uniqueidentifier
        EventName  varchar

    TABLE User
        UserId     uniqueidentifier
        Name       varchar

    TABLE EventSubscription
        EventUserId
        EventId
        UserId

    The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So, the application itself doesn't edit the Event table, except during initial installation, and during an update where a new Event might be created. At some point, when an event is generated, the application needs to look up the Event and get a list of Users. What's the best way to link the event in the source code to the event in the database?

    Option 1: Store the EventName in the program as a fixed constant, and look it up by name.
    Option 2: Store the EventId in the program as a static Guid, and look it up by ID.

    Extra Credit
    In other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my Event entity class with different behaviors, and when I look up an event, I want it to return an instance of my subclass. For instance:

    class Event
    {
        public Guid Id { get; }
        public string EventName { get; }
        public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; }

        public void NotifySubscribers()
        {
            foreach(var eventSubscription in EventSubscriptions)
            {
                eventSubscription.Notify();
            }
            this.OnSubscribersNotified();
        }

        public virtual void OnSubscribersNotified() {}
    }

    class WakingEvent : Event
    {
        private readonly IWaker waker;

        public WakingEvent(IWaker waker)
        {
            if(waker == null) throw new ArgumentNullException("waker");
            this.waker = waker;
        }

        public override void OnSubscribersNotified()
        {
            this.waker.Wake();
            base.OnSubscribersNotified();
        }
    }

    So, that means I need to map WakingEvent to whatever key I'm using to look it up in the database. Let's say that's the EventId. Where do I store this relationship? Does it go in the event repository class? Should the WakingEvent declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the WakingEvent like this:

    public T GetEvent<T>() where T : Event
    {
        ... // what goes here? ...
    }

    I can't be the first one to tackle this. What's the best practice?
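
    One way to wire up that lookup, sketched under the assumption that each subclass declares its own ID as a static member (the EventRegistry type and its names are hypothetical, not from the original question):

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch: a registry mapping database EventIds to factories
    // for the matching Event subclass, so the repository can hydrate the
    // right type after reading a row.
    class EventRegistry
    {
        private readonly Dictionary<Guid, Func<Event>> factories =
            new Dictionary<Guid, Func<Event>>();

        public void Register<T>(Guid eventId, Func<T> factory) where T : Event
        {
            factories[eventId] = () => factory();
        }

        public Event Create(Guid eventId)
        {
            Func<Event> factory;
            if (!factories.TryGetValue(eventId, out factory))
                throw new InvalidOperationException("Unknown event id: " + eventId);
            return factory();
        }
    }

    The repository would call registry.Create(row.EventId) after loading a row, and registration would happen once at startup, e.g. registry.Register(WakingEvent.EventId, () => new WakingEvent(waker)), assuming WakingEvent exposes its ID statically. That keeps the code-to-database mapping in one composition-root location instead of scattering it across the subclasses or the repository.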

    Read the article

  • Pragmas and exceptions

    - by Darryl Gove
    The compiler pragmas:

    #pragma no_side_effect(routinename)
    #pragma does_not_write_global_data(routinename)
    #pragma does_not_read_global_data(routinename)

    are used to tell the compiler more about the routine being called, and enable it to do a better job of optimising around the routine. If a routine does not read global data, then global data does not need to be stored to memory before the call to the routine. If the routine does not write global data, then global data does not need to be reloaded after the call. The no_side_effect directive indicates that the routine does no I/O, does not read or write global data, and the result only depends on the input. However, these pragmas should not be used on routines that throw exceptions. The following example illustrates the problem:

    #include <iostream>

    extern "C"
    {
      int exceptional(int);
      #pragma no_side_effect(exceptional)
    }

    int exceptional(int a)
    {
      if (a==7) { throw 7; }
      else { return a+1; }
    }

    int a;
    int c=0;

    class myclass
    {
      public:
      int routine();
    };

    int myclass::routine()
    {
      for(a=0; a<1000; a++)
      {
        c=exceptional(c);
      }
      return 0;
    }

    int main()
    {
      myclass f;
      try
      {
        f.routine();
      }
      catch(...)
      {
        std::cout << "Something happened" << a << c << std::endl;
      }
    }

    The routine "exceptional" is declared as having no side effects; however, it can throw an exception. The no_side_effect directive enables the compiler to avoid storing global data back to memory, and retrieving it after the function call, so the loop containing the call to exceptional is quite tight:

    $ CC -O -S test.cpp
    ...
    .L77000061:
    /* 0x0014  38 */  call    exceptional  ! params = %o0  ! Result = %o0
    /* 0x0018  36 */  add     %i1,1,%i1
    /* 0x001c     */  cmp     %i1,999
    /* 0x0020     */  ble,pt  %icc,.L77000061
    /* 0x0024     */  nop

    However, when the program is run the result is incorrect:

    $ CC -O t.cpp
    $ ./a.out
    Something happened00

    If the code had worked correctly, the output would have been "Something happened77" - the exception occurs on the seventh iteration. Yet the current code produces a message that uses the original values of the variables 'a' and 'c'. The problem is that the exception handler reads global data, and due to the no_side_effect directive the compiler has not updated the global data before the function call. So these pragmas should not be used on routines that have the potential to throw exceptions.

    Read the article

  • Need help identifying what resources (e.g. in MIT OpenCourseWare) can help me prepare for a test [closed]

    - by jiewmeng
    I am entering uni soon. I can sit for a placement test to see if I am eligible for exemptions. The details are at http://www.comp.nus.edu.sg/undergraduates/TestScope11_12.html

    CS2100 Computer Organisation (please click title)
    The objective of this module is to familiarise students with the fundamentals of computing devices. Through this module students will understand the basics of data representation, and how the various parts of a computer work, separately and with each other. This allows students to understand the issues in computing devices, and how these issues affect the implementation of solutions. Topics covered include data representation systems, combinational and sequential circuit design techniques, assembly language, processor execution cycles, pipelining, memory hierarchy and input/output systems.

    Recommended Textbooks
    - Digital Design: Principles and Practices [DDPP] by John F. Wakerly, Prentice-Hall. ISBN 0-13-324500-4.
    - Computer Organizations and Design (The hardware/software interface) by David A. Patterson and John L. Hennessy.

    CS2105 Introduction to Computer Networks (please click title)
    This course aims to provide a broad introduction to computer networks and some appreciation of network application programming. It covers a range of topics including basic data communication and computer network concepts, protocols, networked computing concepts and principles, network applications development and network security. The emphasis of teaching is on the working principles and application of computer networks. As an integral part of the course, tutorials and practical assignments enforcing learning will also be given. These assignments provide an early exposure to network application programming, and students should be able to complete them using personal computers and the school's network facilities.

    Topics included:
    - An overview of computer networks and the Internet
    - Basic data communications
    - Application layer
    - Transport layer
    - Network layer and routing
    - Link layer and local area networks

    Recommended Textbook
    - James F. Kurose & Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Addison Wesley, 2001

    I am wondering what resources, e.g. MIT OpenCourseWare or other universities' resources, are available to help me prepare for these particular modules. Does the networking module look like CCNA? And the computer organization one - is it like electronics, assembly, etc.? I learnt some electronics in Poly, but looking at the sample papers, uni looks very different... I have about 1 month to prepare if I want any chance of being exempted from these modules :) any help?

    Read the article

  • Can see samba shares but not access them

    - by nitefrog
    For the life of me I cannot figure this one out. I have Samba installed and set up on the Ubuntu box, and from the Win7 box I CAN SEE all the shares I created. I created two users on Ubuntu that map to the users in Windows. On Ubuntu they are both admins; on Windows user A is an admin and user B is a power user. User A can see both shares and access them, but user B can see everything yet only access the homes directory; the other directory throws an error.

    I have two drives in Ubuntu and this is the smb.conf file (I am new to Samba):

    [global]
    workgroup = WORKGROUP
    server string = %h server (Samba, Ubuntu)
    wins support = no
    dns proxy = yes
    name resolve order = lmhosts host wins bcast
    log file = /var/log/samba/log.%m
    max log size = 1000
    syslog = 0
    panic action = /usr/share/samba/panic-action %d
    security = user
    encrypt passwords = true
    passdb backend = tdbsam
    obey pam restrictions = yes
    unix password sync = yes
    passwd program = /usr/bin/passwd %u
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    pam password change = yes
    map to guest = bad user
    ; usershare max shares = 100
    usershare allow guests = yes

    And here is the share section. Both user A & B can access this from Windows, no problems:

    [homes]
    comment = Home Directories
    browseable = no
    writable = yes

    Both user A & B can see this share, but only user A can access it. User B gets an error thrown:

    [stuff]
    comment = Unixmen File Server
    path = /media/data/appinstall/
    browseable = yes
    ;writable = no
    read only = yes
    hosts allow =

    The permissions for /media/data/appinstall/ are as follows:

    appInstall properties:
    share name: stuff
    "Allow others to create and delete files in this folder" is checked
    "Guest access (for people without a user account)" is checked

    permissions:
    Owner: user A
    Folder Access: Create and delete files
    File Access: ---
    Group: user A
    Folder Access: Create and delete files
    File Access: ---
    Others
    Folder Access: Create and delete files
    File Access: ---

    I am at a loss and need to get this to work. Any ideas? The goal is to have a setup like this: 3 users on Windows machines. Each user on the data drive will have their own personal folder that only they can access, then another folder where 2 of the users will have read-only access and one user full access. I had this setup before on Windows, but after what happened I am NEVER going back to Windows, so Unix here I am to stay! I am really stuck. I am running Ubuntu 11. I could reformat again and put on version 10 if that would make life easier. I have been dealing with this since Wed. 3pm. Thanks.

    Read the article

  • Driving Growth through Smarter Selling

    - by Samantha.Y. Ma
    With the proliferation of social media and mobile technologies, the world of selling and buying has drastically changed, as buyers now have access to more information than they did in the past. In fact, studies have shown that buyers complete 60 percent of the buying process before they even engage with a salesperson. The old models of selling no longer work effectively; the new way of selling is driven by customer insights. To succeed, sales need to be proactive, not reactive. They need to engage with the customer early, sometimes even before the customer’s needs are fully understood. In fact, the best sales reps prescribe a solution that the customer doesn't even know they need, often by leveraging social media to listen, engage and collaborate with peers. And they fully tap into the power of analytics and data to drive results.

    Let’s look at some stats regarding challenges facing sales today. According to recent studies, sales reps spend 78 percent of their time doing administrative things -- such as planning, searching for information, data entry -- and only 22 percent of the time actually selling. Furthermore, 40 percent of B2B sales reps miss their quota, and only 3 percent of companies can say with confidence that their forecasts are “always accurate.”

    How do you drive growth in this modern day and age? It's not just getting your sales teams to work harder; it's helping them work smarter and providing them with a solution they want to use, on the device(s) they already know, giving them critical insights and tools to be more productive, increase win rates, and close deals faster. Oracle Sales Cloud was designed to do exactly that. It enables smarter selling that allows reps to sell more, managers to know more, and companies to grow more.

    Let’s face it—if all CRM solutions worked well, sales executives wouldn’t be having the same headaches as they had in the past. Join Oracle’s Thomas Kurian and Doug Clemmans on Tuesday, October 22 as they explain:
    • How today’s sales processes have rendered many CRM systems obsolete
    • The secrets to smarter selling, leveraging mobile, social, and big data
    • How Oracle Sales Cloud enables smarter selling—as proven by Oracle and its customers

    Take the first step down the path toward smarter selling. With Oracle Sales Cloud, reps sell more, managers know more, and companies grow more.

    Read the article

  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1 compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly, because when I use immediate mode without shaders the texture displays fine, but I will include that part just in case. Also, when I change the gl_FragColor to something like vec4(0.0, 1.0, 1.0, 1.0) then I get a nice blue quad, so I know that my shader is able to set the colour. It appears that either the texture is not being bound correctly in the shader, or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not XNA).

    Vertex Shader:

    void main()
    {
        gl_TexCoord[0] = gl_MultiTexCoord0;

        // Set the position of the current vertex
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    Fragment Shader:

    uniform sampler2D diffuseTexture;

    void main()
    {
        // Set the output color of our current pixel
        gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st);
        //gl_FragColor = vec4(0.0, 1.0, 1.0, 1.0);
    }

    Drawing Code:

    int vb, eb;
    GL.GenBuffers(1, out vb);
    GL.GenBuffers(1, out eb);

    // Position           Texture
    float[] verts = {
        0.1f, 0.1f, 0.0f,  0.0f, 0.0f,
        1.9f, 0.1f, 0.0f,  1.0f, 0.0f,
        1.9f, 1.9f, 0.0f,  1.0f, 1.0f,
        0.1f, 1.9f, 0.0f,  0.0f, 1.0f
    };
    uint[] indices = { 0, 1, 2, 0, 2, 3 };

    // Upload data to the VBO
    GL.BindBuffer(BufferTarget.ArrayBuffer, vb);
    GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb);
    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw);
    GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw);

    // Upload texture
    int buffer = GL.GenTexture();
    GL.BindTexture(TextureTarget.Texture2D, buffer);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear);
    GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate);
    GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data);

    // Draw
    GL.UseProgram(shaderProgram);
    GL.EnableClientState(ArrayCap.VertexArray);
    GL.EnableClientState(ArrayCap.TextureCoordArray);
    GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0);
    GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3);
    GL.ActiveTexture(TextureUnit.Texture0);
    GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0);
    GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP.NET, Windows Forms, WPF and Silverlight, plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities.

    Unified XAML Product Strategy - Share Code, Get More Controls
    In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications.

    Track Users' Behaviors
    New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze users' behaviors to ensure the experience you want to deliver.

    NetAdvantage for Windows Forms - New Office® 2010 Ribbon and Application Menu 2010
    Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as the Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment.

    Create Faster Web 2.0 Experiences with NetAdvantage for ASP.NET
    Infragistics continues to push the envelope to deliver the fastest ASP.NET WebForms controls available on the market. Our lightning fast ASP.NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls, allowing you to quickly cut down overall page size.

    Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience
    NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data.

    Mapping Made Easy
    10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spatial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like choropleth shading, and scale panes allowing users to zoom in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.
http://www.infragistics.com

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color"), compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse this with "32 bit color", which usually refers to standard 8 bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows etc).

    The following can be assumed to be in place:
    1. A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created.
    2. A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort).
    3. Drivers for the graphics card that support 10-bit output.

    If applications that support 10-bit output and color profiles were used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an AdobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor.

    For example: if the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality.

    My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc? Does Ubuntu differ from other operating systems (Linux and others) on this point?
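
    As a side note, the arithmetic in the example above can be sketched in a few lines of C# (illustrative only; the 987/1023 correction factor is taken from the example in the question, not from a real ICC profile):

    using System;

    class DepthDemo
    {
        // Example gamut correction factor from the text: red 255 maps to 987
        // of 1023 on the wide-gamut panel (or 246 of 255 in an 8-bit path).
        const double Correction = 987.0 / 1023.0;

        static int Map(int value, int inMax, int outMax)
        {
            // Scale to the output bit depth, apply the gamut correction,
            // then round to the nearest representable level.
            return (int)Math.Round((double)value / inMax * outMax * Correction);
        }

        static void Main()
        {
            // 10-bit path: each input step moves ~3.87 output levels, so all
            // 256 sRGB values stay distinct (255 -> 987).
            Console.WriteLine(Map(255, 255, 1023));
            // 8-bit path: each input step moves ~0.96 levels, so some inputs
            // collide and only 247 distinct levels survive (255 -> 246).
            Console.WriteLine(Map(255, 255, 255));
        }
    }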

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspx

    Software is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I’ve created a list of user stories that I think should be included and sometimes are just assumed. Note this is a work in progress, so I’m looking for your feedback. I’m curious what you would add or change in my list.

    · As a DBA I am working with a normalized data model that reflects an agreed upon logical model for the system
    · As a DBA I am using consistent names for my fields which match the naming standards of my organization
    · As a DBA my model supports simple CRUD operations against all the entities
    · As an Application Architect the UI has been validated against the business requirements and a complete set of user stories has been created
    · As an Application Architect the database model has been validated against the UI
    · As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software’s expected lifecycle
    · As an Application Architect we have a deployment diagram that describes how the application components will be deployed
    · As an Application Architect we have a navigation diagram that describes the typical application flow
    · As an Application Architect we have identified points of interaction which describe how the UI interacts with the services and the data storage
    · As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions
    · As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve its business goals
    · As a Project Manager all team members understand the goals of each release and iteration as they are planned
    · As a Project Manager all team members understand their role and the roles of others
    · As a Project Manager we have support of the business to do the right thing even if it is not the expedient thing
    · As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system
    · As a Test/QA Analyst we have identified the matrix of supported clients used to access the system, including the likely browsers, mobile devices and other interfaces to work with the application
    · As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create them
    · As a Test/QA Analyst we have access to a test environment that is isolated from production and staging environments
    · As a Test/QA Analyst we have a way to reset the environment so we can rerun tests when a new version of the software becomes available
    · As a Test/QA Analyst I am able to automate portions of the test process

    Thoughts? -mike

    Read the article

  • Oracle At QCon SF 2012

    - by Cassandra Clark - OTN
    Oracle Technology Network is a Platinum sponsor at QCon San Francisco (qconsf.com). Don’t miss these great developer focused sessions:

    Shay Shmeltzer: How we simplified Web, Mobile and Cloud development for our own developers - the Oracle story
    Over the past several years, Oracle has been developing a new set of enterprise applications in what is probably one of the largest Java based development projects in the world. How do you take 3000 developers and make them productive? How do you ensure the delivery of cutting edge UIs for both Mobile and Web channels? How do you enable Cloud based development and deployment? Come and learn how we did it at Oracle, and see how the same technologies and methodologies can apply to your development efforts.

    Dan Smith: Project Lambda in Java 8
    Java SE 8 will include major enhancements to the Java Programming Language and its core libraries. This suite of new features, known as Project Lambda in the OpenJDK community, includes lambda expressions, default methods, and parallel collections (and much more!). The result will be a next-generation Java programming experience with more flexibility and better abstractions. This talk will introduce the new Java features and offer a behind-the-scenes view of how they evolved and why they work the way that they do.

    Arun Gupta: JSR 356 - Building HTML5 WebSocket Applications in Java
    The family of HTML5 technologies has pushed the pendulum away from rich client technologies and toward ever-more-capable Web clients running on today’s browsers. In particular, WebSocket brings new opportunities for efficient peer-to-peer communication, providing the basis for a new generation of interactive and “live” Web applications. This session examines the efforts under way to support WebSocket in the Java programming model, from its base-level integration in the Java Servlet and Java EE containers to a new, easy-to-use API and toolset that are destined to become part of the standard Java platform.

    The full conference schedule is here: http://qconsf.com/sf2012/schedule/wednesday.jsp

    But wait, there’s more! At the Oracle booth, we’ll also be covering:
    · Oracle ADF Mobile
    · Oracle Developer Cloud Service
    · Oracle ADF Essentials
    · NetBeans Project Easel

    Lastly, we’ll share the results of a short cloud survey at QConSF later this week. If you attended this year's Oracle OpenWorld and JavaOne conferences, it would be hard not to notice that Oracle is clearly "all-in" when it comes to the Cloud. With Cloud computing being such a hot topic on many OTN members' minds, we'd like to know what you're doing in the cloud and invite you to take this short cloud survey.

    Read the article

  • Oracle Partner Architects Training

    - by mseika
    Dear Oracle Partner,

    There is a lot more to Oracle technology than meets the eye. Sure, you already belong to a small circle of our most experienced and committed partners. But are you making the best use possible of our technology solutions? Put it to the test. Join the “Oracle Partner Architects Training”. It is aimed at providing your experts, architects and consultants with in-depth architectural knowledge about Oracle technology. Here is your chance to learn from the best. Seasoned speakers, exclusive content and no product marketing. Oracle technology beyond the obvious.

    Choose from any of the 40 recorded training sessions. Topics include:
    • Security
    • Service integration
    • Database and options
    • Data integration
    • BI and applications
    • Applications and infrastructure
    • Hardware and software combinations

    The market and Oracle value specialized partners
    More information about specialization can be found on opn.oracle.com. Click through to OPN Program/Specialize.

    “What’s in it for us?” Quite simply: the opportunity to gain the differentiation and competitive edge you need to stand out in the marketplace.
    • Differentiate your company through expertise in leading Oracle IT solutions
    • Get your experts, architects and consultants up to speed on specialized services and solutions
    • Make our customers’ shortlists. They are looking for value-added solutions for their business.

    Recordings
    All sessions are recorded. After registering for a session in oraevents, you will receive the info to access the webex recording. Your timing, your tempo.

    Registration and more information
    Visit architects.oraevents.eu to sign up for the recorded sessions.

    NOTE: Looking to get your consultants Oracle certified? One more reason to join the Oracle Partner Architects Training. It is the fast track to getting their expertise validated with an Oracle certificate.

    Training schedule
    Choose from any of the 40 recorded training sessions:

    SECURITY - THE PRACTICAL APPROACH
    • Identity governance
    • Access management
    • Data privacy and protection
    • End-to-end security, layers of exposures
    • Identity & access management: why and where to start?
    • Data security: how?

    SERVICE INTEGRATION - A NEW ROAD TO ENTERPRISE-WIDE SERVICE INTEGRATION
    • Oracle RUEI: maximize business value by insight into real end-user experiences
    • Governance challenges in the services landscape
    • Creating an agile enterprise (by Jeff Davies)
    • Oracle’s approach to SOA (by Jeff Davies) - guiding and accelerating SOA success
    • Technical case study - the SOA challenge
    • Oracle’s unified business process management suite 11g (incl. demo)

    DATABASE - DATABASE AND OPTIONS, GOING WIDE
    • Understanding service level agreements for databases
    • Database lifecycle management
    • Data centric information lifecycle management

    DATA INTEGRATION - DIS FOR ARCHITECTS
    • Data integration solutions: an overview
    • ODI and GoldenGate
    • Data quality

    Read the article

  • Size doesn't matter

    - by ssoolsma
    Whenever I start a new project I *always* break up my code into different projects, also known as an n-tier solution. The scale of the project doesn't matter, but make sure that each project is responsible for itself. I make sure that I:

    1. At least thought about how the project should work, on the toilet or in a project team meeting.
    2. Have a solution directory and create my projects within it. I like to name my projects (and their folders) by their namespaces. For instance, when I'm creating a piece of (web)software called ChuckNorris, I always include the software name in my projects. Start off with designing the DataAccess project. I name it ChuckNorris.DataAccess, which lets me easily identify the project in case the project scales a lot.
    3. Build the classes which represent the database structure. Don't stop working on a class until it's finished for now. Also, don't over-do the methods. Build stuff only when it's needed, and not when you think: "Hm, that would be cool to have." Cause most of the time you end up with unused code, and we don't want that.
    4. Build a unit test project, and make sure you create its folder inside the project that it's testing. So, create the ChuckNorris.DataAccess.UnitTest project inside the folder of the data-access project. I would suggest using the nUnit test framework.
    5. In case you thought "hm, I'll skip the unit tests": Don't! Just build it - it will save you a lot of time later on.
    6. Now, read 5 again. Build that bloody unit test. Don't skip. (I can't emphasize this enough.)
    7. Now, every class in the data-access project is responsible for itself. They don't rely on each other. This is what we use the BusinessLogic project for. Start creating the ChuckNorris.BusinessLogic project (not inside the data-access project, of course, but within the ChuckNorris folder).
    8. Combine stuff from data-access. This usually involves a lot of copying the data-access classes and feels silly at first. (We'll get to that later on.)
    9. Now you come up to the point of creating a service project. You might not always see why to use it, but see it as a way to expose your business logic to any application (including your own). Sometimes I use it as a so-called "factory". Every call goes through this factory, and I make sure that those methods are the only ones that I allow you to invoke.
    10. Build any UI (website, phone app, forms application, Silverlight, WPF or whatever) and reference it to your service project.
    11. Fall in love (cough) with this approach.

    It's possible that this doesn't seem to make much sense, and looks very incomplete. Well, that last part is correct. The next post will go into detail on setting up your data-access project and using the Entity Framework.
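
    A minimal skeleton of the layering described above, using the post's ChuckNorris naming (the class and method names are placeholders, not from the original post):

    // Sketch: one class per layer to show the dependency direction
    // UI -> Service -> BusinessLogic -> DataAccess.
    namespace ChuckNorris.DataAccess
    {
        public class CustomerStore // mirrors a database table
        {
            public string GetName(int id) { return "name for " + id; } // would query the database
        }
    }

    namespace ChuckNorris.BusinessLogic
    {
        public class CustomerLogic // combines data-access classes
        {
            private readonly DataAccess.CustomerStore store = new DataAccess.CustomerStore();
            public string Describe(int id) { return "Customer: " + store.GetName(id); }
        }
    }

    namespace ChuckNorris.Service
    {
        public class ServiceFactory // the single surface exposed to any UI
        {
            private readonly BusinessLogic.CustomerLogic logic = new BusinessLogic.CustomerLogic();
            public string DescribeCustomer(int id) { return logic.Describe(id); }
        }
    }

    The UI project would then reference only ChuckNorris.Service, which keeps the data-access and business-logic layers swappable behind the factory.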

    Read the article

  • With AMD style modules in JavaScript is there any benefit to namespaces?

    - by gman
    Coming from C++ originally, and seeing lots of Java programmers doing the same, we brought namespaces to JavaScript. See Google's Closure library as an example, where they have a main namespace, goog, and under that many more namespaces like goog.async and goog.graphics.

    But now, having learned the AMD style of requiring modules, it seems like namespaces are kind of pointless in JavaScript. Not only pointless but even arguably an anti-pattern.

    What is AMD? It's a way of defining and including modules that removes all direct dependencies. Effectively you do this:

    // some/module.js
    define([
        'name/of/needed/module',
        'name/of/someother/needed/module',
      ], function(
        RefToNeededModule,
        RefToSomeOtherNeededModule) {
      ...code...
      return object or function
    });

    This format lets the AMD support code know that this module needs name/of/needed/module.js and name/of/someother/needed/module.js loaded. The AMD code can load all the modules and then, assuming no circular dependencies, call the define function on each module in the correct order, record the object/function returned by the module as it calls them, and then call any other modules' define function with references to those modules.

    This seems to remove any need for namespaces. In your own code you can call the reference to any other module anything you want. For example, if you had 2 string libraries, even if they define similar functions, as long as they follow the AMD pattern you can easily use both in the same module. No need for namespaces to solve that.

    It also means there are no hard coded dependencies. For example, in Google's Closure any module could directly reference another module with something like var value = goog.math.someMathFunc(otherValue), and if you're unlucky it will magically work, whereas with the AMD style you'd have to explicitly include the math library, otherwise the module wouldn't have a reference to it, since there are no globals with AMD.

    On top of that, dependency injection for testing becomes easy. None of the code in the AMD module references things by namespace, so there are no hardcoded namespace paths, and you can easily mock classes at testing time.

    Is there any other point to namespaces, or is that something that C++ / Java programmers are bringing to JavaScript that arguably doesn't really belong?

    Read the article

  • How to manage a Closed Source High-Risk Project?

    - by abel
    I am currently planning to develop a J2EE website and wish to bring in 1 developer and 1 web designer to assist me. The project is a financial app with a niche market. I plan to keep the source closed. However, I fear that my would-be employees could easily copy the codebase and use it or sell it to a third party, especially when they switch jobs. The app development will take 4-6 months, perhaps more, and I may have to bring in people after the app goes live. How do I keep the source to myself? Are there techniques companies use to guard their source? I foresee disabling pendrives and DVD writers on my development machines, but uploading data or attaching the code in one's mail would still be possible. My question is incomplete, but programmers who have been in my situation, please advise. How should I go about this? Building a team, maintaining code secrecy, etc. I am looking forward to signing a secrecy contract with the employees if needed too. (Please add relevant tags)

    Update
    Thank you for all the answers. I certainly won't be disabling all USB ports and DVD writers now. But I think I should be logging activity (how exactly should I do that?). I am wary of scalpers who would join and then run off with the existing code. I haven't met any, but I have been advised to be wary of them. I would include a secrecy clause, but given this is a startup with almost no funding and in a highly competitive business niche with bigger players in the field, I doubt I would be able to detect or pursue any scalpers. How do I hire people I trust, when I don't know them personally? Their resume will be helpful, but otherwise trust will develop only with time. But finally, even if they do run away with the code, it is service that matters after the sale is made. So I am not really worried for the long term.

    Read the article

  • Problems using custom keyboard layout / symbols file

    - by January
    I have a custom xkb symbols file which looks as follows:

    // modify the basic German layout to have polish characters
    default partial alphanumeric_keys
    xkb_symbols "basic" {
        include "de(basic)"
        name[Group1]="Germany - with polish characters";
        key <AD03> { [ e, E, eogonek, Eogonek ] };
        key <AD09> { [ o, O, oacute, Oacute ] };
        key <AC01> { [ a, A, aogonek, Aogonek ] };
        key <AC02> { [ s, S, sacute, Sacute ] };
        key <AD06> { [ z, Z, zabovedot, Zabovedot ] };
        key <AB02> { [ x, X, zacute, Zacute ] };
        key <AB03> { [ c, C, cacute, Cacute ] };
        key <AB06> { [ n, N, nacute, Nacute ] };
    };

    The name of the file is depl. I copy the file to /usr/share/X11/xkb/symbols and it works with setxkbmap depl. However, I also tried to add the respective menu entries in the "Text Entry" customization. I have modified the file /usr/share/X11/xkb/rules/evdev.xml and added the following section:

    <layout>
      <configItem>
        <name>depl</name>
        <shortDescription>depl</shortDescription>
        <description>German (with Polish characters)</description>
        <languageList>
          <iso639Id>ger</iso639Id>
        </languageList>
      </configItem>
    </layout>

    I have then reconfigured the xkb data with sudo dpkg-reconfigure xkb-data. It works in as much as the new layout appears as a viable option in the Text Entry dialog: it can be added to the list of layouts and is visible in the application indicator. However, it does not work - the new symbols are not loaded. No errors are reported in /var/log/Xorg.0.log.

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon.

    At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation.

    Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis).

    My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective.

    Alex
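
    For what it's worth, one way to fake it under the geometry described above: project the aimer's two ground-plane axes through the same .707 perspective factor and use the lengths of the projected axes as the Vector2 scale. The following is only a sketch of that idea, not a definitive answer; it ignores the skew noted above, assumes XNA's Vector2, and the aimAngle parameter (measured on the XZ plane) is a hypothetical name:

    using System;
    using Microsoft.Xna.Framework;

    static class AimerProjection
    {
        const float Persp = 0.707f; // cos(45°), the factor from the question

        // Scale to pass to SpriteBatch.Draw: X stretches along the arrow,
        // Y along its width. Skew is ignored, so this is an approximation.
        public static Vector2 Scale(float aimAngle)
        {
            float c = (float)Math.Cos(aimAngle);
            float s = (float)Math.Sin(aimAngle);
            // Forward axis (c, s) on the ground projects to (c, Persp * s).
            float length = (float)Math.Sqrt(c * c + Persp * Persp * s * s);
            // Side axis (-s, c) on the ground projects to (-s, Persp * c).
            float width = (float)Math.Sqrt(s * s + Persp * Persp * c * c);
            return new Vector2(length, width);
        }

        // The on-screen rotation of the projected forward axis.
        public static float Rotation(float aimAngle)
        {
            return (float)Math.Atan2(Persp * Math.Sin(aimAngle), Math.Cos(aimAngle));
        }
    }

    This reproduces the behavior asked for: facing left/right gives a long, narrow arrow (scale 1 by .707), facing north/south gives a short, wide one (.707 by 1), and diagonals fall in between.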

    Read the article

  • Reading a ZFS USB drive with Mac OS X Mountain Lion

    - by Karim Berrah
    The problem: I'm using a MacBook, mainly with Solaris 11, but sometimes with Mac OS X (ML). The only missing thing is that Mac OS X can't read my external ZFS based USB drive, where I store all my data. So, I decided to look for a solution.

    Possible solution: I decided to use VirtualBox with a Solaris 11 VM as a passthrough to my data. Here are the required steps:

    Installing a Solaris 11 VM
    - Install VirtualBox on your Mac OS X, and add the extension pack (needed for USB).
    - Plug your ZFS based USB drive into your Mac; ignore it when asked to initialize it.
    - Create a VM for Solaris (bridged network), and before installing it, create a USB filter (in the settings of your VBox VM, go to Ports, then USB, then add a new USB filter from the attached device - the grey usb-connector logo with the green plus sign).
    - Install a Solaris 11 VM, boot it, and install the Guest Additions.
    - Check the IP address of your Solaris VM with "ifconfig -a".

    Creating a path to your ZFS USB drive
    - In Mac OS X, use the "Disk Utility" to unmount the USB attached drive, and unplug the USB device.
    - Switch back to VirtualBox, and select the top of the window where your Solaris 11 is running.
    - Plug in your ZFS USB drive, and select "Ignore" if Mac OS invites you to initialize the disk.
    - In the VirtualBox VM menu, go to "Devices", then "USB Devices", and select your USB device from the drop-down menu.

    Connecting your Solaris VM to the USB drive
    - Inside Solaris, you can now check that your device is accessible by using the "format" CLI command. If not, repeat the previous steps.
    - Now, with root privilege, force a zpool import -f myusbdevicepoolname, because this pool was created on another system.
    - Check that you see your new pool with "zpool status".
    - Share your pool over NFS: share -F nfs /myusbdevicepoolname

    Accessing the USB ZFS drive from Mac OS X
    This is the easiest step: access an NFS share from Mac OS.
    - Create a "ZFSdrive" folder on your Mac OS X desktop.
    - From a terminal under Mac OS X: mount -t nfs IPaddressofMySolarisVM:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive

    Et voilà! You can now access your data, on a ZFS USB drive, directly from your Mountain Lion desktop. You might play with the share rights in order to alter any read/write rights as needed, and you can activate compression, encryption and more inside the Solaris 11 VM...

    Read the article

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology and am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so whether I would qualify for a more prestigious position than "code monkey".

    Things I Have Going For Me
    - Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview.
    - Good math and statistics skills.
    - An extensive portfolio of open source work (and the knowledge that working on these projects implies):
      - I wrote a statistics library in D, mostly from scratch.
      - I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library.
      - I wrote a 2D plotting library for D against the GTK Cairo backend. I currently use it for most of the figures I make for my research.
      - I've contributed several major performance optimizations to the D garbage collector. (Most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling.)
      - I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it. (This demonstrates my ability to read other people's code.)

    Things I Have Going Against Me
    - Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community.
    - In general I have absolutely no knowledge of "enterprise-y" technologies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead.
    - I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc.
    - I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request the security details about the user will need to be checked with the area/controller/action that they are accessing to see if they are allowed to view it. The security information is stored in the database. For example: User Permission UserPermission Action ActionPermission A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table) then they have the token and can therefore access the action. I've been looking in to how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting a database (which is a considerable performance hit at the moment). I've tried storing things in lists, using a caching provider but I either run in to problems or performance doesn't improve. One problem that I constantly run in to is that I'm using lazy loading and dynamic proxies with EntityFramework. This means that even if I ToList() everything and store them somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify). One thing I know is that an EntityFramework DbContext is a unit of work that acts with 1st-level caching. That is, for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read about permission data. Upon testing this it worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request as normal. Is there any disadvantage to this approach? Could I be doing something different?

    Read the article

  • As the current draft stands, what is the most significant change the "National Strategy for Trusted Identities in Cyberspace" will provoke?

    - by mfg
    A current draft of the "National Strategy for Trusted Identities in Cyberspace" has been posted by the Department of Homeland Security. This question is not asking about privacy or constitutionality, but about how this act will impact developers' business models and development strategies. When the post was made I was reminded of Jeff's November blog post regarding an internet driver's license. Whether that is a perfect model or not, both approaches are attempting to handle a shared problem (of both developers and end users): how do we establish an online identity?

    The question I ask here is: with respect to the various burdens that would be imposed on developers and users, what are some of the major, foreseeable implementation issues that will arise from the current U.S. Government's proposed solution?

    For a quick primer on the setup, jump to page 12 for infrastructure components; here are two stand-outs:

    - An Identity Provider (IDP) is responsible for the processes associated with enrolling a subject, and establishing and maintaining the digital identity associated with an individual or NPE. These processes include identity vetting and proofing, as well as revocation, suspension, and recovery of the digital identity. The IDP is responsible for issuing a credential, the information object or device used during a transaction to provide evidence of the subject’s identity; it may also provide linkage to authority, roles, rights, privileges, and other attributes.
    - The credential can be stored on an identity medium, which is a device or object (physical or virtual) used for storing one or more credentials, claims, or attributes related to a subject. Identity media are widely available in many formats, such as smart cards, security chips embedded in PCs, cell phones, software-based certificates, and USB devices. Selection of the appropriate credential is implementation specific and dependent on the risk tolerance of the participating entities.

    Here are the first considered actionable components of the draft:

    - Action 1: Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy
    - Action 2: Develop a Shared, Comprehensive Public/Private Sector Implementation Plan
    - Action 3: Accelerate the Expansion of Federal Services, Pilots, and Policies that Align with the Identity Ecosystem
    - Action 4: Work Among the Public/Private Sectors to Implement Enhanced Privacy Protections
    - Action 5: Coordinate the Development and Refinement of Risk Models and Interoperability Standards
    - Action 6: Address the Liability Concerns of Service Providers and Individuals
    - Action 7: Perform Outreach and Awareness Across all Stakeholders
    - Action 8: Continue Collaborating in International Efforts
    - Action 9: Identify Other Means to Drive Adoption of the Identity Ecosystem across the Nation

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.

    Orchestration or Integration?

    A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it’s important to remember that, although a hammer can be used to drive a screw into wood, that doesn’t mean it’s the best way to do it.

    Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN

    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it’s certainly an important consideration for anyone using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component “Mediator”, whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: read the complete article here.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared host to another with Hostgator. GWT notified me about many 403 access-denied pages. I've checked with Firebug too: even though the browser displays the full page correctly, the HTTP response is 403. The home page, however, correctly returns a 200 response. The same is shown by Fetch as Google in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I've copied the files and database "AS IS", and I've even cleared all the caches, but no luck. There is only one change: previously the site was the primary domain, but now it's an add-on one. What could be the issue?

    This is how Googlebot fetched the page:

    Fetch as Google
    URL: http://MYSITE.COM/-----------------REMOVED.html
    Date: Thursday, June 20, 2013 at 10:32:14 PM PDT
    Googlebot Type: Web
    Download Time (in milliseconds): 3899

    HTTP/1.1 403 Forbidden
    Date: Fri, 21 Jun 2013 05:32:15 GMT
    Server: Apache
    P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
    Expires: Mon, 1 Jan 2001 00:00:00 GMT
    Cache-Control: post-check=0, pre-check=0
    Pragma: no-cache
    Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/
    Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT
    Keep-Alive: timeout=5, max=75
    Connection: Keep-Alive
    Transfer-Encoding: chunked
    Content-Type: text/html; charset=utf-8

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" >
    <head>
    <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" />
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
    <meta name="robots" content="index, follow" />
    <meta name="keywords" content="" />
    <<<<<<TRIMMED>>>>>>>>>>>>>>
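
    One way to take the browser out of the equation is to request the page with a plain HTTP client and look only at the status line and headers. If this also reports 403 while still returning a full HTML body, the mismatch is server-side, which on a shared-host migration typically points at .htaccess rules, file permissions, or mod_security settings that differ on the new host. A rough diagnostic sketch (the URL is a placeholder, not the asker's real one):

        // A rough diagnostic sketch; the URL is a placeholder.
        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class StatusCheck
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                using (var response = await client.GetAsync("http://example.com/some-page.html"))
                {
                    // Browsers happily render a 403's HTML body, which can hide the error.
                    Console.WriteLine("Status: " + (int)response.StatusCode + " " + response.StatusCode);
                    foreach (var header in response.Headers)
                        Console.WriteLine(header.Key + ": " + string.Join(", ", header.Value));
                }
            }
        }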

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I've taken a good hard look at JavaScript. I've used it before, but mostly in the capacity of web design. Using jQuery to make your web page do cool stuff is different from really creating a JavaScript application using all of the language constructs. What I'm finding as I use it more is that I may have been wrong in my assumptions about it. Let me explain.

    I enjoyed doing cool stuff with jQuery, but my limited experience with JavaScript as a language, coupled with the bad things that I heard about it, led me to not have any real interest in it. However, JavaScript is ubiquitous on the web, and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the JavaScript: The Good Parts book by Douglas Crockford.

    Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following:

    - the global object
    - null vs. undefined
    - truthy and falsy
    - limited (nearly nonexistent) scoping
    - '==' and '===' (I just don't get the reason for coercion)

    However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I am legitimately enjoying JavaScript. This I was not expecting. I'm not going to go into a huge dissertation on what I like about it, but some things include:

    - object literal notation
    - dynamic typing
    - functional style (JavaScript: The Good Parts describes it as LISP in C clothing)
    - JSON (better than XML)

    There are parts of JavaScript that seem strange to OOP developers like myself. However, just because something is different or seems strange does not mean it is bad. Some differences are quite interesting and useful.

    I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can arise that lead to this, such as choosing the wrong technology for a problem's solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight into CoffeeScript or Dart. After exploring it, I find that I am beginning to enjoy it the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another 'arrow' to the 'quiver', so to speak. I do still intend to learn CoffeeScript to see what the hubbub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language.

    Has something similar ever happened to you? Tell me about it in the comments below.

    Read the article

< Previous Page | 810 811 812 813 814 815 816 817 818 819 820 821  | Next Page >