Search Results

Search found 9254 results on 371 pages for 'approach'.


  • Getting Started with MySQL Cluster, Hands-on Lab, Next Saturday, MySQL Connect

    - by user13819847
    Hi! I'm speaking at MySQL Connect next Saturday, Sep. 29. My session is a hands-on lab (HOL) on MySQL Cluster. If you are interested in getting a bit familiar with MySQL Cluster, this is definitely a session for you. I will start by briefly introducing MySQL Cluster and its architecture. Then I will guide you through the steps needed to install a local MySQL Cluster, connect to it (using the command line), monitor its logs, and shut it down safely. We will then have a chance to see the most common commands used in MySQL Cluster administration (e.g. Cluster backup) as well as the most common operations (e.g. online data node add). Cluster's users and customers have the flexibility to choose whether they prefer a SQL or NoSQL approach to connect to MySQL Cluster, so, during the last part of the HOL, we will see how to connect to MySQL Cluster using the NoSQL NDB API. If there is enough time at the end, we will also compile and execute some simple Java programs that use Connector/J to connect to the SQL nodes of our Cluster. I hope this HOL will be of interest to you! Below are some details if you decide to attend:
    When: Saturday Sep. 29, 4 pm
    Where: Hilton San Francisco - Plaza Room A
    If you are interested in other MySQL Cluster sessions, you will find the info you need in this post. The full program of the MySQL Connect conference is here, and if you are not registered yet, remember that you can still save US $300 over the on-site fee – Register Now! See you at MySQL Connect!

    Read the article

  • Web Application: Combining View Layer Between PHP and Javascript-AJAX

    - by wlz
    I'm developing a web application using PHP with the CodeIgniter MVC framework, with substantial real-time client-side functionality. This is my first time building a large-scale client-side app, so I'm combining PHP with a large amount of Javascript in one project. As you already know, an MVC framework separates application modules into Model-View-Controller. My concern is about the View layer. I could display the data in the DOM using PHP's built-in script tags by loading some data in the Controller. Alternatively, I could use AJAX to pull the data -- treating the Controller as a service only -- and render it with Javascript. Here is some visualization. I could put the data directly from the Controller:
        <label>Username</label>
        <input type="text" id="username" value="<?=$userData['username'];?>"><br />
        <label>Date of birth</label>
        <input type="text" id="dob" value="<?=$userData['dob'];?>"><br />
        <label>Address</label>
        <input type="text" id="address" value="<?=$userData['address'];?>">
    Or pull it using AJAX:
        $.ajax({
            type: "POST",
            url: config.indexURL + "user",
            dataType: "json",
            success: function(data) {
                $('#username').val(data.username);
                $('#dob').val(data.dob);
                $('#address').val(data.address);
            }
        });
    So, which approach is better given that my application has complex client-side functionality? On the other hand, PHP/CodeIgniter has a default mechanism for putting the data directly from the Controller, so why use AJAX at all?

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in Session or PageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers, and secondly there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever, be storing references to components anywhere other than request scope (or maybe backing bean scope). So double check the scope of those binding attributes that map component references into a managed bean in your applications.
    Why is it Such a Common Mistake?
    At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale comes down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between the business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI. Of course that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but at the same time should not be tightly bound to a specific instance of any single UI component. So here's the problem: I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed); you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That of course is fine in context - e.g. a handler within the same request scoped bean that's got the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope and then you can simply re-check it in the same way, leading of course to this mistake.
    Turn it on its Head
    So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern.
    Defining the Pattern
    The UIManager pattern really is very simple. The premise is that every application should define a session scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI-component-related state. The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session scoped managed bean (#{uiManager}) in the faces-config.xml. The pattern is to then inject this instance of the class into any other managed bean (usually request scope) that needs it using a managed property. So typically you'll have something like this:
        <managed-bean>
          <managed-bean-name>uiManager</managed-bean-name>
          <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
          <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>
    Which is then injected into any backing bean that needs it:
        <managed-bean>
          <managed-bean-name>mainPageBB</managed-bean-name>
          <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
          <managed-bean-scope>request</managed-bean-scope>
          <managed-property>
            <property-name>uiManager</property-name>
            <property-class>oracle.demo.view.state.UIManager</property-class>
            <value>#{uiManager}</value>
          </managed-property>
        </managed-bean>
    In this case the backing bean in question needs a member variable to hold and reference the UIManager:
        private UIManager _uiManager;
    This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager uiManager), getUiManager()). This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties; for example, going back to the tabset example, you might have something like this:
        <af:panelTabbed>
          <af:showDetailItem text="First"
                             disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
            ...
          </af:showDetailItem>
          <af:showDetailItem text="Second"
                             disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
            ...
          </af:showDetailItem>
          ...
        </af:panelTabbed>
    Here the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (This is just an example; you could choose to store just an identifier for the selected tab or whatever - how you choose to store the state within the UIManager is up to you.)
    Get into the Habit
    So we can see that the UIManager pattern is no great strain to implement for an application and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or shotgun-scattered UI flags on the session, which are hard to maintain. If you take the approach of always accessing all UI state via the uiManager, or perhaps a pageScope-focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!

    Read the article

  • How to get search engines to properly index an ajax driven search page

    - by Redtopia
    I have an ajax-driven search page that will allow users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records. You can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) that will allow me to point spiders to a list of all records? FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution; outputting the records must be done in multiple requests. Each record can be viewed via a single page (e.g. "record.php?id=xyz"). I would like all the records indexed, but without anything being indexed from the crawl page itself that shows where the records exist, for example:
        <a href="/result.php?id=record1">Record 1</a>
        <a href="/result.php?id=record2">Record 2</a>
        <a href="/result.php?id=record3">Record 3</a>
        <a href="/seo.php?page=2">next</a>
    Assuming this is the correct approach, I have these questions: How would the search engines find the crawl page? Is it possible to prevent the search engines from indexing the words "Record 1", etc. and "next"? Can I output only the links? Or maybe something like:

    Read the article

  • I want to turn VB.Net Option Strict On

    - by asjohnson
    I recently found out about strong typing in VB.Net (naturally it was on here, thanks!) and have decided I should take another step toward being a better programmer. I went from VBA macros to VB.Net because I needed a program that I could automate, and I never read anything about strong typing, so I kind of fell into the VB.Net default trap. Now I am looking to turn Option Strict on and sort out this whole type thing. I was hoping someone could direct me towards some resources to make this transition as painless as possible. I have read around a bit and CType seems to come up a lot, but past that I am at a bit of a loss. What are the benefits of switching? Is there more to it than just using CType to cast things? I feel like there is a good article that I have failed to come across, and any direction would be great. Would a good approach be to rewrite a program that was written with Option Strict off and note the differences?

    Read the article

  • Implementing SOA & Security with Oracle Fusion Middleware in your solution – partner webcast September 20th 2012

    - by JuergenKress
    Security has always been one of the main pain points for the IT industry, and new security challenges have been introduced with the proliferation of the service-oriented approach to building modern software. Oracle Fusion Middleware provides a wide variety of features that ease the building of service-oriented solutions, but how can these services be secured? Should we implement the security features in each and every service, or is there a better way? During the webinar we are going to show how to implement non-intrusive declarative security for your SOA components by introducing the Oracle product portfolio in this area, such as Oracle Web Services Manager and Oracle Enterprise Gateway.
    Agenda:
    SOA & Web Services basics: quick refresher
    Building your SOA with Oracle Fusion Middleware: product review
    Common security risks in the Web Services world
    SOA & Web Services security standards
    Implementing Web Services Security with the Oracle products
    Web Services Security with Oracle – the big picture
    Declarative end point security with Oracle Web Services Manager
    Perimeter Security with Oracle Enterprise Gateway
    Utilizing the other Oracle IDM products for the advanced scenarios
    Q&A session
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now!
    Send your questions and migration/upgrade requests to [email protected]. Visit our ISV Migration Center blog regularly or follow us @oracleimc to learn more about Oracle technologies, upcoming partner webcasts and events. All content is made available through our YouTube - SlideShare - Oracle Mix channels.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum
    Technorati Tags: ISV migration center,SOA,IDM,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • 2012 Oracle Fusion Middleware Innovation Awards for Oracle Exalogic

    - by Sanjeev Sharma
    Companies from around the world were honored for their innovative solutions using Oracle Fusion Middleware. This year's 27 award winners, representing 11 countries and a wide span of industries, wowed the judges with a range of projects across eight product categories. Four awards were given out to customers who demonstrated innovative application of Oracle Exalogic for their mission-critical applications. Below is an overview of the four businesses that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year.
    Company: Netshoes
    About: Leading online retailer of sporting goods in Latin America.
    Challenges: Rapid business growth resulted in frequent outages and poor response time of the online store-front. A conventional ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX. Poor performance and unavailability of the online store-front resulted in revenue loss from purchase abandonment.
    Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic.
    Business Impact: Reduced abandonment rates resulting in a two-digit increase in online conversion rates, translating directly into revenue uplift.
    Company: Claro
    About: Leading communications services provider in Latin America.
    Challenges: Support business growth over the next 3-5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk.
    Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and Java applications onto Oracle Exalogic and Oracle Exadata.
    Business Impact: Improved partner SLAs 7x while improving throughput 5x and response time 35x for Java applications.
    Company: UL
    About: Leading safety testing and certification organization in the world.
    Challenges: Transition from being a non-profit to a profit-oriented enterprise and grow from $1B to $5B in annual revenues in the next 5 years. Undertake a massive business transformation by aligning change strategy with execution.
    Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle Exadata.
    Business Impact: Reduced financial and operating risk in re-architecting IT services to support new business capabilities supporting 87,000 manufacturers.
    Company: Ingersoll Rand
    About: Leading manufacturer of industrial, climate, residential and security solutions.
    Challenges: Business continuity risks due to complexity in enforcing consistent operational and financial controls. Reactive business decisions reduced the ability to offer differentiation and compete.
    Solution: Consolidated Oracle E-Business Suite on Oracle Exalogic and Oracle Exadata.
    Business Impact: Service differentiation with faster order provisioning and a shorter lead-to-cash cycle, translating into higher customer satisfaction and quicker cash conversion.
    Check out the winners of the Oracle Fusion Middleware Innovation awards in other categories here.

    Read the article

  • How to Search for (and Find) Solaris Docs

    - by rickramsey
    Just the other day, I went to the recently-released Oracle Solaris 11 library to search for information about the print service changes. I knew there had been changes in Oracle Solaris 11, but could not remember the new approach to printing. So, being the optimist that I (never) am, I went to the Oracle Solaris 11 Information Library on docs.oracle.com and typed "print service" into the search box. Imagine my surprise when the response back was: We did not find any search results for: print service site:download.oracle.com url:/docs/cd/E23824_01. OMG! WTF? Are you kidding me? After throwing a few stuffed animals at my computer screen, I tried again. Is search broken? Well, sort of (and I'm trying to get it fixed). In the meantime, however, there is a reasonably simple user workaround. Possibly unnoticed by most people, there is a Within drop-down menu on the Oracle search results page. If you simply open the Within menu, select Documentation, and click the little magnifying glass again, you (should) get the expected results. Is it perfect? No, but at least it's an improvement over being completely broken. - Janice Critchlow, Information Architect, Systems

    Read the article

  • Why should I use Zend_Application?

    - by Billy ONeal
    I've been working on a Zend Framework application which currently does a bunch of things through Zend_Application and a few resource plugins written for it. However, looking at this codebase now, it seems to me that using Zend_Application just makes things more complicated, and a plain, more "traditional" bootstrap file would do a better job of being transparent. This is even more the case because the individual components of Zend -- Zend_Controller, Zend_Navigation, etc. -- don't reference Zend_Application at all. Therefore they say things like "Well, just call setRoute and be on your way," and the user is left scratching their head as to how to implement that in terms of the application.ini configuration file. This is not to say that one can't figure out what's going on by spelunking through the ZF source code. My problem with that approach is that it's too easy to depend on something that's an implementation detail rather than a contract, and all it seems to do is add an extra layer of indirection that one must wade through to understand an application. I look at pre-ZF 1.8 example code, before Zend_Application existed, and everywhere I see plain bootstrap files that set up the MVC framework and get on their way. The code is clear and easy to understand, even if it is a bit repetitive. I like the DRY concept that Zend_Application gets you, but particularly since I'm assuming the first people looking at the app's code aren't really familiar with Zend at all, I'm considering blowing away any dependence I have on Zend_Application and returning to a traditional bootstrap file. Now, my concern here is that I don't have much experience doing this, and I don't want to get rid of Zend_Application if it does something particularly important of which I am unaware, or something of that nature. Is there a really good reason I should keep it around?

    Read the article

  • Six in Six - SQL Server 2012 Webinars

    - by JustinL
    We're running six webinars over the next six months covering our experiences with SQL Server 2012 and customer deployments. I'm presenting the first, on upgrading to SQL Server 2012, next month; subsequent sessions will be delivered by colleagues:
    NOVEMBER: SQL Server 2012 Upgrade Approach and Considerations – Friday 23rd November 12:00 – 13:00. Presents approaches for upgrade testing, managing risk and rollback. The session will include details on minimizing downtime and upgrading from SQL Server 2000, 2005, and 2008, including... More details and register.
    DECEMBER: Delivering Mission Critical BI with SQL Server 2012 – Friday 14th December 12:00 – 13:00. Information is the lifeblood of many organisations, and the availability of timely, accurate information is critical to strategic decision making. This session covers the features and capabilities... More details and register.
    JANUARY: Architecting Highly Available Solutions with SQL Server 2012 – Friday 18th January 12:00 – 13:00. Overview and comparison of the high availability features available within SQL Server 2012. The session considers business requirements for availability and recoverability and presents a number of alternative solution designs to meet... More details and register.
    FEBRUARY: Private Cloud Deployments with SQL Server 2012 – Friday 15th February 12:00 – 13:00. Cloud-based technology provides cost-effective scale and flexibility. This session provides an overview of the benefits organisations can realise through private cloud... More details and register.
    MARCH: Visualising Data Patterns with SQL Server 2012 – Friday 22nd March 12:00 – 13:00. This webinar demonstrates the ease of delivering business insight by exploring information and identifying trends through data visualisation. SQL Server 2012 provides new capability with enhanced performance and... More details and register.
    APRIL: Architecting Highly Available Solutions with SQL Server 2012 – Friday 26th April 12:00 – 13:00. Customers are increasingly interested in leveraging the benefits of cloud-based solutions to provide scalable and flexible infrastructure to host their applications. This session looks at common design patterns and workloads... More details and register.
    Justin Langford - Coeo Ltd SQL Server Consultants | SQL Server Remote DBA

    Read the article

  • Migrating a Windows Server to Ubuntu Server to provide Samba, AFP and Roaming Profiles

    - by Dan
    I'm replacing our old Windows XP Pro office server with an HP MicroServer running Ubuntu Server 12.04 LTS. I'm not a Linux expert but I can find my way around a terminal prompt; I'm a Mac user by choice. The office uses a mix of Windows XP Pro machines and OS X Lion laptops. I included Samba during installation, and I'm planning on using Netatalk for the AFP and Bonjour sharing. I'd quite like to have Samba make the server appear in 'My Network Places' on the Windows machines the way Bonjour makes it appear in Finder on the Macs, if this is possible. I want to get to a point where a user logging into Windows gets connected to the Ubuntu server (do they need an Ubuntu user account?), which gets them their shares and their Windows user profile (though a standard profile across users would do). The upshot is to get centralised control of user accounts (e.g. if a person leaves, killing their account on the server stops their Windows logon and their ability to access Samba shares) and to ensure files aren't stored on the individual machines for backup/security purposes. I want to make this as simple as possible, so I don't want loads of stuff I don't need. I just can't figure out:
    What I need at the server end:
    - will Samba be enough (already installed as part of the initial installation), or will I need to cock around with LDAP (and how does this interact with Samba)?
    - for someone of moderate Linux competence like me, is there a package that offers easy admin of user accounts, e.g. a GUI like phpLDAPadmin (if LDAP is necessary)?
    How to configure the XP machines:
    - do I need to set the server up as a domain controller for the XP machines to join (I've no idea, really)?
    - roaming profiles look like they offer the feature of putting the user's files on the server rather than the machine itself, along with a profile that follows the user from machine to machine.
    Syncing Mac users' home folders with the server:
    This is less of a concern because I can set up Time Machine if it comes to it, but I'd appreciate any recommendations of what approach I should take to having the Mac home folders synced to the server.

    Read the article

  • Is rotating the lead developer a good or bad idea?

    - by NickC
    I work on a team that has been flat organizationally since its creation several months ago. My manager is non-technical, and this means that our whole team is responsible for decision-making. My manager is beginning to realize that there are several benefits to having a lead developer, both for his sake (a single point of contact and a single responsible party for tasks) and ours (dispute resolution, organized technical guidance, etc.). Because the team has been flat, one concern is that picking one lead developer may discourage the others. A non-developer suggested to my manager that rotating the lead developer is a possible way to avoid this issue: one developer would be lead one month, another the next, and so on. Is this a good idea? Why or why not? Keep in mind that this means every developer would take a turn as lead; all developers are good, but not necessarily equally suited to leadership. And if it is not a good idea, how do I recommend that we avoid this approach without seeming like it's merely for selfish reasons?

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place it in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:
        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an
                // in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }
    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    The problem I'm having is that when using Nick Gravelyn's TiledLib pipeline for reading and drawing .tmx maps in XNA, the transparency color I set in Tiled's editor works in the editor, but when I draw the map the color that's supposed to become transparent still draws. The closest things to a solution that I've found are:
    1) Change my sprite batch's BlendState to NonPremultiplied (found this in a buried Tweet).
    2) Get the pixels that are supposed to be transparent at some point, then set them all to transparent.
    Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since it looks like the custom pipeline processor reads in the transparent color and sets it as the color key for transparency; it's just that something is going wrong somewhere. At least that's what it looks like the code is doing.
    TileSetContent.cs:
        if (imageNode.Attributes["trans"] != null)
        {
            string color = imageNode.Attributes["trans"].Value;
            string r = color.Substring(0, 2);
            string g = color.Substring(2, 2);
            string b = color.Substring(4, 2);
            this.ColorKey = new Color((byte)Convert.ToInt32(r, 16),
                                      (byte)Convert.ToInt32(g, 16),
                                      (byte)Convert.ToInt32(b, 16));
        }
        ...
    TiledHelpers.cs:
        // build the asset as an external reference
        OpaqueDataDictionary data = new OpaqueDataDictionary();
        data.Add("GenerateMipMaps", false);
        data.Add("ResizetoPowerOfTwo", false);
        data.Add("TextureFormat", TextureProcessorOutputFormat.Color);
        data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue);
        data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta);
        tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>(
            new ExternalReference<Texture2DContent>(path), null, data, null, asset);
        ...
    I can share more code as well if it helps to understand my problem. Thank you.
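    For what it's worth, here is a rough sketch (XNA 4.0 APIs; the class and parameter names are made up for illustration) of the two workarounds mentioned above: drawing with a non-premultiplied blend state, and keying the color out manually once after the texture is loaded instead of relying on the pipeline's ColorKeyEnabled/ColorKeyColor settings.
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class ColorKeyWorkarounds
        {
            // Workaround 1: draw the map layers inside a non-premultiplied batch.
            public static void DrawNonPremultiplied(SpriteBatch spriteBatch, System.Action drawLayers)
            {
                spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied);
                drawLayers();
                spriteBatch.End();
            }

            // Workaround 2: replace every pixel matching the tileset's trans color
            // with transparent, once, right after the texture is loaded.
            public static void ApplyColorKey(Texture2D texture, Color colorKey)
            {
                var pixels = new Color[texture.Width * texture.Height];
                texture.GetData(pixels);               // pull the pixel data off the GPU once
                for (int i = 0; i < pixels.Length; i++)
                {
                    if (pixels[i] == colorKey)         // exact match against the key color
                        pixels[i] = Color.Transparent;
                }
                texture.SetData(pixels);               // write the modified pixels back
            }
        }
    Because this runs once at load time rather than per frame, it avoids any per-pixel cost during drawing; it sidesteps, rather than explains, whatever is going wrong in the pipeline's color-key handling.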

    Read the article

  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
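    Since the goal is to isolate the overlapping area rather than just detect contact, the usual closed-form approach is worth sketching. The snippet below is purely illustrative (C# rather than the SDL2/C++ of the actual project, with placeholder Rect and Circle types): the overlap of two axis-aligned rectangles is itself a rectangle, and a rectangle-circle overlap can be detected by clamping the circle's centre to the rectangle, with the circle's bounding box giving a conservative region to refine further.
        using System;

        struct Rect { public double X, Y, W, H; }
        struct Circle { public double CX, CY, R; }

        static class Overlap
        {
            // Rect-rect: the intersection of two axis-aligned rectangles is itself a rectangle.
            public static Rect? Intersect(Rect a, Rect b)
            {
                double x1 = Math.Max(a.X, b.X);
                double y1 = Math.Max(a.Y, b.Y);
                double x2 = Math.Min(a.X + a.W, b.X + b.W);
                double y2 = Math.Min(a.Y + a.H, b.Y + b.H);
                if (x2 <= x1 || y2 <= y1) return null;          // no overlap
                return new Rect { X = x1, Y = y1, W = x2 - x1, H = y2 - y1 };
            }

            // Rect-circle: clamp the circle's centre to the rectangle; if the clamped point
            // lies within the radius, the shapes overlap. The true overlap region is not a
            // rectangle, so return the rectangle clipped to the circle's bounding box as a
            // conservative area to refine (e.g. against the circle's secant line).
            public static Rect? Intersect(Rect r, Circle c)
            {
                double nearestX = Math.Max(r.X, Math.Min(c.CX, r.X + r.W));
                double nearestY = Math.Max(r.Y, Math.Min(c.CY, r.Y + r.H));
                double dx = c.CX - nearestX, dy = c.CY - nearestY;
                if (dx * dx + dy * dy > c.R * c.R) return null; // no overlap
                var circleBox = new Rect { X = c.CX - c.R, Y = c.CY - c.R, W = 2 * c.R, H = 2 * c.R };
                return Intersect(r, circleBox);
            }
        }
    Either way, the work is a handful of comparisons per shape pair each frame, which is far cheaper than any per-pixel test at 1920x1080.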

    Read the article

  • Partner Webcast: Implementing on SOA - A Hands-On Technology Demonstration

    - by Thanos
    Service Oriented Architecture enables organizations to operate more efficiently and react faster to opportunities. How? By helping you create a flexible application architecture that supports greater business agility. You decide how quickly you want to move. You can start by implementing an application integration platform. Then, you can evolve your environment gradually by introducing business process management, business rules, governance and event processing. This unified but flexible approach also allows you to maximize the long-term cost reduction benefits of SOA and cloud-based applications. In this session, you will dive into SOA Suite and see some of its advanced features in use. The topics covered range from adapters, automatic and custom business process correlation, service routing, rule-based and manual decisions, through to error handling, compensations and extending SOA Suite with your own Java code.
    Agenda:
    Service Oriented Architecture
    The Auctions Scenario
    Live Demo of the Oracle SOA Suite Features
    Connecting to non-service-enabled technologies with adapters (Database and File adapter)
    Orchestrating services with BPEL processes
    Correlating processes with correlation sets
    Mediating services
    Service Component Architecture
    Event Handling
    User Notification
    Human Workflow
    Business Rules
    Fault Handling patterns
    Developing custom components with Spring and using them in SOA Suite composites
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now!
    For all your questions and support requests to adopt and implement the latest Oracle technologies please contact us at [email protected]

    Read the article

  • links for 2011-03-07

    - by Bob Rhubart
    DON CIO News: DON CIO Discusses Future IT Initiatives - Audio links and a little background information on recent town hall meetings hosted by Department of the Navy Chief Information Officer Terry Halvorsen. (tags: usgov usnavy cio enterprisearchitecture)
    Strassmann's Blog: Why So Many Data Centers? - "The idea of datacenter consolidation involves much more than applying simple technical solutions." - Paul Strassmann (tags: enterprisearchitecture datacenter consolidation)
    Satyajith Nair: Coherence - The next big thing for the cloud!! - "Disk-based computing is fraught with performance and management issues and doing away with disks, though not practical now, may be true in the future. This also calls for a re-think of our current application architecture, which is so focussed on disk-based persistence." - Satyajith Nair (tags: oracle infosys coherence grid cloud)
    TechCast: GlassFish Server and WebLogic - Interoperability and Integration - VP Development Anil Gaur and Product Manager Adam Leftik explain Oracle's strategy for creating increasing integration between GlassFish Server and Oracle WebLogic Server, with an overview of new features and functionality for developers in GlassFish 3.1. (tags: ping.fm)
    Oracle Fusion and Oracle Fusion Applications: Overview | OracleApps Epicenter - So WHAT IS ORACLE FUSION? People often get confused with this term. To start with, it will be a good idea to know the difference between Fusion (tags: ping.fm)
    Marc Kelderman: OSB: Automatic update of Service Acounts - Solution architect Marc Kelderman shares a work-around for using different Service Accounts for multiple environments. (tags: oracle otn sca bpel soa bpm servicebus)
    Perfect Integration 1 - Architectural Approach - "First post in a series of 5-10, I will release all my views and opinions on the Art of Integration. I challenge you to disagree, and bash me with arguments and reasoning." -- Martijn Linssen (tags: enterprisearchitecture integration)
    Edwin Biemond: Set the Initial Focus on a component in a Page or a Fragment - Edwin says: "This is not so hard to do, but sometimes it can be tricky to find the id of a component when you use regions (Bounded Task Flows)." (tags: oracle otn oracleace java soa)
    Oracle Linux and Oracle Virtualization at Collaborate 2011 - Information on more than 200 Oracle-hosted sessions with the latest insights and guidance from Oracle executives, product managers, and developers. (tags: oracle virtualization linux ioug oaug)

    Read the article

  • Oracle Buys Compendium - Adds Leading Content Marketing Platform to Oracle Eloqua Marketing Cloud

    - by Richard Lefebvre
    News Facts
    Oracle today announced that it has acquired Compendium, a cloud-based content marketing provider that helps companies plan, produce and deliver engaging content across multiple channels throughout their customers’ lifecycle. Compendium’s data-driven approach aligns relevant content with customer data and profiles to help companies more effectively attract prospects, engage buyers, accelerate conversion of prospects to opportunities, increase adoption, and drive revenue growth. Compendium’s innovative solution complements Oracle’s industry-leading Eloqua Marketing Cloud, which is a part of Oracle’s comprehensive Customer Experience solution. The combination of Oracle Eloqua Marketing Cloud with Compendium is expected to enable modern marketers to align persona-based content to customers’ digital body language to increase “top-of-funnel” customer engagement, improve the quality of sales leads, realize the highest return on their marketing investment, and increase customer loyalty. More information on this announcement can be found at http://www.oracle.com/compendium.
    Supporting Quotes
    “As customers increasingly access information through online and mobile channels, the buying process is shifting from sales-driven to marketing-driven. Now, more than ever, marketers are challenged to deliver relevant and engaging content across multiple channels and throughout the customer lifecycle,” said Thomas Kurian, Executive Vice President, Oracle Development. “By adding Compendium’s content marketing platform to Oracle Eloqua Marketing Cloud, customers will be able to capture more prospects, improve the customer experience and drive top line revenue.”
    “Oracle Eloqua Marketing Cloud is uniquely positioned to capture a prospect’s digital body language to help companies know each buyer’s demographics, behaviors and influencers,” said Chris Baggott, Compendium CEO. “By combining this buyer profile with Compendium’s data-driven content marketing platform, marketers will be able to deliver the right content, to the right individual, across the right channel, at the right time. We are very excited to now be a part of the industry’s most complete marketing cloud solution, giving us a global stage to deliver innovative content marketing solutions.”
    Supporting Resources
    About Oracle and Compendium
    General Presentation
    Customer and Partner Letter
    FAQ

    Read the article

  • How do I prevent skype from starting another instance when it's already running?

    - by con-f-use
    Just a little unnecessary annoyance to fix here: suppose you have Skype for Ubuntu running and you accidentally click on the launcher again. The way it is now, Skype starts a second instance, which promptly tells you it can't log in because there is another instance already running. To make things worse, on the next regular start of Skype you will have to re-enter your saved password, due to the "Sign in failure". I thought this would be fixed soon, but neither Canonical nor Microsoft seem to care enough; the annoyance has persisted through the last three updates at least. So, as a workaround, I will post what I've done to prevent this behaviour. Maybe it's useful for some of you. Maybe it will raise awareness and prompt a fix. I'm happy for any better solution, by the way, and that's the real purpose of my question; usually I don't answer my own questions. So, does anyone know of a better way to prevent dual instancing of Skype?

    Read the article

  • Highly scalable and dynamic "rule-based" applications?

    - by Prof Plum
    For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule being stored in a DB. This allows easy changes to be made without diving into nasty details. Now, since C# cannot Eval("foo(bar);"), this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable thing for anyone else to pick up once it becomes legacy code. Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently it becomes a real bear, but this cannot be such an uncommon problem that someone has not thought of a better way to do it. Any suggestions? Is the current method defensible? What are the alternatives?
    Edit: Just to clarify, this is a large enterprise app, so no matter which solution wins, there will be plenty of people constantly maintaining its rules and data (around 10). Also, the data changes frequently enough that some sort of centralized server system is basically a must.
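    One common alternative in .NET, since the rules are already stored as rows, is to compile each rule into a delegate at runtime with expression trees instead of round-tripping strings through JavaScript. The sketch below is only illustrative (the Rule shape and field names are invented to mirror a simple "field / operator / value" rule row); it builds a typed predicate once, which can then be cached and applied to thousands of objects cheaply.
        using System;
        using System.Linq.Expressions;

        // Hypothetical rule row, mirroring a "field / operator / value" table design.
        class Rule
        {
            public string Field;      // e.g. "Amount"
            public string Operator;   // e.g. ">", "<", "=="
            public string Value;      // e.g. "1000"
        }

        static class RuleCompiler
        {
            // Compile a stored rule into a strongly typed, cacheable predicate.
            public static Func<T, bool> Compile<T>(Rule rule)
            {
                var item = Expression.Parameter(typeof(T), "item");
                var left = Expression.PropertyOrField(item, rule.Field);
                var converted = Convert.ChangeType(rule.Value, left.Type);
                var right = Expression.Constant(converted, left.Type);

                Expression body;
                switch (rule.Operator)
                {
                    case ">":  body = Expression.GreaterThan(left, right); break;
                    case "<":  body = Expression.LessThan(left, right);    break;
                    case "==": body = Expression.Equal(left, right);       break;
                    default:   throw new NotSupportedException(rule.Operator);
                }

                return Expression.Lambda<Func<T, bool>>(body, item).Compile();
            }
        }
    Usage would be something like var isLargeOrder = RuleCompiler.Compile<Order>(rule), recompiling only when a rule row changes. This keeps the rules in the database but moves evaluation into compiled .NET code, so the JavaScript round trip disappears.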

    Read the article

  • WCF Versioning, Naming and Endpoint URL

    - by Vinothkumar VJ
    I have a WCF service and a main class library (Lib1). Say I have a Save Profile service: the WCF service gets data (with a predefined data contract) from the client, passes it to the main class library, generates a response and sends it back to the client.
    WCF Method: SaveProfile(ProfileDTO profile)
    Current version 1.0 - ProfileDTO has the following:
    UserName
    Password
    FirstName
    DOB (as a string, yyyy-mm-dd)
    CreatedDate (as a string, yyyy-mm-dd)
    Next version (2.0) - ProfileDTO has the following:
    UserName
    Password
    FirstName
    DOB (as a Unix timestamp)
    CreatedDate (as a Unix timestamp)
    Version 3.0 - ProfileDTO has the following (with a change in UserName and Password length validation):
    UserName
    Password
    FirstName
    DOB (as a Unix timestamp)
    CreatedDate (as a Unix timestamp)
    In short, we have DataContract and workflow changes between each version.
    1. How do I name the methods in the WCF service and the main class library?
    2. Should I follow any specific pattern for ease of development and maintenance?
    3. Do I have to have different endpoints for different versions?
    In the above example I have a method named "SaveProfile". Do I have to name the methods like "SaveProfile1.0", "SaveProfile2.0", etc.? If so, then when there is no change between version "3.0" and "4.0" maintenance becomes difficult. I'm looking for an approach that will ease maintenance.
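    A common way to handle this in WCF is to keep the operation name stable and put the version in the contract namespace and the endpoint address instead of the method name. The following is just a sketch (the namespaces, interface names and property set are illustrative, not a prescribed design): each version gets its own contract in its own namespace, and the service exposes one endpoint per version, e.g. .../ProfileService/v1 and .../ProfileService/v2, so unchanged versions simply keep pointing at the same contract.
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // v2 data contract: DOB/CreatedDate switch to Unix timestamps, but the
        // type and operation names stay the same; only the namespace changes.
        [DataContract(Namespace = "http://schemas.example.com/profile/v2")]
        public class ProfileDTO
        {
            [DataMember] public string UserName { get; set; }
            [DataMember] public string Password { get; set; }
            [DataMember] public string FirstName { get; set; }
            [DataMember] public long Dob { get; set; }          // Unix timestamp in v2
            [DataMember] public long CreatedDate { get; set; }  // Unix timestamp in v2
        }

        [ServiceContract(Namespace = "http://schemas.example.com/profile/v2")]
        public interface IProfileServiceV2
        {
            [OperationContract]
            void SaveProfile(ProfileDTO profile);
        }
    The v1 contract stays untouched in its own namespace, and a single service implementation class can implement both interfaces, mapping each onto the shared logic in Lib1. A version with no contract change (the 3.0 vs 4.0 case) just reuses the previous contract and endpoint rather than getting a new method name.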

    Read the article

  • How can I better implement A star algorithm with a very large set of nodes?

    - by Stephen
    I'm making a game with nodejs in which many enemies must converge on the player as the player moves around a relatively open space (right now it is an open field with few obstacles, but eventually there may be some small buildings in the field with 1 or 2 rooms). It's a multiplayer game using websockets, so the server needs to keep track of enemies and players. I found this javascript A* library which I've modified to be used on the server as a nodejs module. The library utilizes a Binary Heap to track the nodes for the algorithm, so it should be pretty fast (and indeed, with a small grid, say 100x100 it is lightning fast). The problem is that my game is not really tile-based. As the player moves around the map, he is moving on a more or less 1-to-1 per-pixel coordinate system (the player can move in 8 directions, 1 or 2 pixels at a time). In preliminary tests, on an 800x600 field, the path-finding can take anywhere from 400 to 1000 ms. Multiply that by 10 enemies and the game starts to get pretty choppy. I have already set it up so that each enemy will only do a path-finding call once per second or even as slow as once every 2 seconds (they have to keep updating their path because the players can move freely). But even with this long interval, there are noticeable lag spikes or chops every couple of seconds as the enemies update their paths. I'm willing to approach the problem of path-finding differently, if there's another option. I'm assuming that the real problem is the enormous grid (800x600). It also occurs to me that maybe the large arrays are to blame, as I've read that V8 has trouble with large arrays.
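    The usual fix for this kind of setup is to stop path-finding over the raw pixel space and instead run A* on a much coarser grid (say 20-32 px cells), then let each enemy steer smoothly between the returned waypoints. An 800x600 field becomes roughly a 40x30 grid, which is tiny for A*. The idea is language-agnostic; the sketch below is in C# purely for illustration, with an arbitrary cell size and made-up names.
        using System;

        static class CoarseGrid
        {
            public const int CellSize = 20;   // arbitrary; pick roughly the size of your smallest obstacle

            // Snap a world (pixel) position to a coarse grid cell for path-finding.
            public static (int Col, int Row) ToCell(double worldX, double worldY)
            {
                return ((int)(worldX / CellSize), (int)(worldY / CellSize));
            }

            // Convert a grid waypoint back to a world position; aim for the cell centre
            // so paths don't hug cell corners.
            public static (double X, double Y) ToWorldCenter(int col, int row)
            {
                return (col * CellSize + CellSize / 2.0, row * CellSize + CellSize / 2.0);
            }
        }
    Each enemy would convert its own position and the player's position with ToCell, run A* over the 40x30 grid, then walk toward ToWorldCenter of the next waypoint every tick; staggering the once-per-second replans across enemies (rather than recomputing them all in the same tick) also helps smooth out the lag spikes.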

    Read the article

  • jQuery + Perl CGI to vb.net transition

    - by user1257458
    I've been developing Oracle database-heavy "web applications" forever by building my HTML by hand, adding some jQuery to handle AJAX requests (HTML inserts for forms processing, etc.), and always doing my server-side work in Perl CGI. I really love how easy it is to read some form input, execute some select statements through DBI (SO EASY), and generate HTML to be inserted by the jQuery request. That's a web application to me. However, my new boss builds everything in Visual Studio 2010 and VB.Net, usually WebForms. So, for work reasons, I now need to start developing in VB.Net so the code can be collectively maintained, and I'm just seeking advice on where to start learning and how to approach this. I know I could at least learn ASP.Net and VB.Net, create a WebForm, have it read parameters, return HTML, etc., which would allow me to use my previously written HTML and client-side scripts (jQuery). However, since we're moving heavily to mobile applications, I really need to reduce the client-side processing load. Is there any advantage to my boss' method? Thanks a ton.

    Read the article

  • Guest Blog: Secure your applications based on your business model, not your application architecture, by Yaldah Hakim

    - by Darin Pendergraft
    Today’s businesses are looking for new ways to engage their customers and embrace mobile applications, while staying in compliance, improving security and driving down costs. For many, the solution to that problem is to host their applications with a Cloud Services provider, but concerns that a hosted application will be less secure continue to cause doubt. Oracle is recognized by Gartner as a leader in the User Provisioning and Identity and Access Governance magic quadrants, and has helped thousands of companies worldwide to secure their enterprise applications and identities. Now those same world-class IDM capabilities are available as a managed service, both for enterprise applications and for Oracle hosted applications.
    --- Listen to our IDM in the cloud podcast to hear Yvonne Wilson, Director of the IDM Practice in Cloud Service, explain how Oracle Managed Services provides IDM as a service ---
    Selecting Oracle Managed Cloud Services to deploy and manage Oracle Identity Management Services is a smart business decision for a variety of reasons. Oracle hosted Identity Management infrastructure is deployed securely, is resilient to failures, and is supported by Oracle experts. In addition, Oracle Managed Cloud Services monitors customer solutions from several perspectives to ensure they continue to work smoothly over time. Customers gain the benefit of Oracle Identity Management expertise to achieve predictable and effective results for their organization.
    Customers can select Oracle to host and manage any number of Oracle IDM products as a service, as well as other Oracle security products, providing a flexible, cost-effective alternative to on-site hardware and software costs.
    Security is a major concern for all organizations, making it increasingly important to partner with a company like Oracle to ensure consistency and a layered approach to security and compliance when selecting a cloud provider. Oracle Managed Cloud Services makes this possible for our customers by taking away the headache and complexity of managing Identity Management infrastructure and other security solutions.
    For more information: http://www.oracle.com/us/solutions/cloud/managed-cloud-services/overview/index.html
    Twitter: https://twitter.com/OracleCloudZone
    Facebook: http://www.facebook.com/OracleCloudComputing

    Read the article

  • What should be the architecture of an urban game system?

    - by pmichna
    I'm going to develop an urban game using a telco API for phone geolocation and sending/receiving messages. A player picks one of the scenarios and moves around the city; when he hits a given location, he gets a message and possibly has to answer it. I'm wondering what approach would be best in my case. I came up with this general idea:
    - A web application as the user interface (user registration, player rankings, scenario editing), written in Ruby on Rails.
    - A game server (hosting games and game logic, such as checking player locations, sending and receiving messages), written in Ruby.
    - A database (users, scores, scenarios, etc.), probably MySQL or some other open source DB.
    I want to learn Ruby and RoR, which is why I chose that language and framework. Do you think it's a good choice for a game server? Another question: is this project division sensible? I mean, I have little experience with Ruby and Rails, which is why I'm asking. Maybe it's better to have the web application merged with the game server, and somehow have the server hosting the RoR application do tasks like pinging the mobile phones and sending the messages? How would that be done? Maybe this is worth mentioning: the API is RESTful; most results are JSON, a few are XML.

    Read the article
