Search Results

Search found 25885 results on 1036 pages for 'claims based identity'.


  • Monitoring over Time with Nagios: How?

    - by David
    Nagios in its standard usage monitors with point-in-time checks: either something is true, or it is not. Other tools like SGI's PCP, HP's MeasureWare, and SEC provide monitoring over time - things like average disk access time over the last five minutes, or other similar metrics. Is there anything like this for Nagios? I'm already running NDOUtils, which seems like a natural source for such data. I'd like something that would monitor and fire off alarms based on a time-based check against that historical data.
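
    A minimal sketch of one way to approach this, assuming NDOUtils is logging into MySQL: a custom Nagios plugin that averages recent perfdata and alerts on the 5-minute mean. The table and column names (nagios_servicechecks, start_time, perfdata), the service_object_id, the thresholds, and the credentials below are assumptions to adapt to your schema version and plugin output.

        #!/usr/bin/env python
        # Rough sketch of a time-window check against the NDOUtils database.
        # Assumptions (verify against your NDOUtils schema/version): service
        # check rows land in nagios_servicechecks with start_time and perfdata
        # columns, and perfdata looks like "time=0.012s;...".
        import sys
        import pymysql  # any MySQL client library would do

        WARN, CRIT = 0.05, 0.10  # hypothetical thresholds, in seconds

        conn = pymysql.connect(host="localhost", user="ndoutils",
                               password="secret", database="nagios")
        with conn.cursor() as cur:
            cur.execute("""
                SELECT perfdata FROM nagios_servicechecks
                WHERE service_object_id = %s
                  AND start_time > NOW() - INTERVAL 5 MINUTE
            """, (42,))                      # 42 = hypothetical service object id
            samples = []
            for (perfdata,) in cur.fetchall():
                # crude parse of "time=0.012s" style perfdata; adjust to your plugin
                for token in (perfdata or "").split():
                    if token.startswith("time="):
                        samples.append(float(token[5:].rstrip("s;")))
        avg = sum(samples) / len(samples) if samples else 0.0
        if avg >= CRIT:
            print("CRITICAL - 5 min avg %.3fs" % avg); sys.exit(2)
        elif avg >= WARN:
            print("WARNING - 5 min avg %.3fs" % avg); sys.exit(1)
        print("OK - 5 min avg %.3fs" % avg); sys.exit(0)

    Registered as a normal Nagios command, this turns the point-in-time model into a rolling-window check without any changes to the existing data collection.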

    Read the article

  • EPM Architecture: Reporting and Analysis

    - by Marc Schumacher
    Reporting and Analysis is the basis for all Oracle EPM reporting components. Through the Java-based Reporting and Analysis web application deployed on WebLogic, it enables users to browse through reports for all kinds of Oracle EPM reporting components. Typical users access the web application by browser through Oracle HTTP Server (OHS). The Reporting and Analysis web application talks to the Reporting and Analysis Agent using the CORBA protocol on various ports. All communication to the repository databases (EPM System Registry and the Reporting and Analysis database) from the web and application layers is done using JDBC. As an additional data store, the Reporting and Analysis Agent uses the file system to store individual reports. While the reporting artifacts are stored on the file system, the folder structure and report-based security information is stored in the relational database. The file system can be either local or remote (e.g. network share, network file system). If an external user directory is used, the Reporting and Analysis services also communicate with this directory. The next post will cover WebAnalysis.

    Read the article

  • SOA Suite 11g Developers Cookbook Published

    - by Antony Reynolds
    SOA Suite 11g Developers Cookbook Available. Just realized that I failed to mention that Matt's and my most recent book, the SOA Suite 11g Developers Cookbook, was published over Christmas last year! In some ways this was an easier book to write than the Developers Guide; the hard bit was deciding which recipes to include. Once we had decided that, the writing of the book was pretty straightforward. The book focuses on areas that we felt we had neglected in the Developers Guide, and so there is more about Java integration and OSB, both of which we see a lot of questions about when working with customers. Amazon has a couple of reviews. Table of Contents:
    Chapter 1: Building an SOA Suite Cluster
    Chapter 2: Using the Metadata Service to Share XML Artifacts
    Chapter 3: Working with Transactions
    Chapter 4: Mapping Data
    Chapter 5: Composite Messaging Patterns
    Chapter 6: OSB Messaging Patterns
    Chapter 7: Integrating OSB with JSON
    Chapter 8: Compressed File Adapter Patterns
    Chapter 9: Integrating Java with SOA Suite
    Chapter 10: Securing Composites and Calling Secure Web Services
    Chapter 11: Configuring the Identity Service
    Chapter 12: Configuring OSB to Use Foreign JMS Queues
    Chapter 13: Monitoring and Management

    Read the article

  • Copy/move EBS volume from one Region to another

    - by Gnanam
    Background of our setup: We've hosted our web-based application in the Amazon EC2 US East (Virginia) Region. Our instance is based on a Linux distribution (CentOS) and the AMI is S3-backed. One EBS volume (400 GB) is attached to this instance. Question: We've planned to migrate our deployment to the US West (N. California) Region. From the AWS docs, I understood that for moving an AMI there is a command-line tool available - ec2-migrate-bundle. But for moving an EBS volume across Regions, there is currently no tool available. I'm looking for the easiest and/or fastest way of copying/moving an EBS volume from one Region to another. Also, are there any hidden risks involved during and/or after the migration? Experts' ideas/suggestions/recommendations on this are highly appreciated.
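
    For reference, a rough boto3 sketch of the snapshot-based approach the EC2 API supports today (native cross-region snapshot copy did not exist when this question was asked). The volume ID, regions, and availability zone below are placeholders.

        # Sketch of a modern cross-region EBS move using snapshot copy (boto3).
        import boto3

        src = boto3.client("ec2", region_name="us-east-1")
        dst = boto3.client("ec2", region_name="us-west-1")

        # 1. Snapshot the source volume (detach or freeze the filesystem first
        #    for a consistent copy).
        snap = src.create_snapshot(VolumeId="vol-0123456789abcdef0",
                                   Description="migration to us-west-1")
        src.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

        # 2. Copy the snapshot into the destination region.
        copy = dst.copy_snapshot(SourceRegion="us-east-1",
                                 SourceSnapshotId=snap["SnapshotId"],
                                 Description="copied from us-east-1")
        dst.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

        # 3. Create a new volume from the copied snapshot and attach it to the
        #    new instance in the destination region.
        vol = dst.create_volume(SnapshotId=copy["SnapshotId"],
                                AvailabilityZone="us-west-1a")
        print("New volume:", vol["VolumeId"])

    The main risk to plan for is consistency: take the snapshot with the volume unmounted or the application quiesced, and verify the data on the new volume before cutting over.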

    Read the article

  • Is there such a thing as "closure" with software work?

    - by Bobby Tables
    I burned out last year (after a decade of fulltime programming jobs) and am on a sabbatical now. With all the self-examination I've started to figure out some of the root causes of my burnout, and one of the major ones is basically this: there was never any real closure in any of the work I've ever done. It was always a case of getting into an open-ended support/maintenance grind and going stale. When I first entered the industry, I had this image of programming work being very project-based. And I expected projects to have a beginning, a middle, and an END. And then you move on and start on something totally new and fresh. Basically I never expected that a lot (most) of software work involves supporting and maintaining the same code base for open-ended long periods of time - years and even decades. That, combined with generally having itchy feet, makes me think that burnout is inevitable for me, after 2-3 years, in ANY fulltime software job. All this sounds like I probably should have been a contractor instead of a fulltimer. But when I discuss this with people, a lot of them say that even THEN you can't really escape having to go back and maintain/support the stuff you worked on, over and over (coming back on support contracts, for example). The nature of software work is simply like that. There is no project closure, unlike in many other engineering fields. So my question is: Is there ANY programming work out there which is based on short to mid term projects/stints and then moving on cleanly? And is there any particular industry domain or specialization where this kind of project work is typical?

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment. Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can also start their journey of cloud enabling their mainframe applications.
    Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost
    Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration
    Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability
    Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud enabling their mainframe applications
    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Stacks in C++

    - by MarkPearl
    So some more basics… One of the things you will be taught at any college after conquering arrays is different kinds of collections. The stack is one of the simplest of those, and very useful. A stack is a LIFO (last in, first out) data structure and has at least two basic method calls – push and pop. Push "pushes" an item onto the top of the stack; pop removes the top-most item off the stack. Because all elements on a stack are of the same type, one can implement a stack with either an array or a linked list. With the array-based approach, the first element in the stack would be the first element in the array, the second on the stack would be the second in the array, and so on. One limitation of an array implementation is that unless the array is dynamic, you have to have a preset maximum stack size (based on the bounds of the array). A linked list is another approach that gets past this limit by allowing you to dynamically grow or shrink the collection. Stacks have many applications… a typical computer science example would be a Postfix Expression Calculator, where the LIFO principle is maintained.
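
    A minimal sketch of the idea (in Python rather than C++, for consistency with the other sketches on this page): an array-backed stack with push and pop, applied to the postfix calculator example.

        # Array-backed stack with push/pop, used for a postfix expression calculator.
        class Stack:
            def __init__(self):
                self._items = []            # dynamic array, so no preset max size

            def push(self, item):
                self._items.append(item)    # put item on top

            def pop(self):
                return self._items.pop()    # remove and return the top item (LIFO)

        def eval_postfix(tokens):
            s = Stack()
            for tok in tokens:
                if tok in ("+", "-", "*", "/"):
                    b, a = s.pop(), s.pop()           # operands come off in reverse order
                    s.push({"+": a + b, "-": a - b,
                            "*": a * b, "/": a / b}[tok])
                else:
                    s.push(float(tok))
            return s.pop()

        print(eval_postfix("3 4 + 2 *".split()))      # (3 + 4) * 2 = 14.0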

    Read the article

  • Breaking The Promise of Web Service Interoperability

    The promise of web service interoperability is achievable if certain technical and non-technical issues are dealt with properly. As the world gets smaller and smaller thanks to our growing global economy, the need for security is increasing. Security is vital in the transfer of data from one server to another. As new security standards and protocols are created, the environments for web service hosts and clients must be in sync so that they can communicate using the same standards and protocols. For example, if a new protocol x can only be implemented on computers built after 2010, then all computers built prior to 2010 will not be able to connect to any web service host that only uses this protocol in its security policy. If the host and client of a web service cannot communicate using a set of common standards and protocols, then the web service is not available to those clients, thus breaking the promise of interoperability. Another limiting factor for web services is governmental policies and regulations. I experienced this first-hand last year when I had to work on a project that dealt with personally identifiable information (PII) regarding US and Canadian citizens. Currently the Canadian government regulates that any data pertaining to Canadian citizens must be stored in Canada only. The issue we had was the fact that we are a US-based company that sometimes works with Canadian PII as part of a service that we provide. Because we are a US-based company dealing with Canadian data, we had to place a file server inside Canada in order to continue working for our Canadian customers.

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.
    If you're not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn't normally get back without running another query (or with a trigger, I guess, but that's not pretty).
    That inserted table I referenced – that's part of the 'behind-the-scenes' work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn't mean that an update is a delete followed by an insert, it's just the way it's handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.
    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it's new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don't worry about the fact that I turned on IDENTITY_INSERT, that's just so that I could insert the values.)
    One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like "WHERE CURRENT OF …", and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient.
    But as cool as $action is, that's not the point of my post. If it were, I hope you'd all be disappointed, as you can't really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a 'src' field that wasn't used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you're needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn't need to insert that into the actual table, just into a table for audit.
    This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you're American) and using a regular INSERT statement. This is also doable if you're using MERGE to just do INSERTs. In case you hadn't realised, you can use MERGE in place of an INSERT statement. It's just like the UPSERT-style statement we've just seen, except that we want nothing to match. That's easy to do – we just use ON 1=2. This is obviously more convoluted than a straight INSERT, and it's slightly more effort for the database engine too. But if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table's columns... Yes, of course. That's what deleted and inserted give you.

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate content issues. In the meantime, we were wondering if anyone would have any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our on-season at the moment.

    Read the article

  • Commercial NAS RAID1 disks moved to Software Raid system?

    - by Rolnik
    I've got a couple of commercial NAS boxes and I'm wondering if they (a ReadyNAS Duo and a D-Link DNS-323), or any other NAS, are suitable for having their RAIDed disks moved to a software-based NAS. To be specific, I'm a big fan of the (largely) Debian-based Ubuntu. Can the disks from the aforementioned NAS units be migrated to Ubuntu (e.g. using the mdadm Linux command)? Secondly, is there any commercial NAS that can be migrated over? Incidentally, here is a link to somebody who succeeded in such a migration: http://www.linuxquestions.org/questions/slackware-14/moving-raid1-drives-into-computer-with-same-md-numbers-862312/ The specific scenario I'd like to prepare for is the eventual (sudden) death of one of the NAS motherboards.
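
    A rough sketch of what the Ubuntu side might look like, assuming the NAS wrote standard Linux md (RAID1) metadata, which many consumer units do but which should be verified before touching the disks. Device names and the mount point are placeholders; the commands are shelled out from Python only for consistency with the other sketches here.

        # Inspect and assemble a RAID1 pair pulled from a NAS, read-only first.
        import subprocess

        disks = ["/dev/sdb1", "/dev/sdc1"]   # placeholder: the partitions from the NAS

        # Read the md superblocks; this is read-only and reports the array UUID,
        # RAID level, and member roles, confirming whether standard md was used.
        for d in disks:
            subprocess.run(["mdadm", "--examine", d], check=True)

        # Assemble the mirror read-only under a new md device, then mount it read-only.
        subprocess.run(["mdadm", "--assemble", "--readonly", "/dev/md0"] + disks,
                       check=True)
        subprocess.run(["mount", "-o", "ro", "/dev/md0", "/mnt/nas"], check=True)

    Some vendors layer proprietary volume formats or unusual filesystem block sizes on top of md, so the --examine output and a read-only mount are the safe first checks before attempting anything writable.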

    Read the article

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances. In the PS5 (11.1.1.6.0) release of WCC they addressed this by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC they left the default storage rule unchanged and created a new storage rule called DispByContentId, and made that the default storage rule for all new content. I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the default storage rule, restarted WCC, and checked in a new content item. To my surprise the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough it was no longer 'default', but was now DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just like I wanted. I noticed when I was creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

    Read the article

  • Backup Exec backup-to-disk folder creation - Access denied

    - by ewwhite
    I'm having a difficult time creating a backup-to-disk folder in Symantec Backup Exec 12.5 and Backup Exec 2010. The backend storage is a Nexenta/ZFS-based NAS filer sharing the volume via CIFS. I've also seen the issue on other *nix-based NAS devices. I've attempted mapping the drive, providing the full path to the folder, etc. I can browse to the share just fine from within Windows, but Backup Exec fails to create the B2D folder with different variants of an "Unable to create new backup folder. Access denied" error. I've attempted creating service accounts in Backup Exec to handle the authentication, but nothing seems to work. What's the key to making this work?

    Read the article

  • ESSO Webcast Replay with Live Q&A

    - by B Shashikumar
    In our ESSO webcast on Oct 19th, we discussed how the Oracle Enterprise Single Sign-On Suite can not only eliminate your password reset and helpdesk headaches but also offer a healthy ROI that enterprises just cannot overlook. In our webcast we discussed how the Oracle ESSO Suite can deliver an ROI of 140% within the first year of deployment. Due to popular demand, we are now doing a re-broadcast of this webcast in the European time zone. The webcast will be followed by live Q&A. Matt Berzinski, Product Manager for the Oracle ESSO Suite, will be on air to answer all of your ESSO and Identity Management questions. Join us on this webcast to find out how Oracle ESSO Suite Plus can deliver quick wins for your organization. Register here for this webcast.

    Read the article

  • How can I "diff" two files with Nautilus?

    - by bioShark
    I have installed Meld and found it to be a great comparison tool. Unfortunately there is no integration with Nautilus 3.2, which means I can't right-click on files and select an option to open them in Meld for comparison. I have seen in the tool's comments that it needs the diff-ext package to be installed. This package has been removed from the Ubuntu universe repository, I am guessing because of GTK 3. Even when I manually download the diff-ext package from SourceForge, the configure check fails with the message: checking for DIFF_EXT... configure: error: Package requirements (libnautilus-extension >= 2.14.0 gconf-2.0 >= 2.14.0 gnome-vfs-module-2.0 >= 2.14) were not met: No package 'libnautilus-extension' found No package 'gconf-2.0' found No package 'gnome-vfs-module-2.0' found. From this output I gather that GTK 2 era packages are indeed required to install the diff extension for Nautilus. Now, my question is: Is there a way to integrate Meld into Nautilus? Or are there any other diff-based tools which integrate with the current, GTK 3-based Nautilus? I am using Ubuntu 11.10 if there was any doubt so far. Cheers and thanks in advance.
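
    One possible workaround, sketched below under the assumption that the python-nautilus (nautilus-python) bindings are available for your Nautilus version: a small extension that adds a "Compare with Meld" item when exactly two local files are selected. The extension directory path varies by release, so treat it as an assumption.

        # Rough sketch of a "Compare with Meld" context-menu entry using the
        # python-nautilus bindings instead of diff-ext. Drop this file into the
        # nautilus-python extensions directory (location varies by release, e.g.
        # ~/.local/share/nautilus-python/extensions/) and restart Nautilus.
        import subprocess
        from gi.repository import GObject, Nautilus

        class MeldCompareExtension(GObject.GObject, Nautilus.MenuProvider):
            def get_file_items(self, window, files):
                # Only offer the item when exactly two local files are selected.
                if len(files) != 2 or any(f.get_location().get_path() is None
                                          for f in files):
                    return []
                item = Nautilus.MenuItem(name="MeldCompare::compare",
                                         label="Compare with Meld",
                                         tip="Diff the two selected files in Meld")
                item.connect("activate", self._compare, files)
                return [item]

            def _compare(self, menu_item, files):
                paths = [f.get_location().get_path() for f in files]
                subprocess.Popen(["meld"] + paths)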

    Read the article

  • Access Control Service v2

    - by Your DisplayName here!
    A Resource-STS (others call it an RP-STS or federation gateway) is a necessity for non-trivial federated identity scenarios. ADFS v2 does an excellent job in fulfilling that role – but (as of now) you have to run ADFS on-premise. The Azure Access Control Service is a Resource-STS in the cloud (with all the usual scalability/availability promises). Unfortunately a lot of (the more interesting) features in ACS v1 had to be cut due to constrained time/resources. The good news is that ACS v2 is now in CTP and brings back a lot of the missing features (like WS-* support) and adds some really sweet new ones (out-of-the-box federation with Google, Facebook, LiveID – and OpenID in general). You can read about the details here. On a related note – ACS v2 works out of the box with StarterSTS – simply choose the ADFS v2 option and point the management portal to the StarterSTS WS-Federation metadata endpoint. Have fun ;)

    Read the article

  • Horse Drawn Fiber Optics Bring Broadband to Remote Areas

    - by Jason Fitzpatrick
    When you think of fiber optics and high speed internet, the last thing you likely think of is… horses. Yet horses have been put to use rolling out fiber optics to remote rural locations. In Vermont a Belgian draft horse named Fred, seen in the photo above being tended by his handler Claude, is a distinctly 19th century solution to a 21st century problem: how to run fiber optic cable through remote areas where trucks cannot easily pass. The man and animal are indispensable to cable and phone-service provider FairPoint Communications because they can easily access hard-to-reach job sites along country roads, which bulky utility trucks often cannot. "It just saves so much work – it would take probably 15 guys to do what Fred and Claude can do," said Paul Clancy, foreman of a line crew from FairPoint. "They can pull 5,000 feet of cable with no sweat." You can read more about the use of draft horses to draw lines and the roll out of broadband to rural Vermont at the link below. Vermont Uses Draft Horse to Lay Cables for Internet Access [Reuters]

    Read the article

  • What emulator / VM software can I use to create a Win32-portable Linux Guest?

    - by Jotham
    Hi, I want to create a portable VM setup so that I can boot a Linux install regardless of which Windows XP / Windows 7 host machine I am on. I was looking at QEMU, but it doesn't appear to have a relatively safe Win32 build. Other tools like VirtualBox require a complete install on the host OS for performance reasons. I'm not so concerned about performance; I just want to run a few curses-based applications. My ideal end goal would be a memory stick of some size with a VM/emulator I can boot on most Windows XP / Windows 7 machines to access my own curses-based applications (probably Arch Linux or Debian). Any help would be appreciated. Regards,

    Read the article

  • ISACA Information Security & Risk Management Conference, Nov 14-16

    - by Troy Kitch
    Please join Oracle, as a platinum sponsor, at this year's ISACA Information Security and Risk Management Conference in Las Vegas, Nov 14-16. This year's conference offers up to 32 CPE hours and is designed to meet the needs of information security, governance, compliance, and risk management professionals. The event builds on and includes the key elements of information security, governance, compliance and risk management practices, and offers a fresh perspective on current and future trends. As the provider of the world's most complete, open, and integrated business software and hardware systems, Oracle can uniquely safeguard your information throughout its entire lifecycle and is the recognized leader in Data Security, Identity Management, and Governance, Risk, and Compliance solutions. Also, attend the Oracle Megatrends Session, "Gone in 60 Seconds: Mitigating Database Security Risk", and stop by our booths, #100 and #102, to meet with Oracle Security Solution experts, see live product demos, and more. Learn more and register.

    Read the article

  • Initializing entities vs having a constructor parameter

    - by Vee
    I'm working on a turn-based, tile-based puzzle game, and to create new entities I use this code: Field.CreateEntity(10, 5, Factory.Player()); This creates a new Player at [10; 5]. I'm using a factory-like class to create entities via composition. This is what the CreateEntity method looks like:
        public void CreateEntity(int mX, int mY, Entity mEntity)
        {
            mEntity.Field = this;
            TileManager.AddEntity(mEntity, true);
            GetTile(mX, mY).AddEntity(mEntity);
            mEntity.Initialize();
            InvokeOnEntityCreated(mEntity);
        }
    Since many of the components (and also the logic) of the entities need to know which tile they're in, or which field they belong to, I need mEntity.Initialize(); so that the entity knows its own field and tile. The Initialize() method contains a call to an event handler, so that I can do stuff like this in the factory class:
        result.OnInitialize += () => result.AddTags(TDLibConstants.GroundWalkableTag, TDLibConstants.TrapdoorTag);
        result.OnInitialize += () => result.AddComponents(new RenderComponent(), new ElementComponent(), new DirectionComponent());
    This works so far, but it is not elegant and it is very open to bugs. I'm also using the same idea with components: they have a parameterless constructor, and when you call the AddComponent(mComponent); method on an entity, it is the entity's job to set the component's entity to itself. The alternative would be having Field, int, int parameters in the entity's constructor, to do stuff like: new Entity(Field, 10, 5); But I also don't like the fact that I have to create new entities like this. I would prefer creating entities via the Field object itself. How can I make entity/component creation more elegant and less prone to bugs?
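
    A language-neutral sketch (in Python here; the game code above is C#) of the constructor-injection alternative: the entity receives its field and position up front, so components never see a half-initialized owner, while creation still goes through the Field object. All names below are illustrative, not taken from the actual game.

        # Entities are fully wired at construction time; no two-phase Initialize().
        class Entity:
            def __init__(self, field, x, y, components=()):
                self.field = field
                self.tile = field.get_tile(x, y)     # known immediately
                self.components = []
                for component in components:
                    component.entity = self          # owner is already fully built
                    self.components.append(component)

        class Field:
            def __init__(self, width, height):
                self.tiles = [[[] for _ in range(width)] for _ in range(height)]

            def get_tile(self, x, y):
                return self.tiles[y][x]              # a tile is just a list of entities here

            def create_entity(self, x, y, *components):
                entity = Entity(self, x, y, components)
                self.get_tile(x, y).append(entity)
                return entity

        # usage: player = field.create_entity(10, 5, RenderComponent(), DirectionComponent())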

    Read the article

  • Options for PCI-DSS on AWS - file integrity monitoring and intrusion detection

    - by Brill Pappin
    I need to deploy some file integrity monitoring and intrusion detection software on AWS instances. I really wanted to use OSSEC, however it does not work well in an environment where servers can auto-deploy and shut down based on load, because it requires server-managed keys to be generated. Because of that, including the agent in the AMI will not allow monitoring as soon as an instance comes up. There are many options out there, and several are listed in other posts on this site, however none that I've seen so far deal with the unique problems inherent in AWS or cloud-based deployments in general. Can anyone point me at some products, preferably open source, that we might use to cover those portions of PCI DSS that require this software? Has anyone else achieved this on AWS?
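
    On the OSSEC key issue specifically, one workaround is OSSEC's own enrollment pair (ossec-authd on the manager, agent-auth on the agent), invoked at boot, for example from EC2 user-data. A rough sketch, assuming ossec-authd is running on the manager and the script runs as root; the manager hostname is a placeholder.

        # Boot-time helper that enrolls a freshly launched instance with an OSSEC
        # manager, so keys do not need to be baked into the AMI.
        import subprocess
        import urllib.request

        MANAGER = "ossec.example.internal"   # placeholder manager address

        # Use the instance ID as the agent name so agents stay distinguishable
        # even as Auto Scaling replaces them.
        instance_id = urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
        ).read().decode()

        subprocess.run(["/var/ossec/bin/agent-auth",
                        "-m", MANAGER,          # manager running ossec-authd
                        "-A", instance_id],     # register under the instance ID
                       check=True)
        subprocess.run(["/var/ossec/bin/ossec-control", "restart"], check=True)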

    Read the article

  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
    We are still in the early days of DAX and, even though I have been using it for two years, there is still a lot to learn about it. One of the topics that has historically interested me (and many of the readers here, probably) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many to Many Revolution 2.0, we discovered the SUMMARIZE-based pattern very late in the writing of the whitepaper. It is very important for performance optimization and it should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it's harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website just to provide a quick reference to the three patterns available. A further study is still required to compare performance between the SUMMARIZE and Cross Table Filtering patterns. Up to now, I haven't observed big differences between them, even if their execution plans might not be identical, which suggests that, depending on other conditions, you might favor one over the other.

    Read the article

  • ADF Essentials - Available for free and certified on GlassFish!

    - by delabassee
    If you are an Oracle customer, you are probably familiar with Oracle ADF (Application Development Framework). If you are not, ADF is, in a nutshell, a Java EE based framework that simplifies the development of enterprise applications. It is the development framework that was used, among other things, to build Oracle Fusion Applications. Oracle has just released ADF Essentials, a free-to-develop-and-deploy version of Oracle ADF's core technologies. And as good news never comes alone, GlassFish 3.1.2 is now a certified container for ADF Essentials! ADF Essentials leverages core ADF features and includes:
    Oracle ADF Faces - a set of more than 150 JSF 2.0 rich components that simplify the creation of rich web user interfaces (charting, data visualization, advanced tables, drag and drop, touch gesture support, extensive windowing capabilities, etc.)
    Oracle ADF Controller - an extension of the JSF controller that helps build reusable process flows and provides the ability to create dynamic regions within web pages.
    Oracle ADF Binding - an XML-based, metadata abstraction layer to connect user interfaces to business services.
    Oracle ADF Business Components - a declaratively-configured layer that simplifies developing business services against relational databases by providing reusable components that implement common design patterns.
    ADF is a highly declarative framework and has always had very good tooling support. Visual development for Oracle ADF Essentials is provided in Oracle JDeveloper 11.1.2.3. Eclipse support is planned for a later OEPE (Oracle Enterprise Pack for Eclipse) release. Here are some relevant links to quickly learn how to use ADF Essentials on GlassFish: Video: Oracle ADF Essentials Overview and Demo; Deploying Oracle ADF Essentials Applications to Glassfish; OTN: Oracle ADF Essentials Resources

    Read the article

  • Open Source Software Development Center at University of Belgrade

    - by Tori Wieldt
    A new Open Source Software Development Center is open at University of Belgrade, Serbia. It centers around using Java & NetBeans as open source projects to learn from and contribute to. Assistant Professor Zoran Sevarac says that not only does the center allow him to teach software development using open source projects, but also "we are improving our University courses based on the experience we get from working on open source code."  Some of the projects underway are a NetBeans UML plugin; Neuroph (a Java neural network framework, with a NetBeans Platform-based UI); a NetBeans DOAP Plugin; WorkieTalkie (NetBeans chat plugin); and 2D and 3D visualization plugins for NetBeans. University of Belgrade also has an official university course about open source development, where students learn to use development tools, work in teams, participate in open source projects and learn from real world software development projects. Students, teachers, and researchers at the University of Belgrade, and any member of the open source community are welcome to come to learn software development from successful open source projects. For more information, you can contact Zoran Sevarac (@neuroph on Twitter).

    Read the article

  • Reboot loop after Windows XP Service Pack 3 update

    - by espais
    Recently I upgraded to Service Pack 3, and now it seems that something has gone terribly wrong with the update. After logging in, my computer will blue-screen after about 5 minutes and then go into a reboot loop (I don't have the exact error message handy). I have a Sager NP2092 notebook, running an Intel chipset. I'd rather avoid having to reformat my XP installation, especially with my copy of Windows 7 right around the corner. After doing some Googling, I came across this article: Does your AMD-based computer boot after installing XP SP3? However, it deals with AMD chips, and specifically states not to use its fix on Intel-based systems. EDIT: After killing the reboot, this is the error that pops up: STOP: 0x000000F4 (0x00000003, 0x8A187118, 0x8A18728C, 0x80604438) EDIT 2: I have run Memtest86, and it reported 0 errors.

    Read the article
