Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.


  • Best Of 2010

    - by Mike Dietrich
    Hi there, in Australia, Japan, Singapore and many other countries it's already 2011 - but in Germany and the US there's still some time until midnight :-) To round up the year you'll find a few off-topic pictures from 2010. You might click on the pictures to get a better resolution. Enjoy ...
    Moscow - Red Square
    Tokyo Train - Cell Phone Mania
    Great Wall of China near Beijing
    Hong Kong by Night
    Yearing Station Winery, Yarra - Victoria, Australia
    Dublin, Ireland - during the ash cloud - no comment -
    Liberty
    It's sometimes foggy in SF
    Singapore Opera
    Stockholm - Gamla Stan
    Unbelievable white beach at Camps Bay, Clifton, Cape Town
    Words fail me ...
    Mike

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size: As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants: While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is an SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX: AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives. BSD: BSD has gone through 4 versions of its archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux: The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX: HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX: IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP): Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives: As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
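To make the archive layout described above concrete, here is a small, unofficial sketch (my own illustration, not part of the Solaris tools) that walks the members of an SVR4 archive and prints each member's name and size. It assumes the conventional 60-byte ar_hdr layout with ASCII fields, the "!<arch>\n" magic string, and the newline padding to even offsets discussed in the post:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Minimal, illustrative reader for SVR4 "ar" archives: prints member names and sizes.
public class ArList {
    private static final String MAGIC = "!<arch>\n"; // 8-byte archive magic number
    private static final int HDR_SIZE = 60;          // fixed-size ASCII member header

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile in = new RandomAccessFile(args[0], "r")) {
            byte[] magic = new byte[8];
            in.readFully(magic);
            if (!MAGIC.equals(new String(magic, StandardCharsets.US_ASCII))) {
                throw new IOException("not an archive: bad magic number");
            }
            byte[] hdr = new byte[HDR_SIZE];
            while (in.getFilePointer() + HDR_SIZE <= in.length()) {
                in.readFully(hdr);
                String h = new String(hdr, StandardCharsets.US_ASCII);
                String name = h.substring(0, 16).trim();                // "/" marks special members
                long size = Long.parseLong(h.substring(48, 58).trim()); // size is ASCII decimal
                System.out.printf("%-20s %d bytes%n", name, size);
                long next = in.getFilePointer() + size;
                if ((next & 1) == 1) next++;                            // members padded to even length
                in.seek(next);
            }
        }
    }
}

Running it against any archive built by ar shows the hidden special members ("/" for the symbol table, "//" or the string table member) before the regular object members.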

    Read the article

  • Preventing Users From Accessing wp-admin

    - by Gary Pendergast
    If you have a WordPress site that you allow people to sign up for, you often don’t want them to be able to access wp-admin. It’s not that there are any security issues, you just want to ensure that your users are accessing your site in a predictable manner. To block non-admin users from getting into wp-admin, you just need to add the following code to your functions.php, or somewhere similar:

    add_action( 'init', 'blockusers_init' );

    function blockusers_init() {
        // Send logged-in, non-administrator users back to the homepage
        // whenever they try to load a wp-admin page.
        if ( is_admin() && ! current_user_can( 'administrator' ) ) {
            wp_redirect( home_url() );
            exit;
        }
    }

    Ta-da! Now only administrator users can access wp-admin; everyone else will be redirected to the homepage.

    Read the article

  • Redehost Transforms Cloud & Hosting Services with MySQL Enterprise Edition

    - by Mat Keep
    RedeHost is one of Brazil's largest cloud computing and web hosting providers, with more than 60,000 customers and 52,000 web sites running on its infrastructure. As the company grew, RedeHost needed to automate operations, such as system monitoring, making the operations team more proactive in solving problems. RedeHost also sought to improve server uptime, robustness, and availability, especially during backup windows, when performance would often dip. To address the needs of the business, RedeHost migrated from the community edition of MySQL to MySQL Enterprise Edition, which has delivered a host of benefits:
    - Pro-active database management and monitoring using MySQL Enterprise Monitor, enabling RedeHost to fulfil customer SLAs. Using the Query Analyzer, RedeHost was able to more rapidly identify slow queries, improving customer support
    - Quadrupled backup speed with MySQL Enterprise Backup, leading to faster data recovery and improved system availability
    - Reduced DBA overhead by 50% due to the improved support capabilities offered by MySQL Enterprise Edition
    - Enabled infrastructure consolidation, avoiding unnecessary energy costs and premature hardware acquisition
    You can learn more from the full RedeHost Case Study. Also, take a look at the recently updated MySQL in the Cloud whitepaper for the latest developments that are making it even simpler and more efficient to develop and deploy new services with MySQL in the cloud.

    Read the article

  • Understanding the JSF Lifecycle and ADF Optimized Lifecycle

    - by Steven Davelaar
    While coaching ADF development teams over the years, I have noticed that many developers lack a basic understanding of Java Server Faces, in particular the JSF lifecycle and how ADF optimizes this lifecycle in specific situations. As a result, ADF developers who are tasked to build a seemingly simple ADF page can get extremely frustrated by the (in their eyes) unexpected or illogical behavior of ADF. They start to play with the immediate property and the partialTriggers property in a trial-and-error manner. Often, they play with these properties until their specific issue is solved, unaware of other more severe bugs that might be introduced by the values they choose for these properties. So, I decided to submit a presentation for the UKOUG entitled "What you need to know about JSF to be successful with ADF". The abstract was accepted, and I started putting together the presentation and demo application. I built up a demo application step-by-step, trying to cover the top JSF-related issues and challenges I encountered over the years in a simple "Hello World" demo. This turned out to be both a very time-consuming and very interesting journey. I had never thought I would learn so much myself in preparing this presentation. I never thought I would end up with potentially controversial conclusions like "Never set immediate=true on an editable component". I did not realize the sometimes immense implications of the ADF optimized lifecycle beforehand. I never thought that "Hello World" demos could get so complex. But as I went on I was confident this was valuable material, even for experienced ADF developers with a good understanding of JSF. When I finished, I realized the original title and abstract were misleading, as was the target audience. Yes, it was covering the JSF lifecycle, but not the other aspects of JSF you need to know for ADF development. Yes, it was covering some JSF basics as mentioned in the abstract, but all in all it had become a pretty advanced presentation. At the same time, the issues discussed are very common, and novice ADF developers might easily run into them while building their first pages. I ran out of time, so I decided to just present what I had, apologizing at the beginning for the misleading title and showing a second slide with a better title: "18 invaluable lessons about ADF-JSF interaction". I think the presentation was well received overall, although people who don't like it or don't understand it usually don't come and tell you afterwards.... I am still struggling with the title; for this blog post I used yet another title. Anyway, you can download the presentation-that-still-lacks-a-good-title here. The finished JDev 11.1.1.6 demo app can be downloaded here. The 18 lessons mentioned in the presentation are summarized here. As mentioned on the last slide, print out the lessons and learn them by heart; I am pretty sure it will save you lots of time and frustration!

    Read the article

  • Java EE/GlassFish Adoption Story by Kerry Wilson/Vanderbilt University

    - by reza_rahman
    Kerry Wilson is a Software Engineer at the Vanderbilt University Medical Center. He served in a consultant role to design a lightweight systems integration solution for the next generation Foundations Recovery Network using GlassFish, Java EE 6, JPA, @Scheduled EJBs, CDI, JAX-RS and JSF. He shared his story at the JavaOne 2013 Sunday GlassFish community event - check out the video below: Kerry outlined some of the details of the implementation and emphasized the fact that Java EE can be a great solution for applications that are considered small/lightweight. He mentioned the productivity gains through the modern Java EE programming model centered on annotations, POJOs and zero-configuration - comparing it with competing frameworks that aim towards similar productivity for lightweight applications. Kerry also stressed the quality of the excellent NetBeans integration with GlassFish and the need for community self-support in free, non-commercial open source projects like GlassFish. You can check out the details of his story on the GlassFish stories blog. Do you have a Java EE/GlassFish adoption story to share? Let us know and we will highlight it for the community.

    Read the article

  • Java EE/GlassFish Adoption Story by Kerry Wilson/Vanderbilt University

    - by reza_rahman
    Kerry Wilson is a Software Engineer at the Vanderbilt University Medical Center. He served in a consultant role to design a lightweight systems integration solution for the next generation Foundations Recovery Network using GlassFish, Java EE 6, JPA, @Scheduled EJBs, CDI, JAX-RS and JSF. He lives in Nashville, TN where he helps organize the Nashville Java User Group. Kerry shared his Java EE/GlassFish adoption story at the JavaOne 2013 Sunday GlassFish community event - check out the video below: Here is the slide deck for his talk: GlassFish Story by Kerry Wilson/Vanderbilt University Medical Center from glassfish Kerry outlined some of the details of the implementation and emphasized the fact that Java EE can be a great solution for applications that are considered small/lightweight. He mentioned the productivity gains through the modern Java EE programming model centered on annotations, POJOs and zero-configuration - comparing it with competing frameworks that aim towards similar productivity for lightweight applications. Kerry also stressed the quality of the excellent NetBeans integration with GlassFish and the need for community self-support in free, non-commercial open source projects like GlassFish.

    Read the article

  • Patrick Curran Session-Keynote at DOAG 2012

    - by Heather VanCura
    Patrick Curran, Chair of the Java Community Process (JCP) and Director of the JCP Program Management Office, will be speaking this week at the DOAG 2012 event in Nuremberg, Germany.
    Keynote: "Java: Restructuring the Java Community Process"
    November 22nd | 09:00-09:45 am
    The Java Community Process (JCP) plays a critical role in the evolution of Java. This keynote will explain how the JCP is organized and how interested members of the Java community - commercial organizations, non-profits, Java user-groups, and individual developers - work together to advance the Java language and platforms. It will then discuss recent and upcoming changes to the JCP's structure and operating processes, and will explain how these changes ('JCP.next') will make the organization more efficient and will ensure that its work is carried out in a more open and more transparent manner.

    Read the article

  • JFXtras Project: More cool features for your JavaFX app

    - by terrencebarr
    JFXtras is an open source project that provides a bunch of interesting components and pieces to make your JavaFX application even more productive, engaging, and, yes, sexy. And saves you coding time along the way. Check out the new JFXtras Ensemble demo, which showcases in one fell swoop all the features and bits you can take advantage of. Also, bookmark Jim Weaver’s excellent blog to keep up with all things JavaFX and rich client. Cheers, – Terrence Filed under: Mobile & Embedded Tagged: JavaFX, JFXtras, Open Source

    Read the article

  • Running & Managing Concurrent Queries in SQL Developer

    - by thatjeffsmith
    We’ve all been there – you’ve managed to write a query that takes longer than a few seconds to execute. Tuning aside, sometimes it takes longer than you want for a query to run. So what’s a SQL Developer user to do? I say, keep going! While you’re waiting for your query to finish, there’s no reason why you can’t continue on with your work. If you need to execute something else in a worksheet, there’s no reason to launch a 2nd or 3rd copy of SQL Developer. Just open an un-shared worksheet. Now while you’ve got 1 or more queries running, you can easily get yourself into a situation where you’re not sure what’s running where. Or maybe you want to cancel a query or just check how long something’s been running. Just open the Task Progress panel. If a query or task in SQL Developer takes more than 3-5 seconds, it will appear in the Task Progress panel. You can then watch the throbbers go back and forth while you sip your coffee/soda/Red Bull. Run a query, spawn a new worksheet, run another query, watch them in the Task Progress panel. Kudos and thanks to @leight0nn for helping me get the title of this post right. If you’re looking for help in managing and monitoring sessions in general, check out this post.

    Read the article

  • Dart and NetBeans IDE 7.4

    - by Geertjan
    Here's the start of Dart in NetBeans IDE. Basic Dart editing support is done and on saving a Dart file the related JavaScript files are automatically generated. In the context of an HTML5 application in NetBeans IDE, that gives you deep integration with the embedded browser and, even better, Chrome, as well as Chrome Developer Tools. Below, notice that the "Sunflower Spectacular" H1 element is selected (click the image to enlarge it to get a better view), which is therefore highlighted in the live DOM view in the bottom left, as well as in the CSS Styles window in the top right, from where the CSS styles can be edited and from where the related files can be opened in the IDE. Identical features are available for Chrome, as well as on Android and iOS. And if you like that, watch this YouTube movie showing how Chrome Developer Tools integration can fit directly into the workflow below. Anyone want to help get this plugin further? What's needed: Much deeper Dart editing support, i.e., right now only very basic syntax coloring is provided, i.e., an ANTLR lexer is integrated into the NetBeans syntax coloring infrastructure. Parsing, error checking, code completion, and some small code templates are needed. A new panel is needed in the Project Properties dialog on NetBeans HTML5 projects for enabling Dart (i.e., similar to enabling Cordova), at which point the "dart.js" file and other Dart artifacts should be added to the project, so that a Dart project is immediately generated and the application should be immediately deployable. Whenever changes are made to a Dart file, Dart should run in the background to create the Dart artifacts in some hidden way, so that the user doesn't see all the Dart artifacts as is currently the case. Some way of recognizing Dart projects (there's a YAML file as an identifier) and creating NetBeans HTML5 projects from that, i.e., from Dart projects outside the IDE. I think that's all... The official Dart Editor is based on Eclipse and requires a massive download of heaps of Eclipse bundles. Compare that to the NetBeans equivalent, which is a very small "HTML5 and PHP" bundle (60 MB), available here, together with the above small Dart plugin. Plus, when you look at how NetBeans IDE integrates with a bunch of Google-oriented projects, i.e., Chrome, Chrome Developer Tools, and Android (via Cordova), that's a pretty interesting toolbox for anyone using Dart. And bear in mind that ANTLRWorks, Microchip, and heaps of other organizations have built and are building their tools on top of NetBeans!

    Read the article

  • Digital Agenda in the EU means open standards after all

    - by trond-arne.undheim
    European Commission Vice President Neelie Kroes' speech, "Openness at the heart of the EU Digital Agenda", at the Open Forum Europe 2010 Summit in Brussels refocuses the EU Digital Agenda on open standards. I say the speech scores a 90/100, smooth, smart, a little vicious at the fringes, maybe? Anyway, it shows the strategy might age and implement well. This is Dutch pragmatism at its best. The EU Digital Agenda (I give it an 85/100 score), while laudable, stops short of using the term. The next step for the European Commission is defining the term open standards. If they do that, and do it right, Vice President Kroes will go down in history as having made a significant contribution towards global progress in e-government by possibly eradicating lock-in forever. Moreover, she will put Europe's SMEs in a better position to succeed in a global IT market filled with barriers to entry from players not fully understanding, using, or unpacking standards. Kroes' interesting suggestion that she will now explore a "legal proposal" on interoperability that will have an impact on all IT companies operating in the European market is more up for debate. An interoperability directive? One run by DG COMP or one run by DG INFSO, telecom style? Would something like that work? Would the industry like it? Would it help European governments? Possibly, if done right. The good thing was, Kroes pointed out that she will look for input from the industry. Kroes' track record is one of not being scared of taking on the Titans. She also wants to enact real, positive, lasting change. "I will not go anywhere", she said. All of that is good. And she does understand the importance of open standards. Let's now start discussing the details. Implementing the Digital Agenda is not simple. It requires collaboration across the various Directorates in the European Commission. Mounting a new Interoperability directive has also never been attempted before. Getting it right is important. Even possibly finding out it cannot be done right and choosing a more lightweight approach that is equally effective would be bold. Go Kroes!

    Read the article

  • User Experience Guidance for Developers: Anti-Patterns

    - by ultan o'broin
    Picked this up from a recent Dublin Google Technology User Group meeting: Android App Mistakes: Avoiding the Anti-Patterns by Mark Murphy, CommonsWare. Interesting approach of "anti-patterns" aimed at mobile developers (in this case Android), looking at the best way to use code and what's in the SDK while combining it with UX guidance (the premise being the developer does the lot). Interestingly, the idea came through that developers need to stop trying to make one O/S behave like another--on UX grounds. Also, it is pretty clear that a web-based paradigm is being promoted for Android (translators tell me that translating an Android app reminded them of translating web pages too). I haven't seen the "anti" approach before; developer cookbooks and design patterns, sure. Check out the slideshare presentation.

    Read the article

  • Adding SSE support in Java EE 8

    - by delabassee
    SSE (Server-Sent Events) is a standard mechanism used to push, over HTTP, server notifications to clients. SSE is often compared to WebSocket as they are both supported in HTML 5 and they both provide the server with a way to push information to its clients, but they are different too! See here for some of the pros and cons of using one or the other. For REST applications, SSE can be quite complementary as it offers an effective solution for a one-way publish-subscribe model, i.e. a REST client can 'subscribe' and get SSE based notifications from a REST endpoint. As a matter of fact, Jersey (the JAX-RS Reference Implementation) has already supported SSE for quite some time (see the Jersey documentation for more details). There might also be some cases where one might want to use SSE directly from the Servlet API. Sending SSE notifications using the Servlet API is relatively straightforward. To give you an idea, check here for 2 SSE examples based on the Servlet 3.1 API. We are thinking about adding SSE support in Java EE 8, but the question is where, as there are several options in the platform where SSE could potentially be supported:
    - the Servlet API
    - the WebSocket API
    - JAX-RS
    - or even a dedicated SSE API, and thus a dedicated JSR too!
    Santiago Pericas-Geertsen (JAX-RS Co-Spec Lead) conducted an initial investigation around that question. You can find the arguments for the different options and Santiago's findings here. So at this stage JAX-RS seems to be a good choice to support SSE in Java EE. This will obviously be discussed in the respective JCP Expert Groups, but what is your opinion on this question?
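    As a rough illustration of the Servlet-only option mentioned above, here is a minimal, hypothetical sketch (my own example, not one of the two linked samples) of an asynchronous Servlet 3.x endpoint that pushes a few events; the servlet name, URL pattern and payloads are invented for illustration:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.AsyncContext;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical SSE endpoint: streams a small counter to the client.
    @WebServlet(urlPatterns = "/ticker", asyncSupported = true)
    public class TickerServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/event-stream");   // the SSE media type
            resp.setCharacterEncoding("UTF-8");
            final AsyncContext async = req.startAsync();
            async.setTimeout(0);                         // keep the connection open
            async.start(new Runnable() {
                public void run() {
                    try {
                        PrintWriter out = async.getResponse().getWriter();
                        for (int i = 0; i < 5; i++) {
                            // each event is a "data:" line followed by a blank line
                            out.write("data: tick " + i + "\n\n");
                            out.flush();
                            Thread.sleep(1000);
                        }
                    } catch (IOException | InterruptedException e) {
                        // client went away or we were interrupted; just stop
                    } finally {
                        async.complete();
                    }
                }
            });
        }
    }

    On the client side, a browser EventSource pointed at /ticker would then receive the five "tick" events as they are flushed.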

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about. It is an effort with the MySQL Optimizer team that simplifies some common queries’ Query Plans and dramatically shortens the query time. I will describe the issue, our solution and the end result with some performance numbers to demonstrate our efforts in continuing to enhance the Full-Text Search capability.
    The Issue: As we had discussed in previous blogs, InnoDB implements the Full-Text index as reversed auxiliary tables. The query, once parsed, will be reinterpreted into several queries into related auxiliary tables and then results are merged and consolidated to come up with the final result. So at the end of the query, we’ll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL’s optimizer and query processing had been initially designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:
    Case 1: Query result ordered by rank with only the top N results:
    mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;
    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve rankings for almost every row in the table, sort them and then come up with the top ranked result. This whole retrieve and sort is quite unnecessary given that InnoDB already has the answer. In a real life case, the user could have millions of rows, so in the old scheme, it would retrieve millions of rows' rankings and sort them, even if our FTS already found there are only 3 matched rows. Apparently, the million-ranking retrieve is done in vain. In the above case, it should just ask for the 3 matched rows' rankings; all other rows' rankings are 0. If it wants the top ranking, then it can just get the first record from our already sorted result.
    Case 2: SELECT COUNT(*) on matching records:
    mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);
    In this case, the InnoDB search can find the matching rows quickly and will have all matching rows. However, before our change, in the old scheme, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and later it comes up with a count. In fact, there is no need for MySQL to fetch all rows; InnoDB already had all the matching records. The only thing needed is to call an InnoDB API to retrieve the count. The difference can be huge. The following query output shows how big the difference can be:
    mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
    +----------+
    | count(*) |
    +----------+
    |   666877 |
    +----------+
    1 row in set (16 min 17.37 sec)
    So the query took almost 16 minutes. Let’s see how long it takes InnoDB to come up with the result.
    In InnoDB, you can obtain extra diagnostic printout by turning on “innodb_ft_enable_diag_print”; this will print out extra query info in the error log:
    keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
    UNION: Searching: 'people'
    Processing time: 2 secs: row(s) 666877: error: 10
    ft_init()
    ft_init_ext()
    keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
    UNION: Searching: 'people'
    Processing time: 3 secs: row(s) 666877: error: 10
    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. So a large amount of time has been wasted on the un-needed row fetching.
    The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of the savings from doing this include:
    1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL Query Processing layer does not need to sort to get the top matching results.
    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records. All those not in the result list should all have a ranking of 0, and need not be retrieved. And InnoDB has a count of total matching records on hand. No need to recount.
    3) Covered index scan. InnoDB results always contain the matching records' Document ID and their ranking. So if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself.
    4) Narrow the search result early, to reduce user table access. If the user wants to get the top N matching records, we do not need to fetch all matching records from the user table. We should be able to first select the top N matching Doc IDs, and then only fetch the corresponding records with these Doc IDs.
    Performance Results and comparison with MyISAM: The result of this change is very obvious. I include six test results performed by Alexander Rubin just to demonstrate how fast the InnoDB query now becomes when compared to MyISAM Full-Text Search. These tests are based on the English Wikipedia data of 5.4 million rows and an approximately 16G table. The test was performed on a machine with 1 CPU Dual Core, an SSD drive, 8G of RAM, and the InnoDB buffer pool set to 8 GB.
    Table 1: SELECT with LIMIT CLAUSE
    mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;
                          InnoDB      MyISAM             Times Faster
    Time for the query    1.63 sec    3 min 26.31 sec    127
    You can see that for this particular query (retrieve top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.
    Table 2: SELECT COUNT QUERY
    mysql> select count(*) from si where match(si_title, si_text) against('family');
    +----------+
    | count(*) |
    +----------+
    |   293955 |
    +----------+
                          InnoDB      MyISAM              Times Faster
    Time for the query    1.35 sec    28 min 59.59 sec    1289
    In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour - that is about 1289 times faster!
    Table 3: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms
    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                          InnoDB      MyISAM      Times Faster
    family                                        0.5 sec     5.05 sec    10.1
    family film                                   0.95 sec    25.39 sec   26.7
    Pizza restaurant orange county california     0.93 sec    32.03 sec   34.4
    President united states of america            2.5 sec     36.98 sec   14.8
    Table 4: SELECT title and text with ORDER BY and LIMIT CLAUSE for selected terms
    mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                          InnoDB      MyISAM      Times Faster
    family                                        0.61 sec    41.65 sec   68.3
    family film                                   1.15 sec    47.17 sec   41.0
    Pizza restaurant orange county california     1.03 sec    48.2 sec    46.8
    President united states of america            2.49 sec    44.61 sec   17.9
    Table 5: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms
    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                          InnoDB      MyISAM      Times Faster
    family                                        0.5 sec     5.05 sec    10.1
    family film                                   0.95 sec    25.39 sec   26.7
    Pizza restaurant orange county california     0.93 sec    32.03 sec   34.4
    President united states of america            2.5 sec     36.98 sec   14.8
    Table 6: SELECT COUNT(*)
    mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;
    Term                                          InnoDB      MyISAM      Times Faster
    family                                        0.47 sec    82 sec      174.5
    family film                                   0.83 sec    131 sec     157.8
    Pizza restaurant orange county california     0.74 sec    106 sec     143.2
    President united states of america            1.96 sec    220 sec     112.2
    Again, tables 3 to 6 all show that InnoDB consistently outperforms MyISAM in these queries by a large margin. It becomes obvious that InnoDB has a great advantage over MyISAM in handling large data search.
    Summary: These results demonstrate the great performance we could achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases in which InnoDB’s result info has not been fully taken advantage of, which means we still have great room to improve. And we will continuously explore the area, and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012
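    If you want to reproduce this kind of diagnostic output on your own data, a minimal sketch (assuming the articles table with title and body columns used in the examples above; the index name is made up) might look like this:

    -- Build an InnoDB full-text index and enable the extra diagnostics
    -- discussed above (a server-wide setting, so it needs sufficient privileges).
    CREATE FULLTEXT INDEX ft_title_body ON articles (title, body);
    SET GLOBAL innodb_ft_enable_diag_print = ON;

    -- A ranked top-N query and a matching-row count; with the optimizer change
    -- described in this post, both should be answered from the InnoDB
    -- full-text result directly rather than by fetching every row.
    SELECT title, MATCH(title, body) AGAINST ('database') AS score
      FROM articles
     WHERE MATCH(title, body) AGAINST ('database')
     ORDER BY score DESC
     LIMIT 10;

    SELECT COUNT(*)
      FROM articles
     WHERE MATCH(title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);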

    Read the article

  • Devoxx UK JCP & Adopt-a-JSR activities

    - by Heather VanCura
    Devoxx UK starts this week! The JCP Program is organizing many activities throughout the conference, including some tables in the Hackergarten area on 12-13 June. Topics include Java EE, Data Grids, Java SE 8 (Lambdas and Date & Time API), Money & Currency API and OpenJDK. We will have two book signings by Richard Warburton and Peter Pilgrim during the Hackergarten - free signed copies of their books at these times - first come, first served (limited quantities available). Thursday night is the party and the Birds of a Feather (BoF) sessions - come with your favorite questions and topics related to the JCP, Adopt-a-JSR and Adopt OpenJDK Programs! See below for the schedule of activities; I will fill in details for each session tomorrow.
    Thursday 12 June
    10:20-12:50 Java EE -- Arun Gupta
    13:30-17:00 Lambdas/Date & Time API -- Richard Warburton & Raoul-Gabriel Urma (also a book signing with Richard Warburton during the afternoon break)
    14:30-17:30 Data Grids -- Peter Lawrey
    14:30-18:00 Money & Currency -- Anatole Tresch
    18:45 Adopt OpenJDK BoF session (Java EE BoF runs concurrently)
    19:45 JCP & Adopt-a-JSR BoF session
    Friday 13 June
    10:20-13:00 OpenJDK -- Mani Sarkar
    10:20-14:30 Money & Currency -- Anatole Tresch
    10:20-13:00 Java EE -- Peter Pilgrim
    13:00-13:30 Peter Pilgrim Java EE 7 book signing sponsored by JCP @ lunch time
    13:30-15:30 JCP.Next/JSR 364 -- Heather VanCura

    Read the article

  • New videos: Getting started with embedded Java and more

    - by terrencebarr
    OTN just published a set of six videos related to embedded Java:
    - Java at ARM TechCon
    - Java SE Embedded Development Made Easy, Part 1
    - Java SE Embedded Development Made Easy, Part 2
    - Mobile Database Synchronization – Healthcare Demonstration
    - Tomcat Micro Cluster
    - Java Embedded Partnerships
    Good stuff. Enjoy! Cheers, – Terrence Filed under: Mobile & Embedded Tagged: embedded, Java Embedded, Java SE Embedded, video

    Read the article

  • Thank You MySQL Community! MySQL 5.6.9 Release Candidate Available Now!

    - by Rob Young
    The MySQL Community continues its good work in testing and refining MySQL 5.6, and as such the next iteration of the 5.6 Release Candidate is now available for download.  You can get MySQL 5.6.9 here (look under the "Development Releases" tab).  This version is the result of feedback we have gotten since MySQL 5.6.7 was announced at MySQL Connect in late September. As iron sharpens iron, Community feedback sharpens the quality and performance of MySQL so please download 5.6.9 and let us know how we can improve it as we move toward the production-ready product release in early 2013. MySQL 5.6 is designed to meet the agility demands of the next generation of web apps and services and includes across the board improvements to the Optimizer, InnoDB performance/scale and online DDL operations, self-healing Replication, Performance Schema Instrumentation, Security and developer enabling NoSQL functionality.  You can learn all the details and follow MySQL Engineering blogs on all of the key features in this MySQL DevZone article. On a related note, plan to join this week's live webinars to learn more about MySQL 5.6 Self-Healing Replication Clusters and Building the Next Generation of Web, Cloud, SaaS, Embedded Application and Services with MySQL 5.6.  Hurry!  Seating is limited!  As always, thanks for your continued support of MySQL!

    Read the article

  • JSR Updates and Inactive JSRs

    - by heathervc
     The following JSRs have made progress in the JCP program this week: JSR 342, Java Platform, Enterprise Edition 7 (Java EE 7) Specification, has posted an Early Draft 2 Review.  This review closes 30 November. JSR 338, Java Persistence 2.1, has posted an Early Draft 2 Review.  This review closes 30 November.  JSR 346, Contexts and Dependency Injection for Java, EE 1.1, has posted a Public Review.  This review closes 3 December.  JSR 352, Batch Applications for the Java Platform, has posted a Public Review.  This review closes 3 December. Inactive JSRs: In 2008, we initiated an effort to identify JSRs that had not continued to make progress in the JCP program.  We have reported on this topic since that time at JCP Executive Committee Meetings. The term 'Inactive JSRs' was introduced, and a process was developed with the guidance of the EC to reduce the number of Inactive JSRs  (reduced from over 60 to 2 JSRs) through either moving to the next JSR stage or being Withdrawn or declared Dormant.  This process has been formalized in JCP 2.8 and above, with the introduction of JSR deadlines.  The JSRs which were put to a Dormancy Ballot in September 2012  have been approved by the EC and are now declared Dormant.  You can view the results of the JSR Voting on JCP.org.  The latest Inactive JSRs report is available as part of the September 2012 JCP EC Face-to-Face Meeting Materials. 

    Read the article

  • OAGi Architecture Council OAGIS Ten Work Group Completes first round review of Concepts for OAGIS Ten

    - by michael.rowell
    Today the OAGi Architecture Council OAGIS Ten Work Group completed the first level review of concepts for existing content for OAGIS Ten. This is one of the first milestones for OAGIS Ten. In doing this, the concepts of key objects (the Nouns) have been identified along with the key context for their use. While OAGIS Ten remains a work in progress, the work group shows progress. Going forward, the other councils will provide additional input to these and their own concepts and the contexts for each. Additionally, sub groups will focus on concepts for given domains. Stay tuned for future progress. If anyone is interested in joining the effort, OAGi membership is open to anyone; please see the OAGi Web site.

    Read the article

  • JSF 2.2 recent progress - Early Draft

    - by alexismp
    JSF specification lead Ed Burns has an update on the progress of JSF 2.2, another component which should be required as part of the upcoming Java EE 7 standard. This includes a reminder of the scope of this specification, the availability of the early draft, and eight specific features that are being worked on, split into "Mostly Specified Features" and "Not Yet Fully Specified Features" (I think you can read the latter as "at risk"). My favorite is "763-EverythingIsInjectable". Remember that JSF 2.2 is due out in the middle of 2012, which is in time to be integrated in the Java EE 7 platform JSR (currently scheduled for the second half of 2012). In the meantime, JSF 2.2 nightly builds are available.

    Read the article

  • SOA Suite 11g Releases

    - by antony.reynolds
    A few years ago Mars renamed one of the most popular chocolate bars in England from Marathon to Snickers. Even today there are still some people confused by the name change and refer to them as Marathons. Well, last week we released SOA Suite 11.1.1.3 and BPM Suite 11.1.1.3 as well as OSB 11.1.1.3. Seems that some people are a little confused by the naming and how to install these new versions, probably the same Brits who call Snickers a Marathon :-). Seems that calling all the revisions 11g Release 1 has caused confusion. To help these people I have created a little diagram to show how you can get the latest version onto your machine. The dotted lines indicate dependencies. Note that SOA Suite 11.1.1.3 and BPM 11.1.1.3 are provided as a patch that is applied to SOA Suite 11.1.1.2. For a new install there is no need to run the 11.1.1.2 RCU; you can run the 11.1.1.3 RCU directly. All SOA & BPM Suite 11g installations are built on a WebLogic Server base. The WebLogic 11g Release 1 version is 10.3 with an additional number indicating the revision. Similarly, the 11g Release 1 SOA Suite, Service Bus and BPM Suite have a version 11.1.1 with an additional number indicating the revision. The final revision number should match the final revision in the WebLogic Server version. The products are also sometimes identified by a Patch Set number, indicating whether this is the 11gR1 product with the first or second patch set. The table below shows the different revisions with their alias.
    Product            Version    Base WebLogic    Alias
    SOA Suite 11gR1    11.1.1.1   10.3.1           Release 1 or R1
    SOA Suite 11gR1    11.1.1.2   10.3.2           Patch Set 1 or PS1
    SOA Suite 11gR1    11.1.1.3   10.3.3           Patch Set 2 or PS2
    BPM Suite 11gR1    11.1.1.3   10.3.3           Release 1 or R1
    OSB 11gR1          11.1.1.3   10.3.3           Release 1 or R1
    Hope this helps some people. If you find it useful you could always send me a Marathon bar, sorry, Snickers!

    Read the article

  • Why All The Hype Around Live Help?

    - by ruth.donohue
    I am pleased to introduce guest blogger Damien Acheson today. Based in Cambridge, MA, Damien is the Product Marketing Manager for ATG’s Live Help products. Welcome, Damien!!
    BY DAMIEN ACHESON
    Why all the hype around live help?
    An eCommerce professional recently asked me: “Why all the hype around live chat and click to call? I already have a customer service phone number that’s available to my online visitors. Why would I want to add live help? If anything, I want my website to reduce the number of calls to my contact center, not increase it!” The effect of adding live help to a website is counter-intuitive. Done right, live help doesn’t increase your call volume; it optimizes it by replacing traditional telephone calls with smarter, more productive, live voice and live chat interactions. This generates instant cost savings, and a measurable lift in sales and customer retention. A live help interaction differs from a traditional telephone call in six radical ways:
    Targeting. With live help you can target specific visitors at just the exact right time with a live call or live chat invitation based on hundreds of different parameters. For example, visitors who appear to hesitate before making a large purchase may receive a live help invitation, while others may not.
    Productivity. By reserving live voice for visitors with complex questions, and offering self-service and live chat for more simple interactions, agents with the right domain expertise can handle simultaneous queries and achieve substantial productivity gains.
    Routing. Live help interactions take into account visitors’ web context to intelligently route queries to the best available agent, thereby lifting first contact resolution.
    Context. Traditional telephone numbers force online customers to “change channels” and “start over” with a phone agent. With live help, agents get the context of the web session and can instantly access the customer’s transaction details and account information, substantially reducing handle times.
    Interaction. Agents can solve a customer’s problem more effectively by co-browsing and collaborating with the visitor in real time to complete online forms and transactions.
    Analytics. Unlike traditional telephone numbers, live help allows you to tie web analytics to customer satisfaction and agent performance indicators.
    To better understand these differences and advantages over traditional customer service, watch this demo on optimizing customer interactions with Live Help. Technorati Tags: ATG,Live Help,Commerce

    Read the article

  • QCon SF 2011

    - by user12607987
    To San Francisco for QCon SF 2011, where I spoke on Java SE: Where We've Been, Where We're Going. QCon is much further "up the stack" than JavaOne, so has far fewer talks about the "foundation", Java SE. I thought it was important to review the features delivered in Java SE 7 before discussing what's planned for Java SE 8. This worked out well, as most of the audience were using Java SE 6. The language changes in SE 7 look small, but examining merely two of them - precise rethrow and suppressed exceptions - reveals a new exception handling idiom applicable to many thousands of Java classes. And thumbs up to the QCon organizers for the instant feedback mechanism!
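    As a quick refresher on the two Java SE 7 language changes called out above, here is a small, self-contained sketch (my own illustrative example, not taken from the talk) showing precise rethrow and suppressed exceptions:

    import java.io.IOException;
    import java.sql.SQLException;

    public class Se7Features {

        // Precise rethrow: although the catch parameter is typed as Exception,
        // the compiler knows only IOException or SQLException can escape the
        // try block, so the method does not have to declare "throws Exception".
        static void copy(boolean db) throws IOException, SQLException {
            try {
                if (db) throw new SQLException("db failure");
                throw new IOException("io failure");
            } catch (Exception e) {
                throw e; // rethrows the original, precisely typed exception
            }
        }

        // A resource whose close() always fails, to demonstrate suppression.
        static class FailingResource implements AutoCloseable {
            @Override public void close() throws IOException {
                throw new IOException("close failed");
            }
        }

        // Suppressed exceptions: if both the body and close() throw, the
        // close() failure is attached to the primary exception instead of
        // silently replacing it, as it would have in a finally block.
        static void readResource() throws IOException {
            try (FailingResource res = new FailingResource()) {
                throw new IOException("read failed");
            }
        }

        public static void main(String[] args) {
            try {
                readResource();
            } catch (IOException e) {
                System.out.println("primary: " + e.getMessage());
                for (Throwable s : e.getSuppressed()) {
                    System.out.println("suppressed: " + s.getMessage());
                }
            }
        }
    }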

    Read the article
