Search Results

Search found 20172 results on 807 pages for 'oracle forms to adf'.


  • New qeep app for Java ME feature phones: meet qeepy people

    - by hinkmond
    Is it "qeepy" if you meet people by using your cell phone instead of, you know, talking to them? Nah. Not if it's a Java ME cell phone! See: Use Qeep to Meet Peeps Here's a quote: Qeep is a free app, and compatible with over 1,000 Java-enabled feature phones... ... Qeep is one of the world's largest mobile gaming and social discovery platforms. Members of the mobile community can play live multiplayer games; blog photos; send sound attacks, text messages and virtual gifts; and meet new friends worldwide. So, go on. Go, use Qeep on your Java ME feature phone to play multiplayer games, blog photos, and meet new friends worldwide. No one will think that you're weird... Not much, at least. Hinkmond

  • Take Two: Comparing JVMs on ARM/Linux

    - by user12608080
    Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7, it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E. In fact there were two main concerns:

    - The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an unlevel playing field, because version 7 is newer and therefore potentially more optimized.
    - The generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light.

    With those considerations in mind, we'll institute the following changes for this round of benchmarking:

    - To help alleviate the additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo. Funded and supported by many prestigious organizations, DaCapo's aim is to benchmark real-world applications. Further information about DaCapo can be found at http://dacapobench.org.
    - At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to ensure that the OpenJDK implementations were built with more optimal compiler settings. The distribution in this instance is Ubuntu 11.10 Oneiric Ocelot.
    - Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks we'll switch to an embedded system that has a supported Ubuntu 11.10 release: the Freescale i.MX53 Quick Start Board. It has an ARMv7 Cortex-A8 processor running at 1GHz with 1GB RAM.

    We'll limit comparisons to four JVM implementations:

    - Java SE-E 7 Update 2, c1 compiler (default)
    - Java SE-E 6 Update 30 (c1 compiler is the only option)
    - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2, CACAO build 1.1.0pre2
    - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2, JamVM build-1.6.0-devel

    Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive. The Java SE 7u2 c2 compiler was also removed because, although quite respectable, it did not perform as well as the c1 compilers. Recall that c2 works optimally in long-lived situations, and many of these benchmarks complete in a relatively short period of time. To get a feel for where c2 shines, take a look at the first chart in this blog.

    The first chart that follows includes the performance of all benchmark runs on all platforms; later on we'll look at individual tests. In all runs, smaller means faster. The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed. The reason is that these 10 tests are the only ones successfully completed by all 4 JVMs. Only Java SE-E 6u30 could successfully run all of the tests; both OpenJDK instances not only failed to complete certain tests but also experienced VM aborts.

    One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regard to performance. While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform, balancing out any potential performance gains at this point. We are still early into Java SE 7, and we would expect further performance enhancements for Java SE-E 7 in future updates.

    In comparing Java SE-E to OpenJDK performance, among the two OpenJDK VMs, CACAO results are respectable in 4 of the 10 tests. The charts that follow show the individual results of those four tests. Both Java SE-E versions still win every test, outperforming CACAO by 9% to 55%. For the remaining 6 tests, Java SE-E significantly outperforms CACAO, by 114% to 311%. So OpenJDK results are mixed for this round of benchmarks: in some cases performance looks to have improved, but in the majority of instances OpenJDK still lags behind Java SE-Embedded considerably.

    Time to put on my asbestos suit. Let the flames begin...
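
    For anyone who wants to reproduce the setup, a DaCapo run is a single java invocation per benchmark. Here is a minimal sketch, assuming the DaCapo 9.12 "bach" release and two benchmark names from that suite; check http://dacapobench.org for the exact jar and benchmark list used for these charts:

        # Hypothetical invocation; the jar name, iteration count and benchmark
        # names are assumptions, not the exact commands behind the results above.
        java -jar dacapo-9.12-bach.jar -n 3 avrora
        java -jar dacapo-9.12-bach.jar -n 3 luindex

    The -n flag asks the harness to run several iterations of the same benchmark, which helps separate warm-up effects from steady-state numbers.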

  • SQL Saturday 43 (Redmond, WA) Review

    - by BuckWoody
    Last Saturday (June 12th) we held a “SQL Saturday” (more about those here) event in Redmond, Washington. The event was held at the Microsoft campus, at the Mixer in our new location called the “Commons”. This is a mall-like area that we have on campus, and the Mixer is a large building with lots of meeting rooms, so it made a perfect location for the event. There was a sign to find the parking, and once there they had a sign to show how to get to the building. Since it’s a secure facility, Greg Larsen and crew had a person manning the door so that even late arrivals could get in.

    We had about 400 sign up for the event, and a little over 300 attend (official numbers later). I think we would have had a lot more, but the sun was out – and you just can’t overstate the effect of that here in the Pacific Northwest. We joke a lot about not seeing the sun much, but when a day like what we had on Saturday comes around, and on a weekend at that, you’d cancel your wedding to go outside to play in the sun. And your spouse would agree with you for doing it.

    We had some top-notch speakers, including Clifford Dibble and Kalen Delaney. The food was great, we had multiple sponsors (including Confio, who seems to be at all of these), and the attendees were from all over the professional spectrum, from developers to BI to DBAs. Everyone I saw was very engaged, and when I visited room-to-room I saw almost no one in the halls – everyone was in the sessions. I also saw a much larger Microsoft presence this year, especially from Dan Jones’ team. I had a great turnout at my session, and yes, I was wearing an Oracle staff shirt. I did that because I wanted to show that the session I gave on “SQL Server for the Oracle DBA” was non-marketing – I couldn’t exactly bash Oracle wearing their colors!

    These events are amazing. I can’t emphasize enough how much I appreciate the volunteers and how much work they put into these events – and thank you for coming. If you’re reading this and you haven’t attended one yet, definitely find out if there is one in your area – and if not, start one. It’s a lot of work, but it’s totally worth it.

  • Inline template efficiency

    - by Darryl Gove
    I like inline templates, and use them quite extensively. Whenever I write code with them I'm always careful to check the disassembly to see that the resulting output is efficient. Here's a potential cause of inefficiency. Suppose we want to use the mis-named Leading Zero Detect (LZD) instruction on T4 (this instruction does a count of the number of leading zero bits in an integer register - so it should really be called leading zero count). So we put together an inline template called lzd.il looking like:

        .inline lzd
          lzd     %o0,%o0
        .end

    And we throw together some code that uses it:

        int lzd(int);

        int a;
        int c=0;

        int main()
        {
          for(a=0; a<1000; a++)
          {
            c=lzd(c);
          }
          return 0;
        }

    We compile the code with some amount of optimisation, and look at the resulting code:

        $ cc -O -xtarget=T4 -S lzd.c lzd.il
        $ more lzd.s
        .L77000018:
        /* 0x001c  11 */  lzd     %o0,%o0
        /* 0x0020   9 */  ld      [%i1],%i3
        /* 0x0024  11 */  st      %o0,[%i2]
        /* 0x0028   9 */  add     %i3,1,%i0
        /* 0x002c     */  cmp     %i0,999
        /* 0x0030     */  ble,pt  %icc,.L77000018
        /* 0x0034     */  st      %i0,[%i1]

    What is surprising is that we're seeing a number of loads and stores in the code. Everything could be held in registers, so why is this happening? The problem is that the code is only inlined at the code generation stage - when the actual instructions are generated. Earlier compiler phases see a function call. The called functions can do all kinds of nastiness to global variables (like 'a' in this code), so we need to load them from memory after the function call, and store them to memory before the function call.

    Fortunately we can use a #pragma directive to tell the compiler that the routine lzd() has no side effects - meaning that it does not read or write to memory. The directive to do that is #pragma no_side_effect(<routine name>), and it needs to be placed after the declaration of the function. The new code looks like:

        int lzd(int);
        #pragma no_side_effect(lzd)

        int a;
        int c=0;

        int main()
        {
          for(a=0; a<1000; a++)
          {
            c=lzd(c);
          }
          return 0;
        }

    Now the loop looks much neater:

        /* 0x0014  10 */  add     %i1,1,%i1

                          ! 11 !  {
                          ! 12 !    c=lzd(c);

        /* 0x0018  12 */  lzd     %o0,%o0
        /* 0x001c  10 */  cmp     %i1,999
        /* 0x0020     */  ble,pt  %icc,.L77000018
        /* 0x0024     */  nop

  • Cloud Odyssey: A Hero's Quest Wins Two Telly Awards!

    - by Sandra Cheevers
    Cloud Odyssey: A Hero's Quest is a sci-fi movie experience that shows you the key success factors for guiding your own journey to the cloud. The movie depicts the journey to a mysterious cloud planet as a metaphor for YOUR journey to the cloud. And now, Cloud Odyssey: A Hero's Quest has received two Telly Awards, in the categories of Motivational and Use of Animation. It is truly an honor to be recognized in the company of so many outstanding entries from a wide range of major players, including Disney, Coca-Cola, NBC, and Discovery. Kudos to the Cloud Odyssey team!

  • JCP 2.9 & Transparency Spec Lead Call material is available

    - by Heather VanCura
    The JCP 2.9 & Transparency Spec Lead Call materials and recording from 9 November are now available on the JCP.org multimedia page.  Learn about changes introduced with JCP 2.9, effective Tuesday, 13 November, and a review of the JCP.Next reform efforts. Plus, a progress report on JCP 2.8, specifically around the areas of transparency, participation and agility, as well as suggestions for how you can get more involved in supporting these efforts with the current JCP program JSRs. 

  • Management Software in Java for Networked Bus Systems

    - by Geertjan
    Telemotive AG develops solutions for complex networked bus systems such as Ethernet, MOST, CAN, FlexRay, LIN and Bluetooth, as well as in-house products in infotainment, entertainment, and telematics related to driver assistance, connectivity, diagnosis, and e-mobility. Devices such as those developed by Telemotive typically come with management software, so that the device can be configured. (Just like an internet router comes with management software too.) The blue AdmiraL is a development and analysis device for the APIX (Automotive Pixel Link) technology. Here is its management tool: The blue PiraT is an optimised multi-data logger, developed by Telemotive specifically for the automotive industry. With the blue PiraT, the communication of bus systems and control units is monitored and relevant data can be recorded very precisely. And here is how the tool is managed: Both applications are created in Java and, as clearly indicated in many ways in the screenshots above, are based on the NetBeans Platform. More details can be found on the Telemotive site.

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 3)

    - by hinkmond
    In Part 2, I described what connections you need to make for this demo using a Hexbug Spider Robot, a Raspberry Pi, and Java SE Embedded for programming. Here are some photos of me doing the soldering. Software engineers should not be afraid of a little soldering work. It's all good. See: Skynet Big Data Demo (Part 2)

    One thing to watch out for when you open the remote is that there may be some glue covering the contact points. Make sure to use an X-Acto knife or small screwdriver to scrape away any glue or non-conductive material covering each place where you need to solder. And after you are done with your soldering and have given the solder enough time to cool, make sure all your connections are marked so that you know which wire goes where. Give each wire a very light tug to make sure it is soldered correctly and is making good contact.

    There are lots of videos on the Web to help you if this is your first time soldering. Check out Lady Ada's (from adafruit.com) links on how to solder if you need some additional help: http://www.ladyada.net/learn/soldering/thm.html

    If everything looks good, zip everything back up and meet back here for how to connect these wires to your Raspberry Pi. That will be it for the hardware part of this project. See, that wasn't so bad. Hinkmond

  • NetBeans IDE 7.3 Knows Null

    - by Geertjan
    What's the difference between these two methods, "test1" and "test2"?

        public int test1(String str) {
            return str.length();
        }

        public int test2(String str) {
            if (str == null) {
                System.err.println("Passed null!");
                //forgotten return;
            }
            return str.length();
        }

    The difference, or at least the difference that is relevant for this blog entry, is that whoever wrote "test2" apparently thinks that the variable "str" may be null, yet did not provide a null check before the dereference. In NetBeans IDE 7.3, you see a hint for "test2" but no hint for "test1", since in the latter case we don't know anything about the developer's intention for the variable, and providing a hint there would flood the source code with too many false positives.

    Annotations are supported in understanding how a piece of code is intended to be used. If a method return type uses @Nullable, @NullAllowed, or @CheckForNull, the value is considered to be "strongly possible to be null", as is a variable that has been tested against null, as shown above. When using @NotNull, @NonNull, or @Nonnull, the value is considered to be non-null. (The exact FQNs of the annotations are ignored; only simple names are checked.) Here are examples showing where the hints are displayed for the non-null hints (the "strongly possible to be null" hints are not shown below, though you can see one of them in the screenshot above), together with a comment showing what is shown when you hover over the hint:

    There isn't a "one size fits all" refactoring for these various instances relating to null checks, hence you can't do an automated refactoring across your code base via tools in NetBeans IDE, as shown yesterday for class member reordering across code bases. However, you can, instead, go to Source | Inspect and then scan a scope (e.g., current file/package/project, combinations of these, or all open projects) for class elements that the IDE identifies as potentially having a problem in this area.

    Thanks to Jan Lahoda for help with the info above; he reports that this currently also works in NetBeans IDE 7.3 dev builds for fields, though that may need to be disabled because right now too many false positives are returned. Any misunderstandings are my own fault!
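
    To make the annotation-driven behaviour concrete, here is a minimal self-contained sketch. The class and the locally declared annotations are my own for illustration; as noted above, only the simple annotation names matter, so the equivalents from javax.annotation or the NetBeans API annotations should drive the same hints:

        public class NullHintDemo {

            // The IDE checks only simple annotation names, so these local
            // declarations stand in for e.g. javax.annotation.CheckForNull/Nonnull.
            @interface CheckForNull {}
            @interface NonNull {}

            @CheckForNull
            static String lookup(String key) {          // may return null
                return "magic".equals(key) ? "value" : null;
            }

            static int length1(@NonNull String str) {
                return str.length();                    // no hint: declared non-null
            }

            static int length2(String key) {
                String value = lookup(key);             // @CheckForNull return value...
                return value.length();                  // ...so this dereference should be flagged
            }

            public static void main(String[] args) {
                System.out.println(length1("abc"));
                System.out.println(length2("magic"));
            }
        }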

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

        CREATE TABLE t(a INT);
        INSERT INTO t VALUES (1),(2),(3);
        CREATE INDEX a ON t(a);
        DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

        CREATE TABLE temp(a INT, INDEX(a));
        INSERT INTO temp SELECT * FROM t;
        RENAME TABLE t TO temp2;
        RENAME TABLE temp TO t;
        DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=0. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes.

    In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY; that one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED.

    Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    - handler::check_if_supported_inplace_alter: checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.
    - handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for an upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
    - handler::inplace_alter_table: in InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table).
    - handler::commit_inplace_alter_table: this is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation.

    In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries, but it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

    - Set up a stub for the index, for logging changes.
    - Scan the table for index records.
    - Sort the index records.
    - Bulk load the index records.
    - Apply the logged changes.
    - Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes.

    The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases:

    - when adding an AUTO_INCREMENT column,
    - when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or
    - when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option.

    The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN.

    The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write the changes to the log.

    When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns, because the columns will be freed at rollback.
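
    To tie the interface back to concrete SQL, here is a short sketch of the MySQL 5.6 syntax described above. The table and index names are invented for illustration:

        CREATE TABLE t (a INT, b VARCHAR(100)) ENGINE=InnoDB;

        -- Online secondary index creation: concurrent writes to t are allowed
        -- while handler::inplace_alter_table does the scan/sort/bulk load.
        ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE;

        -- Force the old 5.1-style table copy instead; requires at least LOCK=SHARED.
        ALTER TABLE t ADD INDEX idx_b (b), ALGORITHM=COPY, LOCK=SHARED;

        -- Quick drop: no table rebuild is needed.
        ALTER TABLE t DROP INDEX idx_a, ALGORITHM=INPLACE, LOCK=NONE;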

  • Schema Based Code Completion for NetBeans Platform Applications

    - by Geertjan
    Toni's recent blog entry provides, among several other interesting things, instructions for something I've been wanting to cover for a long time, which is schema based code completion: The above is a sample I created via Toni's tutorial, using the schema described here: http://www.w3schools.com/schema/schema_example.asp The support for the Navigator ain't bad either, especially considering I didn't do any coding at all to get all this: And here's where you can find the whole sample: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.2/misc/ShipOrder

  • RPi and Java Embedded GPIO: Connecting LEDs

    - by hinkmond
    Next, we need some low-level peripherals to connect to the Raspberry Pi GPIO header. So, we'll do what's called a "Fry's Run" in Silicon Valley, which means we go shop at the local Fry's Electronics store for parts. In this case, we'll need some breadboard jumper wires (blue wires in photo), some LEDs, and some resistors (for the RPi GPIO, 150 ohms - 300 ohms would work for the 3.3V output of the GPIO ports). And, if you want to do other projects, you might as well buy a breadboard, which is a development board with lots of holes in it. Ask a Fry's clerk for help. Or, better yet, ask the customer standing next to you in the electronics components aisle for help. (Might be faster) So, go to your local hobby electronics store, or go to Fry's if you have one close by, and come back here to the next blog post to see how to hook these parts up. Hinkmond

  • Reading a ZFS USB drive with Mac OS X Mountain Lion

    - by Karim Berrah
    The problem: I'm using a MacBook, mainly with Solaris 11, but sometimes with Mac OS X (Mountain Lion). The only missing piece is that Mac OS X can't read my external ZFS-based USB drive, where I store all my data. So, I decided to look for a solution.

    Possible solution: use VirtualBox with a Solaris 11 VM as a passthrough to my data. Here are the required steps.

    Installing a Solaris 11 VM

    - Install VirtualBox on your Mac OS X, and add the extension pack (needed for USB).
    - Plug your ZFS-based USB drive into your Mac; ignore it when asked to initialize it.
    - Create a VM for Solaris (bridged network), and before installing it, create a USB filter (in the settings of your VirtualBox VM, go to Ports, then USB, then add a new USB filter from the attached device - the grey usb-connector logo with the green plus sign).
    - Install a Solaris 11 VM, boot it, and install the Guest Additions.
    - Check the IP address of your Solaris VM with "ifconfig -a".

    Creating a path to your ZFS USB drive

    - In Mac OS X, use the Disk Utility to unmount the USB-attached drive, and unplug the USB device.
    - Switch back to VirtualBox and select the top of the window where your Solaris 11 VM is running.
    - Plug in your ZFS USB drive, and select "ignore" if Mac OS X invites you to initialize the disk.
    - In the VirtualBox VM menu, go to "Devices", then "USB Devices", and select your USB device from the drop-down menu.

    Connecting your Solaris VM to the USB drive

    - Inside Solaris, check that your device is accessible by using the "format" CLI command. If not, repeat the previous steps.
    - Now, with root privilege, force a "zpool import -f myusbdevicepoolname", because this pool was created on another system.
    - Check that you see your new pool with "zpool status".
    - Share your pool over NFS: share -F nfs /myusbdevicepoolname

    Accessing the USB ZFS drive from Mac OS X

    This is the easiest step: access an NFS share from Mac OS X.

    - Create a "ZFSdrive" folder on your Mac OS X desktop.
    - From a terminal under Mac OS X: mount -t nfs IPaddressOfMySolarisVM:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive

    Et voilà! You can access your data, on a ZFS USB drive, directly from your Mountain Lion desktop. You can adjust the share rights in order to alter read/write permissions as needed, and you can activate compression or encryption inside the Solaris 11 VM...
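
    For convenience, here is the command-line portion of the recipe collected in one place, as a minimal sketch. The pool name "mypool" and the IP address are placeholders for your own values:

        # Inside the Solaris 11 VM, as root ("mypool" is a placeholder pool name):
        zpool import -f mypool        # -f because the pool was created on another system
        zpool status mypool
        share -F nfs /mypool

        # On the Mac OS X side (replace 192.168.1.50 with your VM's IP address):
        mkdir -p ~/Desktop/ZFSdrive
        sudo mount -t nfs 192.168.1.50:/mypool ~/Desktop/ZFSdrive
        # If the mount is refused, "-o resvport" is sometimes needed by the OS X NFS client.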

  • Blog on hiatus once more

    - by Steven Chan
    I am off for a much-needed vacation, so this blog is going on hiatus until mid-June.  You're welcome to post comments and questions; they'll be reviewed and approved for publication in my absence.  However, I won't be publishing any new articles until my return.See you in a few weeks.

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general.)

    A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies underlying your particular layer.

    On the topic of confounding factors: as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance, rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on, otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general, for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries.

    Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady-state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem.) At this point you'll need to reconcile what you're seeing with pstack and your mental model of what you think the program should be doing. They're often rather different. And generally, if there's a key performance issue, you'll spot it with a moderate number of samples.

    I'll also use OS-level observability tools to look for bottlenecks where threads contend for locks, other situations where threads are blocked, and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, then it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block for the malloc lock, and your "usr" time would be less than expected.

    Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS- and software-level performance issues as opposed to HW-level issues. For that reason I recommend mpstat and pstack as the first step in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
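
    As a quick reference, here is a minimal sketch of that first-order triage pass on Solaris. The process id 1234 and the sampling intervals are placeholders:

        # Take a few stack snapshots of the running process; look for blocking
        # system calls, lock acquisition, and allocation in the hot frames.
        pstack 1234
        sleep 2
        pstack 1234

        # Watch the system while the application runs; note the csw (voluntary
        # context switch) rate and the usr/sys/idl percentages.
        mpstat -a 5

        # A broader, coarser view; useful to a lesser degree for this purpose.
        vmstat 5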

  • Selecting Items in a GeoToolkit Driven Map

    - by Geertjan
    When you take a look at all the tools provided by GeoToolkit, you'll be quite impressed. For example, within the US map shown in yesterday's blog entry, you can drill down into individual states by selecting them via the mouse, as shown below: With that, the basis of a more complex application is laid, since all the map-related functionality is handed to you out of the box. The sample referred to yesterday has been updated; if you check it out and run it (assuming you've taken the additional steps mentioned yesterday), you'll see the above. http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/tutorials/geospatial/geotoolkit/MyGeospatialSystem

  • Library order is important

    - by Darryl Gove
    I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries. So let's take a simple program that calls a maths library function:

        #include <math.h>

        int main()
        {
          for (int i=0; i<10000000; i++)
          {
            sin(i);
          }
        }

    We compile and run it to get the following performance:

        bash-3.2$ cc -g -O fp.c -lm
        bash-3.2$ timex ./a.out

        real          6.06
        user          6.04
        sys           0.01

    Now most people will have heard of the optimised maths library which is added by the flag -xlibmopt. This contains optimised versions of key mathematical functions, and in this instance, using the library doubles performance:

        bash-3.2$ cc -g -O -xlibmopt fp.c -lm
        bash-3.2$ timex ./a.out

        real          2.70
        user          2.69
        sys           0.00

    The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm. We can see the processing by asking the compiler to print out the link line:

        bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm
        /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out...

    The flag to the linker is -lmopt, and this is placed before the -lm flag. So what happens when the -lm flag is in the wrong place on the command line:

        bash-3.2$ cc -g -O -xlibmopt -lm fp.c
        bash-3.2$ timex ./a.out

        real          6.02
        user          6.01
        sys           0.01

    If the -lm flag is before the source file (or object file for that matter), we get the slower performance from the system maths library. Why's that? If we look at the link line we can see the following ordering:

        /usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out

    So the optimised maths library is still placed before the system maths library, but the object file is placed afterwards. This would be ok if the optimised maths library were a shared library, but it is not - instead it's an archive library, and archive library processing is different - as described in the linker and library guide:

    "The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered."

    An archive library can only be used to resolve symbols that are outstanding at that point in the link processing. When fp.o is placed before the libmopt.a archive library, the linker has an unresolved symbol defined in fp.o, and it will search the archive library to resolve that symbol. If the archive library is placed before fp.o then there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library. This is why libmopt needs to be placed after the object files on the link line.

    On the other hand, if the linker has observed any shared libraries, then at any point these are checked for any unresolved symbols. The consequence of this is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them. This is why libmopt needs to be placed before libm on the link line.

    This leads to the following order for placing files on the link line:

    1. Object files
    2. Archive libraries
    3. Shared libraries

    If you use this order, then things will consistently get resolved to the archive libraries rather than to the shared libraries.
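
    As a rule-of-thumb illustration, a link line that follows this ordering might look like the following sketch (the object file and archive names here are placeholders, not real libraries):

        # Object files first, then archive libraries, then shared libraries.
        cc -g -O main.o util.o libfast.a -lm -lsocket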

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an “Enterprise 2.0” environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Here are some key points and takeaways from the keynotes yesterday at the Enterprise 2.0 Conference:

    - Social networks continue to forge complex connections between people, processes, and content, facilitating collaboration and the sharing of information.
    - The customer of today lives inside of Facebook, on the web, or has an app for that – and they have a question – and want an answer NOW.
    - Empowered employees are able to connect to colleagues, build relationships, develop expertise, self-select projects of interest to them, and expand skill sets well beyond their formal roles.
    - A fundamental promise of Enterprise 2.0 is that ideas will be generated and shared by everyone across the organization, leading to increased innovation, agility, and competitive advantage.

    How well is your organization delivering on these concepts? Are you able to successfully bring together people, processes and content? Are you providing the social tools your employees want and need? Are you experiencing the new social enterprise?

  • Salon du E-commerce et Social CRM B2B

    - by Valérie De Montvallon
    We took part in the Salon du E-commerce et Social CRM B2B last September, and here is the video produced by Les décideurs de la relation client. Hear from customer-relationship experts and learn more about B2B Social CRM. In B2B, managing the customer relationship seems simple enough when it comes to gathering information from phone calls, face-to-face meetings, or emails. The task becomes harder on social networks, however. Are these platforms really suited to B2B? How should you proceed when getting started? What are the pitfalls to avoid? What suggests that B2B Social CRM is a genuine trend in customer relationship management? These are some of the questions to which the experts interviewed offer answers. You will find the interview with our expert, Khalid Madarbokus, who discusses how information from social media flows back to the departments of a B2B company (at 3:20).

  • Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups

    - by The Old Toxophilist
    By default, running your Exalogic virtualised provides what is, to Cloud Users, a single large resource: they can simply create vServers and not care about how those vServers are laid down on the underlying infrastructure. All the Cloud Users know is that they can create vServers. For example, if we have a Quarter Rack (8 nodes) and our Cloud User creates 8 vServers, those 8 vServers may run on 8 distinct nodes or may all run on the same node.

    Although in many cases we, as Cloud Users, may not be too worried about how the virtualisation algorithm decides where to place our vServers, there are cases where it is extremely important that vServers run on distinct physical compute nodes. For example, if we have a WebLogic cluster, we will want the servers within the cluster to run on distinct physical nodes, to cover the situation where one physical node is lost. To achieve this, the Exalogic virtualised implementation provides Distribution Groups, which define an anti-affinity policy that the underlying virtualisation algorithm takes into account when placing vServers. It should be noted that Distribution Groups must be created before you create vServers, because a vServer can only be added to a Distribution Group at creation time.

    Creating a Distribution Group

    To create a Distribution Group we will first need to select the Account in which we want the Distribution Group to be created. Once we have selected the account we will see the interface update, and Account-specific Actions will be displayed within the Action panes. From the Action pane (or by right-clicking on the Account) select the "Create Distribution Group" action. This will initiate the create wizard, as follows.

    Distribution Group Details

    Within the first step of the wizard we can specify the name of the distribution group, which should be unique. In addition we can provide a detailed description of the group.

    Distribution Group Configuration

    The second step of the configuration wizard allows you to specify the number of elements required within this group, up to a maximum of the number of nodes within your Exalogic. At this point it is always better to specify a group with spare capacity, allowing for future expansion. As vServers are added to the group, the available slots decrease.

    Summary

    Finally, the last step of the wizard displays a summary of the information entered.

  • Free MySQL Seminars in May

    - by A&C Redaktion
    In May we are running numerous MySQL seminars for you, each with a different focus: from a "scalability day" to a hands-on MySQL Enterprise workshop to an overview of the high-availability solutions for MySQL, including a practical example from the field. We would be delighted to welcome you to one of these seminars. You can find the individual dates and registration links here. We look forward to your participation!

  • Will We See More Partisan Splits from the SEC?

    - by Theresa Hickman
    The SEC's lawsuit against Goldman Sachs has made recent headlines. The fact that the SEC seems to be growing more litigious by making examples out of individuals and companies is not the topic of my blog. The most interesting thing about this case is that the 5 SEC commissioners did not vote unanimously to bring the lawsuit. The commissioners had a 3-2 partisan split. Ms. Schapiro (a registered independent) voted with the 2 Democrats. Split votes rarely happen at the SEC, especially when it is enforcing actions against firms it regulates. I wonder if we will be seeing more of these partisan split votes when it comes to other decisions, say IFRS adoption? I know both the Democrats and Republicans have stated that they support a unified accounting standard. However, will the Republicans want to push back simply because there is a Democrat in office? (Seems childish to me, but I never understood politics.) I think Ms. Schapiro will most definitely want a unanimous consensus on the IFRS topic. There is already talk that we will be seeing more SEC split votes in the future. For example, there will most likely be a split vote regarding Obama's proposed financial-regulatory overhaul. I don't see why IFRS would be exempt. I really hope it doesn't happen, because the last thing we need is more roadblocks on our IFRS road trip.

  • Finding which activities will execute next in a process instance

    - by Mark Nelson
      We have had a few queries lately about how to find out what activity (or activities) will be the next to execute in a particular process instance.  It is possible to do this, however you will need to use a couple of undocumented APIs.  That means that they could (and probably will) change in some future release and break your code.  If you understand the risks of using undocumented APIs and are prepared to accept that risk, read on… READ MORE >>

  • Mobile or the Science of Programming Languages

    - by user12652314
    Just two things to share today. First is some news in the mobile computing space and a pretty cool new relationship developing between DubLabs and AT&T to enable a student-centric mobile experience for our Campus Solutions customers. And second is an interesting article, shared by a friend, on Research in Programming Languages related to STEM education - a key story element of my project with the America's Cup and iED, but also a matter of national interest.
