Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.

  • Virtual Grocery Store

    - by David Dorf
    Because South Koreans are so busy, Tesco decided that its Homeplus grocery chain should offer a virtual alternative in subways.  As you can see in the video below, shoppers passing through a subway station see a virtual representation of the store and can scan items with their mobile phones.  This builds a shopping list, which is delivered to their homes later that day.  This is a very cool example of leveraging technology to offer a shopping experience that's different from both bricks and clicks.

    Read the article

  • Unused Indexes on MDP_MATRIX Consuming Resources

    - by user702295
    Disable unused indexes: As much as it is recommended to create relevant indexes, it is advised not to have too many indexes on the mdp_matrix table.  Too many indexes will cause long waits on the table, as every index must be updated each time the table is updated.  There are many seeded indexes on mdp_matrix; every out-of-the-box data model level has an index on the matrix table.  If a level is unused in the specific data model of the implementation, it is advisable to disable that level's index.  If the customer is not sure if and how indexes are utilized, the DBA can monitor all indexes.  After a few cycles of operation, the DBA should go over that list, see which indexes have not been used, and consider disabling them.  There are scripts available to monitor index usage, or you can use the MONITORING USAGE clause of the ALTER INDEX statement.
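    A minimal sketch of that monitoring workflow (assuming an Oracle database; the index name below is a hypothetical placeholder):

        -- Ask Oracle to track whether the optimizer ever uses this index
        ALTER INDEX mdp_matrix_lvl1_ix MONITORING USAGE;

        -- After a few cycles of operation, check the usage flag
        -- (V$OBJECT_USAGE reports only indexes owned by the current user)
        SELECT index_name, monitoring, used
          FROM v$object_usage
         WHERE index_name = 'MDP_MATRIX_LVL1_IX';

        -- Stop monitoring; if USED stayed 'NO', consider disabling the index
        ALTER INDEX mdp_matrix_lvl1_ix NOMONITORING USAGE;
        ALTER INDEX mdp_matrix_lvl1_ix UNUSABLE;

    An UNUSABLE index is ignored by the optimizer and is no longer maintained by DML, which is what removes the update overhead described above.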

    Read the article

  • NetBeans IDE 7.3 Knows Null

    - by Geertjan
    What's the difference between these two methods, "test1" and "test2"?

        public int test1(String str) {
            return str.length();
        }

        public int test2(String str) {
            if (str == null) {
                System.err.println("Passed null!");
                //forgotten return;
            }
            return str.length();
        }

    The difference, or at least the difference that is relevant for this blog entry, is that whoever wrote "test2" apparently thinks that the variable "str" may be null, yet did not provide a null check.  In NetBeans IDE 7.3, you see a hint for "test2" but no hint for "test1", since in the latter case we don't know anything about the developer's intention for the variable, and providing a hint there would flood the source code with too many false positives.  Annotations are supported in understanding how a piece of code is intended to be used.  If method return types use @Nullable, @NullAllowed or @CheckForNull, the value is considered "strongly possible to be null", as it also is if the variable is tested for null, as shown above.  When using @NotNull, @NonNull or @Nonnull, the value is considered to be non-null.  (The exact FQNs of the annotations are ignored; only simple names are checked.)  Hints are displayed at the relevant statements for the non-null cases (the "strongly possible to be null" hints work the same way), together with a tooltip explaining the problem when you hover over the hint.  There isn't a "one size fits all" refactoring for these various instances relating to null checks, hence you can't do an automated refactoring across your code base via tools in NetBeans IDE, as shown yesterday for class member reordering across code bases.  However, you can instead go to Source | Inspect and then scan a scope (e.g., current file/package/project, combinations of these, or all open projects) for class elements that the IDE identifies as potentially having a problem in this area.  Thanks to Jan Lahoda for help with the info above; any misunderstandings are my own fault.  Jan reports that this currently also works for fields in NetBeans IDE 7.3 dev builds, but that may need to be disabled since right now too many false positives are returned.

    Read the article

  • Sources of NetBeans Gradle Plugin

    - by Geertjan
    Here is where you can find the sources of the latest and greatest NetBeans Gradle plugin: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.1/misc/GradleSupport To use it, download the sources above and open them in the IDE (which must be 7.1.1 or above); you'll then have a NetBeans module.  Right-click it to run the module in a new instance of NetBeans IDE.  In the Options window's Miscellaneous tab, there's a Gradle subtab for setting the Gradle location.  In the New File dialog, in the Other category, you'll find a template named "Empty Gradle file".  Make sure to name it "build" and to put it in the root directory of the application (by leaving the Folder field empty, you're specifying that it should be created in the root directory).  You'll then be able to expand the build.gradle file and double-click a task to run it.  When you open the file, it opens in the Groovy editor, if the Groovy editor is installed.  When you make changes in the file, the list of tasks is automatically recreated.  The plugin is at a really early stage of development, and it would be great if developers out there were interested in adding more features to it.

    Read the article

  • Tools for Enterprise Architects: OmniGraffle for iPad?

    - by pat.shepherd
    Well, I have to admit to being a bit of an Apple fan and, of course, an early adopter of gadgets and technology in general.  So, when FedEx showed up with my iPad 3G last week, I was a kid in a candy store.  One of the apps that my "buy finger" was hovering over for a while (like all of 3 days) was OmniGraffle for the iPad.  I imagined that it would be very cool to use this with a customer's EAs to sketch out Business, Application, Information and Technology architectures.  It seemed to offer promise as a white-boarding tool with obvious benefits over a traditional whiteboard.  I figured I'd get a VGA adapter, plug it into the customer's projector and off we would go with a great JAD tool.  The touch-pad approach offered an additional hands-on kind of feel.  So, I made the $49.99 purchase + the $29.99 VGA adapter and tried to give it a go.  Well, I was both pleasantly and unpleasantly surprised.  It is both powerful and easy to use.  There are great stencils included for shapes, software icons, Visio shapes, and even UML notation.  There is even a free-hand tool that works well.  I created some diagrams pretty quickly; the test diagram I made took all of 10 minutes to do.  The only problem was that OmniGraffle does not recognize the VGA output, so I was stopped dead in my tracks, as it were.  My use case was as a collaborative diagramming tool with other architects, though I can still use it offline.  I called the Omni Group and they said that VGA support is on the feature request list so, hopefully, in a short amount of time, I can use the tool as I envisioned.

    Review:
        Criteria                    Result
        Is it fun?                  Yes
        Is it useful?               Yes
        Does it show promise?       Yes
        Did the VGA output work?    No
        File/diagram formats        PDF, OmniGraffle proprietary, image

    OmniGraffle for iPad - Products - The Omni Group

    Read the article

  • JCP 2.9 & Transparency Spec Lead Call material is available

    - by Heather VanCura
    The JCP 2.9 & Transparency Spec Lead Call materials and recording from 9 November are now available on the JCP.org multimedia page.  Learn about the changes introduced with JCP 2.9 (effective Tuesday, 13 November) and review the JCP.Next reform efforts.  There is also a progress report on JCP 2.8, specifically around transparency, participation and agility, as well as suggestions for how you can get more involved in supporting these efforts with the current JCP program JSRs.

    Read the article

  • Library order is important

    - by Darryl Gove
    I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries.  So let's take a simple program that calls a maths library function:

        #include <math.h>

        int main() {
          for (int i=0; i<10000000; i++) {
            sin(i);
          }
        }

    We compile and run it to get the following performance:

        bash-3.2$ cc -g -O fp.c -lm
        bash-3.2$ timex ./a.out
        real        6.06
        user        6.04
        sys         0.01

    Now most people will have heard of the optimised maths library, which is added by the flag -xlibmopt.  This contains optimised versions of key mathematical functions; in this instance, using the library doubles performance:

        bash-3.2$ cc -g -O -xlibmopt fp.c -lm
        bash-3.2$ timex ./a.out
        real        2.70
        user        2.69
        sys         0.00

    The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm.  We can see the processing by asking the compiler to print out the link line:

        bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm
        /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out ...

    The flag to the linker is -lmopt, and it is placed before the -lm flag.  So what happens when the -lm flag is in the wrong place on the command line?

        bash-3.2$ cc -g -O -xlibmopt -lm fp.c
        bash-3.2$ timex ./a.out
        real        6.02
        user        6.01
        sys         0.01

    If the -lm flag comes before the source file (or object file, for that matter), we get the slower performance of the system maths library.  Why is that?  If we look at the link line we can see the following ordering:

        /usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out

    So the optimised maths library is still placed before the system maths library, but the object file is placed after both.  This would be fine if the optimised maths library were a shared library, but it is not - it's an archive library, and archive library processing is different, as described in the Linker and Libraries Guide: "The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered."  An archive library can only be used to resolve symbols that are outstanding at that point in the link processing.  When fp.o is placed before the libmopt.a archive library, the linker has an unresolved symbol from fp.o, and it will search the archive library to resolve that symbol.  If the archive library is placed before fp.o, there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library.  This is why libmopt needs to be placed after the object files on the link line.  On the other hand, once the linker has observed any shared libraries, these are checked for unresolved symbols at every point that follows.  The consequence is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them.  This is why libmopt needs to be placed before libm on the link line.  This leads to the following order for placing files on the link line:

        1. Object files
        2. Archive libraries
        3. Shared libraries

    If you use this order, symbols will consistently be resolved to the archive libraries rather than to the shared libraries.

    Read the article

  • GNOME 3.4 released, with smooth & fast magnification

    - by Peter Korn
    The GNOME community released GNOME 3.4 today. This release contains several new accessibility features, along with a new set of custom high-contrast icons which improve the user experience for users needing improved contrast. This release also makes available the AEGIS-funded GNOME Shell Magnifier. This magnifier leverages the powerful graphics functionality built into all modern video cards for smooth and fast magnification in GNOME. You can watch a video of that magnifier (with the previous version of the preference dialog), which shows all of the features now available in GNOME 3.4. This includes full/partial screen magnification, a magnifier lens, full or partial mouse cross hairs with translucency, and several mouse tracking modes. Future improvements planned for GNOME 3.6 include focus & caret tracking, and a variety of color/contrast controls.

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6.  John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin.  Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language.  The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE.  Consider the following example:

        CREATE TABLE t(a INT);
        INSERT INTO t VALUES (1),(2),(3);
        CREATE INDEX a ON t(a);
        DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

        CREATE TABLE temp(a INT, INDEX(a));
        INSERT INTO temp SELECT * FROM t;
        RENAME TABLE t TO temp2;
        RENAME TABLE temp TO t;
        DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one.  For example, it could run out of file space.  Then, on restart, InnoDB would roll back the huge INSERT transaction.  To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows.  Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1.  MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX.  The old table-copying approach can still be forced by SET old_alter_table=1.  This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1.  Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge sort before inserting the index records into each index that is being created.  This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.  The 5.1 ALTER TABLE interface was not perfect.  For example, DROP FOREIGN KEY still invoked the table copy.  Renaming columns could conflict with InnoDB foreign key constraints.  Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6.  The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6.  Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes.  In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE.  LOCK=SHARED and LOCK=EXCLUSIVE are also available.  The old-style table copying can be requested by ALGORITHM=COPY; that one requires at least LOCK=SHARED.  From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED.  Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE).  InnoDB will always require an exclusive table lock in two phases of the operation.  The execution phases are tied to a number of methods:

        handler::check_if_supported_inplace_alter: checks whether the storage engine can perform all requested operations and, if so, what kind of locking is needed.

        handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for an upcoming CREATE INDEX operation.  We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation.  Also, crash recovery would drop any indexes that were incomplete at the time of the crash.

        handler::inplace_alter_table: in InnoDB, this method is used for creating secondary indexes or for rebuilding the table.  This is the 'main' phase that can be executed online (with concurrent writes to the table).

        handler::commit_inplace_alter_table: this is where the operation is committed or rolled back.  Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation.  It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table.  If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation.  In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split.  Part of it is inside InnoDB data dictionary tables.  Part of the information is only available in the *.frm file, which is not covered by any crash recovery log.  But there is a single commit phase inside the storage engine.

    Online Secondary Index Creation.  It may occur that an index needs to be created on a new column to speed up queries, but it may be unacceptable to block modifications on the table while creating the index.  It turns out that it is conceptually not so hard to support online index creation.  All we need is some more execution phases:

        1. Set up a stub for the index, for logging changes.
        2. Scan the table for index records.
        3. Sort the index records.
        4. Bulk load the index records.
        5. Apply the logged changes.
        6. Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created.  Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread.  The log is conceptually similar to the InnoDB change buffer.  The bulk load of index records will bypass record locking.  We still generate redo log for writing the index pages.  It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE.  Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively.  The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes.  The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place.  For example, changing the ROW_FORMAT of a table requires a rebuild.  Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table with an ON…CASCADE or ON…SET NULL option.  The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.  Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format.  This would require the data dictionary to keep multiple versions of the table definition.  For simplicity, we will copy the entire table, even for DROP COLUMN.  The bulk copying of the table will bypass record locking and undo logging.  To facilitate online operation, a temporary log is associated with the clustered index of the table, and threads that modify the table also write their changes to this log.  When altering the table, we skip all records that have been marked for deletion; in this way, we can simply discard any undo log records that were not yet purged from the original table.  Off-page columns, or BLOBs, are an important consideration.  We suspend the purge of delete-marked records if it would free any off-page columns from the old table, because the BLOBs can be needed when applying changes from the log.  We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns, because those columns will be freed at rollback.
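    As a quick illustration of the 5.6 syntax discussed above (a sketch only; the table and index names are hypothetical):

        -- Create a secondary index online: concurrent DML remains allowed
        ALTER TABLE orders ADD INDEX ix_customer (customer_id),
              ALGORITHM=INPLACE, LOCK=NONE;

        -- Force the old table-copying behaviour; requires at least LOCK=SHARED
        ALTER TABLE orders DROP INDEX ix_customer,
              ALGORITHM=COPY, LOCK=SHARED;

    If an explicitly requested combination is not supported (for example, LOCK=NONE on a table with FULLTEXT indexes), the statement fails immediately rather than silently falling back to a more restrictive mode.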

    Read the article

  • Feynman's inbox

    - by user12607414
    Here is Richard Feynman writing on the ease of criticizing theories, and the difficulty of forming them: The problem is not just to say something might be wrong, but to replace it by something — and that is not so easy. As soon as any really definite idea is substituted it becomes almost immediately apparent that it does not work. The second difficulty is that there is an infinite number of possibilities of these simple types. It is something like this. You are sitting working very hard, you have worked for a long time trying to open a safe. Then some Joe comes along who knows nothing about what you are doing, except that you are trying to open the safe. He says ‘Why don’t you try the combination 10:20:30?’ Because you are busy, you have tried a lot of things, maybe you have already tried 10:20:30. Maybe you know already that the middle number is 32 not 20. Maybe you know as a matter of fact that it is a five digit combination… So please do not send me any letters trying to tell me how the thing is going to work. I read them — I always read them to make sure that I have not already thought of what is suggested — but it takes too long to answer them, because they are usually in the class ‘try 10:20:30’. (“Seeking New Laws”, page 161 in The Character of Physical Law.) As a sometime designer (and longtime critic) of widely used computer systems, I have seen similar difficulties appear when anyone undertakes to publicly design a piece of software that may be used by many thousands of customers. (I have been on both sides of the fence, of course.) The design possibilities are endless, but the deep design problems are usually hidden beneath a mass of superfluous detail. The sheer numbers can be daunting. Even if only one customer out of a thousand feels a need to express a passionately held idea, it can take a long time to read all the mail. And it is a fact of life that many of those strong suggestions are only weakly supported by reason or evidence. Opinions are plentiful, but substantive research is time-consuming, and hence rare. A related phenomenon commonly seen with software is bike-shedding, where interlocutors focus on surface details like naming and syntax… or (come to think of it) like lock combinations. On the other hand, software is easier than quantum physics, and the population of people able to make substantial suggestions about software systems is several orders of magnitude bigger than Feynman’s circle of colleagues. My own work would be poorer without contributions — sometimes unsolicited, sometimes passionately urged on me — from the open source community. If a Nobel prize winner thought it was worthwhile to read his mail on the faint chance of learning a good idea, I am certainly not going to throw mine away. (In case anyone is still reading this, and is wondering what provoked a meditation on the quality of one’s inbox contents, I’ll simply point out that the volume has been very high, for many months, on the Lambda-Dev mailing list, where the next version of the Java language is being discussed. Bravo to those of my colleagues who are surfing that wave.) I started this note thinking there was an odd parallel between the life of the physicist and that of a software designer. On second thought, I’ll bet that is the story for anybody who works in public on something requiring special training. (And that would be pretty much anything worth doing.) In any case, Feynman saw it clearly and said it well.

    Read the article

  • JavaOne Latin America Schedule Posted

    - by reza_rahman
    The official schedule for JavaOne Latin America 2012 is now posted.  For the folks that are not yet aware, JavaOne Latin America is to be held on 4-6 December at the Transamerica Expo Center in São Paulo, Brazil.  As you would expect, there are keynotes, technical sessions, hands-on labs and demos led by Java luminaries from Brazil, Latin America and across the globe.  There's tons of good stuff on Java EE and GlassFish.  Arun Gupta will be delivering the Java technical keynote alongside the likes of Judson Althoff, Nandini Ramani, Georges Saab, Henrik Stahl, Simon Ritter and Terrence Barr.  Here are just some of the Java EE centric sessions:

        Tuesday, Dec 4, 12:15 PM   - Designing Java EE Applications in the Age of CDI (Mezanino: Sala 14)
        Wednesday, Dec 5, 5:30 PM  - Java EE 7 Platform: More Productivity and Integrated HTML (Keynote Hall)
        Thursday, Dec 6, 11:15 AM  - Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket (Mezanino: Sala 2)
        Thursday, Dec 6, 12:30 PM  - HTML5 WebSocket and Java (Mezanino: Sala 12)
        Thursday, Dec 6, 1:45 PM   - What's New in Java Message Service 2.0 (Mezanino: Sala 14)
        Thursday, Dec 6, 3:00 PM   - JAX-RS 2.0: New and Noteworthy in the RESTful Web Services API (Keynote Hall)
        Thursday, Dec 6, 4:15 PM   - Testing JavaServer Faces Applications with Arquillian and Selenium (Mezanino: Sala 13)
        Thursday, Dec 6, 4:15 PM   - Distributed Caching to Data Grids: The Past, Present, and Future of Scalable Java (Mezanino: Sala 14)

    There will also be Java EE/GlassFish demos at the DEMOgrounds.  The full schedule is posted here.  Hope to see you there!

    Read the article

  • Inline template efficiency

    - by Darryl Gove
    I like inline templates, and use them quite extensively.  Whenever I write code with them I'm always careful to check the disassembly to see that the resulting output is efficient.  Here's a potential cause of inefficiency.  Suppose we want to use the mis-named Leading Zero Detect (LZD) instruction on T4 (this instruction counts the number of leading zero bits in an integer register, so it should really be called leading zero count).  So we put together an inline template called lzd.il looking like:

        .inline lzd
          lzd %o0,%o0
        .end

    And we throw together some code that uses it:

        int lzd(int);

        int a;
        int c=0;

        int main() {
          for(a=0; a<1000; a++) {
            c=lzd(c);
          }
          return 0;
        }

    We compile the code with some amount of optimisation, and look at the resulting code:

        $ cc -O -xtarget=T4 -S lzd.c lzd.il
        $ more lzd.s
        .L77000018:
        /* 0x001c  11 */  lzd    %o0,%o0
        /* 0x0020   9 */  ld     [%i1],%i3
        /* 0x0024  11 */  st     %o0,[%i2]
        /* 0x0028   9 */  add    %i3,1,%i0
        /* 0x002c     */  cmp    %i0,999
        /* 0x0030     */  ble,pt %icc,.L77000018
        /* 0x0034     */  st     %i0,[%i1]

    What is surprising is that we're seeing a number of loads and stores in the code.  Everything could be held in registers, so why is this happening?  The problem is that the code is only inlined at the code generation stage - when the actual instructions are generated.  Earlier compiler phases see a function call.  A called function can do all kinds of nastiness to global variables (like 'a' in this code), so we need to load them from memory after the function call, and store them to memory before the function call.  Fortunately we can use a #pragma directive to tell the compiler that the routine lzd() has no side effects, meaning that it does not read or write to memory.  The directive to do that is #pragma no_side_effect(<routine name>), and it needs to be placed after the declaration of the function.  The new code looks like:

        int lzd(int);
        #pragma no_side_effect(lzd)

        int a;
        int c=0;

        int main() {
          for(a=0; a<1000; a++) {
            c=lzd(c);
          }
          return 0;
        }

    Now the loop looks much neater:

        /* 0x0014  10 */  add    %i1,1,%i1

        !   11    !  {
        !   12    !    c=lzd(c);

        /* 0x0018  12 */  lzd    %o0,%o0
        /* 0x001c  10 */  cmp    %i1,999
        /* 0x0020     */  ble,pt %icc,.L77000018
        /* 0x0024     */  nop

    Read the article

  • Groovy Refactoring in NetBeans

    - by Martin Janicek
    Hi guys, during NetBeans 7.3 feature development I spent quite a lot of time trying to bring some basic Groovy refactoring into the game.  I've implemented Find Usages and Rename refactoring for some basic constructs (class types, fields, properties, variables and methods).  It's certainly not perfect and it will definitely need a lot of fixes and improvements to get it hundred-percent reliable, but I need to start somehow :)  I would like to ask all of you to test it as much as possible and to file new tickets for the cases where it doesn't work as expected (e.g., an occurrence that should be among the usages isn't there).  It's really important for me, because I don't have a real Groovy project and thus I can test only some simple cases.  I can promise that, with your help, we can make it really useful for the next release.  Also please be aware that the current version focuses only on .groovy files.  That means it won't find any usages in .java files (and the same applies to finding usages from Java files - it won't find any Groovy usages).  I know it's not ideal but, as I said, we have to start somehow, and it wasn't possible to make it all-in-one, so the only other option was to wait for NetBeans 7.4.  I'll focus on better Java-Groovy integration in the next release (not only in refactoring, but also in navigation, code completion etc.)  BTW: I've created a new component with the surprising name "Refactoring" in our Bugzilla [1], so please put reported issues into this category.  [1] http://netbeans.org/bugzilla/buglist.cgi?product=groovy;component=Refactoring

    Read the article

  • Automatically refreshing APEX reports

    - by carstenczarski
    Refreshing a report on an application page at regular intervals is quite simple: since APEX 4.0 you don't even have to write JavaScript code for it; with an easy-to-use plugin from the APEX development team this can be set up in no time.  In this tip we go a step further: for a table that contains a column holding the time of the last change, we want to highlight the most recently changed values so that they are easier to spot.
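    A minimal sketch of the kind of report query such a tip could start from (the ORDERS table, its LAST_UPDATED column and the CSS class name are all hypothetical; the extra column would drive conditional highlighting in the report template):

        SELECT order_id,
               status,
               last_updated,
               -- Flag rows changed within the last 5 minutes (hypothetical threshold)
               CASE
                 WHEN last_updated > SYSDATE - INTERVAL '5' MINUTE
                 THEN 'recently-changed'
                 ELSE NULL
               END AS highlight_class
          FROM orders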

    Read the article

  • Free MySQL seminars in May

    - by A&C Redaktion
    In May we are running numerous MySQL seminars for you, each with a different focus: from a "scalability day", through a hands-on MySQL Enterprise workshop, to an overview of the high-availability solutions for MySQL with a real-world application example.  We would be delighted to welcome you at one of these seminars.  The individual dates and registration links can be found here.  We look forward to your participation!

    Read the article

  • Mobile or the Science of Programming Languages

    - by user12652314
    Just two things to share today.  First is some news in the mobile computing space: a pretty cool new relationship developing between DubLabs and AT&T to enable a student-centric mobile experience for our Campus Solutions customers.  Second is an interesting article, shared by a friend, on research in programming languages related to STEM education, a key story element of my project with the Americas Cup and iED, but also of our national interest.

    Read the article

  • SOA Suite 11g: Unable to start domain (Error occurred during initialization of VM)

    - by Chris Tomkins
    If you have recently installed SOA Suite, created a domain and then tried to start it, only to find it fails with the error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    the solution is to edit the file <domain home>\bin\setSOADomainEnv.cmd/sh (depending on your platform) and modify the line:

        set DEFAULT_MEM_ARGS=-Xms512m -Xmx1024m

    to something like:

        set DEFAULT_MEM_ARGS=-Xms512m -Xmx768m

    Save the file and then try to start your domain again.  Everything should now work; at least, it does on the Dell Latitude 630 laptop with 4Gb of RAM that I have.  Technorati Tags: soa suite, 11g, java, troubleshooting, problems, domain

    Read the article

  • Cloud Odyssey: A Hero's Quest Wins Two Telly Awards!

    - by Sandra Cheevers
    Cloud Odyssey: A Hero's Quest is a sci-fi movie experience that shows you the key success factors for guiding your own journey to the cloud.  The movie depicts the journey to a mysterious cloud planet as a metaphor for YOUR journey to the cloud.  And now, Cloud Odyssey: A Hero's Quest receives two Telly awards, in the categories 1) Motivational and 2) Use of Animation.  This is truly an honor, given the company of so many outstanding entries from a wide range of major players, including Disney, Coca-Cola, NBC and Discovery.  Kudos to the Cloud Odyssey team!

    Read the article

  • Managing Custom Series

    - by user702295
    Custom series should be added with a client-defined prefix, e.g. ACME Final Forecast, so that they can be identified as non-standard series.  That said, it is not always done, so beginning in v7.3.0 there is a new column called Application_Id in the Computed_Fields table.  This is the table that stores the series information.  Standard series will have a prefix similar to COMPUTED_FIELD, while a custom series will have an Application_Id value similar to 9041128B99FC454DB8E8A289E5E8F0C5.  So a SQL statement that returns the list of custom series in your database might look something like this:

        select computed_title Series_Name, application_id
          from computed_fields
         where application_id not like '%COMPUTED_FIELD%'
         order by 1;

    Read the article

  • Parleys Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Parleys.com is an e-learning platform that provides a unique experience of viewing presentations online and offline, with integrated movies and chaptering, from top-notch developer conferences and about 40 JUGs all around the world.  Stephan Janssen (the Devoxx man and Parleys webmaster) presented at the GlassFish Community Event at JavaOne 2012 and shared why they moved from Tomcat to GlassFish.  The move paid off, as GlassFish was able to handle 2000 concurrent users very easily.  Now they are also running the Devoxx CFP and registration on this updated infrastructure.  GlassFish clustering, the asadmin CLI, application versioning, and the JMS implementation are some of the features that made them a happy user.  Recently they migrated their application from Spring to Java EE 6.  This allows them to avoid being locked into proprietary frameworks and to avoid 40MB WAR file deployments.  A stateless application, JAX-RS, MongoDB and Elastic Search are their magic formula for success there.  Watch the video below showing him in full action.  More details about their infrastructure are available here.

    Read the article

  • WhatsApp Chat Messenger available for Java ME phones

    - by hinkmond
    If you like sending SMS text messages from your Java ME tech-enabled mobile phone without having to pay carrier charges, then WhatsApp Messenger is for you. See: Don't pay, Use Java ME WhatsApp Here's a quote: Free WhatsApp Messenger Download For S40 Java Phone now Available. The IM chat app whatsapp was earlier targeted on high end/cross-platform mobile phone with support for messaging exchange, SMS messages, send and receive pictures, exchange of videos and audios, share your location with your contacts etc. So, be a cheap-skate. It's OK. You're entitled. As long as you use WhatsApp and Java ME technology, that is. Hinkmond

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher: ... #define SIZE 1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR); ... Running this gave the following results: Time per iteration 0.000065606310 MB/s Time per iteration 2.709711563906 MB/s Time per iteration 0.178590114758 MB/s Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second; which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine at the profiles for the two cases. When the write() was trapping into the OS the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.

    Read the article
