Search Results

Search found 34162 results on 1367 pages for 'oracle products distributed document capture'.


  • A tour of the GlassFish 3.1.2 DCOM support

    - by alexismp
    While we've mentioned the DCOM support in GlassFish 3.1.2 several times before, you'll probably find Byron's DCOM blog entry to be useful if you're using Windows as a deployment platform for your GlassFish cluster. Byron discusses how DCOM is used to communicate with remote Windows nodes participating in a GlassFish cluster, what Java libraries were used to wrap around DCOM, what new asadmin commands were added (in particular validate-dcom), as well as some tips to make this all work in your specific environment. In addition to this blog post, you should consider reading the official product documentation: • Considerations for Using DCOM for Centralized Administration • Setting Up DCOM and Testing the DCOM Set Up
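    As a quick illustration, validating the DCOM setup of a remote Windows node might look like the line below (the host name and user are placeholders, and the exact option names should be checked against the validate-dcom documentation linked above):

        asadmin validate-dcom --windowsuser Administrator windows-node-1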

    Read the article

  • A Plea for Doug

    - by user12652314
    Doug was a key leader in the JCP and did all his research on sparc/solaris. That is, until we changed the free patch policy supporting academics & research post-CIC, and he and many others left in droves, entirely pissed off. Well, we're working on a fix now so that all faculty can set up a server environment, get free patch support, and innovate on our stack, from OS to virtualization to toolsets, in support of research, academic use, and teaching. Hopefully, just maybe, we can start to bring Doug and the others back home as a result.

    Read the article

  • eFX on NetBeans Platform at Silicon Valley JavaFX User Group

    - by Geertjan
    Below you can watch Sven Reimers (in addition to seeing Steve Chin and Ben Evans) presenting eFX, a JavaFX application framework on the NetBeans Platform, yesterday at the Silicon Valley JavaFX User Group. While watching, you'll learn quite a few things about the NetBeans Platform at the same time. At the end, you see a VisualVM clone written in JavaFX on the NetBeans Platform. Sven will also talk on this topic at NetBeans Day and during his sessions at JavaOne.

    Read the article

  • Merge Records in a Session Bean Using ADF Drag/Drop

    - by shantala.sankeshwar
    This article describes how to merge multiple selected records in a session bean using the ADF drag & drop feature. Below is a simple use case that shows exactly how this can be achieved. Here we have a table and a user input field. The table shows EMP records, and the input field accepts a Salary. When we drag & drop multiple records onto the input field, the selected records get updated with the new Salary provided.

    Steps:

    Let us suppose that we have created a Java EE Web Application with Entities from the Emp table. Then create an EJB Session Bean and generate a Data Control for it. Write a simple method in the session bean and expose it through the local interface:

        public void updateEmprecords(List empList, Object sal) {
            Emp emp = null;
            for (int i = 0; i < empList.size(); i++) {
                emp = em.find(Emp.class, empList.get(i));
                emp.setSal((BigDecimal) sal);
                em.merge(emp); // merge each updated entity, not just the last one
            }
        }

    Now let us create an updateEmpRecords.jspx page in the ViewController project and drop the empFindAll object as an ADF Table. Define a custom SelectionListener method for the table:

        public void selectionListener(SelectionEvent selectionEvent) {
            // This method gets the Empno of the selected record and stores it in a list
            UIXTable table = (UIXTable) selectionEvent.getComponent();
            FacesCtrlHierNodeBinding fcr =
                (FacesCtrlHierNodeBinding) table.getSelectedRowData();
            Number empNo = (Number) fcr.getAttribute("empno");
            this.getSelectedRowsList().add(empNo);
        }

    Set the table's selectedRowKeys to #{bindings.empFindAll.collectionModel.selectedRow}. Drop an inputText on the same jspx page that accepts the Salary. Now we would like to drag records from the above table and drop them on the inputText field. This is achieved by inserting a dragSource operation inside the table and a dropTarget operation inside the inputText:

        <af:dragSource discriminant="tab"/> <!-- insert this inside the table -->
        <af:inputText label="Enter Salary" id="it13" autoSubmit="true"
                      binding="#{test.deptValue}">
          <af:dropTarget dropListener="#{test.handleTableDrop}">
            <af:dataFlavor flavorClass="org.apache.myfaces.trinidad.model.RowKeySet"
                           discriminant="tab"/>
          </af:dropTarget>
          <af:convertNumber/>
        </af:inputText>

    In the above code, when the user drags & drops multiple records on the inputText, the dropListener method gets called. Go to the respective page definition file, create an updateEmprecords method action and an Execute action, then code the dropListener method:

        public DnDAction handleTableDrop(DropEvent dropEvent) {
            // Get the updateEmprecords method binding, pass parameters and execute it
            DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);
            RowKeySet droppedKeySet = dropEvent.getTransferable().getData(df);
            if (droppedKeySet != null && droppedKeySet.size() > 0) {
                DCBindingContainer bindings =
                    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding updateEmp =
                    bindings.getOperationBinding("updateEmprecords");
                updateEmp.getParamsMap().put("sal",
                    this.getDeptValue().getAttributes().get("value"));
                updateEmp.getParamsMap().put("empList", this.getSelectedRowsList());
                updateEmp.execute();
                // Re-execute the iterator to refresh the updated records
                OperationBinding executeBinding =
                    bindings.getOperationBinding("Execute");
                executeBinding.execute();
                AdfFacesContext.getCurrentInstance()
                    .addPartialTarget(dropEvent.getDragComponent());
                this.getSelectedRowsList().clear();
            }
            return DnDAction.NONE;
        }

    Run the updateEmpRecords.jspx page and enter any Salary, say '5000'. Select multiple records in the table and drop them on the Salary inputText. Note that the salary value of all the selected records gets updated to 5000.

    Technorati Tags: ADF Drag and drop, EJB Session bean, ADF table, inputText, DropEvent

    Read the article

  • Script to set NO_HEARTBEAT Flag

    - by Koppar
    The new plugin provides some flags to help debug the browser-side VM. Below are the flags:

        JPI_PLUGIN2_DEBUG=1
        JPI_PLUGIN2_VERBOSE=1

    The above two provide tracing information.

        JPI_PLUGIN2_NO_HEARTBEAT=1

    This disables the sending of heartbeat messages between the browser-side VM and the client JVM instance(s), letting the client JVM stay independent of the browser-side VM. These are to be set as system environment variables. Often we need to set them using scripts. Here is a small script to achieve that:

    ----------- set_jpi_flags.vbs -----------
    Set WshShell = WScript.CreateObject("WScript.Shell")
    Set WshEnv = WshShell.Environment("USER")
    WshEnv("JPI_PLUGIN2_NO_HEARTBEAT") = "1"
    WshEnv("JPI_PLUGIN2_DEBUG") = "1"
    WshEnv("JPI_PLUGIN2_VERBOSE") = "1"
    ' Display the value of the NO_HEARTBEAT variable
    WScript.Echo WshEnv("JPI_PLUGIN2_NO_HEARTBEAT")
    ------------------------------------------
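    To apply the variables, run the script from a command prompt. Note that only processes started after the change will pick up the new USER environment variables:

        cscript //nologo set_jpi_flags.vbs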

    Read the article

  • What more can a Business Service do?

    - by Rajesh Sharma
    Business services can be accessed from outside the application via an XAI inbound service, or from within the application via scripting, Java, or info zones. Below is an example of what you can do with a business service wrapping an info zone.

    Generally, a business service is specific to a page service program which references a maintenance object; that means one business service = one service program = one maintenance object. There have been quite a few threads in the forum around this topic where the business service is misconstrued to perform services only on a single object, e.g. only for CILCSVAP - SA Page Maintenance, CILCPRMP - Premise Page Maintenance, CILCACCP - Account Page Maintenance, etc.

    So what do you do when you want to retrieve some "non-persistent" field or information associated with some object/entity? Consider a few business requirements:
    • Retrieve all the field activities associated with an account.
    • Retrieve the last bill date for an account.
    • Retrieve the next bill date for an account.

    It can be as simple as described below. For this post, we'll use the first scenario - retrieve all the field activities associated with an account. To achieve this we'll do the following:

    Step 1: Define an info zone

    (A basic zone of type F1-DE-SINGLE - Info Data Explorer - Single SQL has been used; you can use F1-DE - Info Data Explorer - Multiple SQLs for more complex scenarios.)

    Parameter               | Value To Enter
    User Filter 1           | F1
    Initial Display Columns | C1 C2 C3
    SQL Condition           | F1
    SQL Statement           | SELECT FA_ID, FA_STATUS_FLG, CRE_DTTM
                            |   FROM CI_FA
                            |  WHERE SP_ID IN
                            |    (SELECT SP_ID FROM CI_SA_SP
                            |      WHERE SA_ID IN
                            |        (SELECT SA_ID FROM CI_SA
                            |          WHERE ACCT_ID = :F1))
    Column 1                | source=SQLCOL sqlcol=FA_ID
    Column 2                | source=SQLCOL sqlcol=FA_STATUS_FLG
    Column 3                | type=TIME source=SQLCOL sqlcol=CRE_DTTM order=DESC

    Note: The zone code specified was 'CM_ACCTFA'.

    Step 2: Define a business service

    Create a business service linked to 'Service Name' FWLZDEXP - Data Explorer. The schema will look like this:

    <schema>
      <zoneCd mapField="ZONE_CD" default="CM_ACCTFA"/>
      <accountId mapField="F1_VALUE"/>
      <rowCount mapField="ROW_CNT"/>
      <result type="group">
        <selectList type="list" mapList="DE">
          <faId mapField="COL_VALUE">
            <row mapList="DE_VAL">
              <SEQNO is="1"/>
            </row>
          </faId>
          <status mapField="COL_VALUE">
            <row mapList="DE_VAL">
              <SEQNO is="2"/>
            </row>
          </status>
          <createdDateTime mapField="COL_VALUE">
            <row mapList="DE_VAL">
              <SEQNO is="3"/>
            </row>
          </createdDateTime>
        </selectList>
      </result>
    </schema>

    What's next? As mentioned above, you can invoke this business service from an outside application via an XAI inbound service, or call it from within a script.

    Step 3: Create an XAI inbound service for the business service created above

    Step 4: Test the inbound service

    Go to XAI Submission and test the newly created service:

    <RXS_AccountFA>
      <accountId>5922116763</accountId>
    </RXS_AccountFA>
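    To give an idea of the client side, here is a minimal Java sketch that posts the above request to the XAI servlet over plain HTTP (the host, port, and servlet path are illustrative placeholders for your environment):

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.Scanner;

        public class XaiSubmissionDemo {
            public static void main(String[] args) throws Exception {
                String request =
                    "<RXS_AccountFA>" +
                    "  <accountId>5922116763</accountId>" +
                    "</RXS_AccountFA>";

                // XAI servlet URL is environment-specific; this one is illustrative only
                URL url = new URL("http://ouaf-host:6500/XAIApp/xaiserver");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "text/xml");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(request.getBytes(StandardCharsets.UTF_8));
                }
                try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
                    while (in.hasNextLine()) {
                        System.out.println(in.nextLine()); // field activities for the account
                    }
                }
            }
        }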

    Read the article

  • How to save Word documents as HTML to be viewed in Firefox

    - by private_meta
    I need to save a Word document as HTML. It has some background images, other images, text, ... It opens correctly in Internet Explorer, but how can I save a Word doc as HTML so that Firefox and other current browsers render it correctly? All images are missing in the document. I looked through the generated HTML document, but the paths to the images appear to be correct. Any idea? Answers like "Don't save docs as HTML" won't be helpful here. Edit: To make myself clear, the normal "Save as HTML" doesn't cut it; the result is broken in any browser other than Internet Explorer. Edit 2: What I'm using is Word 2010 and Firefox 4. I also tried rendering it in the latest Chrome version, which failed as well. I used different compatibility settings for saving as HTML; it did not help.

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system. This is actually no different than in 10g and previous versions of UCM, when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured; if they aren't followed, you may see some unwanted behavior. This blog post will go into the details of how exactly the file systems should be split and what options are required. [Read More]

    Read the article

  • Java DB talks at JavaOne 2012

    - by kah
    It's soon time for JavaOne again in San Francisco, and Java DB is represented this year too. Dag Wanvik will give an introductory talk on Java DB on Tuesday, October 2 at 10:00: CON5141 - Java DB in JDK 7: A Free, Feature-Rich, Embeddable SQL Database Rick Hillegas and Noel Poore will discuss how to use Java DB on embedded devices in their talk on Thursday, October 4 at 14:00: CON6684 - Data Storage for Embedded Middleware Mark your calendars! :)

    Read the article

  • Word 2007 Question

    - by Lijo
    Hi Team, While preparing a Word 2007 document, I made a mistake. (Needless to say, I don't have any other copy of the document.) While formatting (as a try) I applied the style "Apply Style to Body to match selection". This caused the document to go into a totally wrong format - having numbers even in tables. Have you ever faced this? Could you please tell me how to correct it? Hope you would be kind enough to answer this even though it is not strictly technical. Thanks, Lijo

    Read the article

  • JMS: Specifying the Message Paging Directory on WebLogic Server

    - by adejuanc
    There are two ways to configure or modify the paging directory; here are the examples:

    1.- Via the config.xml file, by adding the <paging-directory> element inside the <jms-server> stanza:

    <jms-server>
      <name>JMSServerMS1</name>
      <target>MS1</target>
      <persistent-store xsi:nil="true"></persistent-store>
      <hosting-temporary-destinations>true</hosting-temporary-destinations>
      <temporary-template-resource xsi:nil="true"></temporary-template-resource>
      <temporary-template-name xsi:nil="true"></temporary-template-name>
      <message-buffer-size>-1</message-buffer-size>
      <paging-directory>C:\temp</paging-directory>
      <paging-file-locking-enabled>true</paging-file-locking-enabled>
      <expiration-scan-interval>30</expiration-scan-interval>
    </jms-server>

    -------------------------------------------------------

    2.- Via WLST (the WebLogic Scripting Tool):

    connect('weblogic', 'password', 't3://localhost:7001')  # credentials/URL are environment-specific
    edit()
    startEdit()
    cd('/JMSServers/JMSServerMS1')  # JMS servers live under /JMSServers in the edit tree
    cmo.setPagingDirectory('C:\\temp')
    save()
    activate()

    Read the article

  • Duke at JavaOne

    - by Tori Wieldt
    A living, life-size Duke is a popular feature at every JavaOne developer conference. One of the highlights for attendees is to meet Duke "in person" and get their picture taken. It's fun to show to your friends... and try to explain why you are standing next to a tooth.* While Duke refused any interviews, I found a slip of paper stuck to Duke's foot:

    If you wonder why I give 100%,
    The community deserves no less.
    Neither JavaOne.
    They both deserve the best!

    So much of the world we enjoy today,
    and the places we're sure to advance
    is due to engineering brilliance
    and gives our species that chance.

    So when I dance and give it my all
    as Duke, to rally some cheer,
    the honor and the privilege
    makes me smile, ear to ear.

    *Duke was designed to represent a "software agent" that performed tasks for the user. In 2006, Duke was officially open sourced under a BSD license. Developers and designers can play around with Duke and have access to Duke's graphical specifications through a java.net project at http://duke.kenai.com. JavaOne attendees can find Duke in the Zone all week.

    Read the article

  • Recent JSR Updates - JSR 356, 357, 355, 349, 236

    - by heathervc
    JSR 357, Social Media API, was not approved by the SE/EE EC to continue development in the JCP program. JSR 356, Java API for WebSocket, was approved by the SE/EE EC to continue development in the JCP program. JSR 355, JCP Executive Committee Merge, published an Early Draft Review; this review closes 27 April. You can read more about JSR 355 here. JSR 349, Bean Validation 1.1, published an Early Draft Review; this review closes 27 April. JSR 236, Concurrency Utilities for Java EE, has updated the JSR page and moved to JCP version 2.8.

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mapping, 2) a background thread to auto-commit long-running transactions, and 3) enhancements in binlog performance. Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support for multiple table mapping

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Thus, being able to support access to different tables at run time, and having different mappings for different connections, becomes a very desirable feature. In this GA release, we allow users to do both. We will discuss the key concepts and key steps in using this feature.

    1) "Mapping name" in the "get" and "set" commands

    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key"; the prefix "@@" is required for signaling a mapped table change. The key and the "mapping name" are separated by a configurable delimiter; by default, it is ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user does:

        get @@setup_3.key_a (this will also output the value corresponding to key "key_a")

    or simply

        get @@setup_3

    Once this is done, this connection switches to the "my_database/my_demo" table until another table mapping switch is requested, so it can continue issuing regular commands like:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).
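    The same switch works from any memcached client. Here is a minimal Java sketch using the third-party spymemcached library (any memcached client will do; the host, port, and key values below are illustrative, with the plugin listening on the default memcached port 11211):

        import java.net.InetSocketAddress;
        import net.spy.memcached.MemcachedClient;

        public class MappingSwitchDemo {
            public static void main(String[] args) throws Exception {
                // Connect to the memcached port of the MySQL server
                MemcachedClient client =
                        new MemcachedClient(new InetSocketAddress("localhost", 11211));

                // "@@setup_3." switches this connection to my_database/my_demo
                System.out.println(client.get("@@setup_3.key_a"));

                // Later commands on this connection stay on my_database/my_demo
                client.set("key_c", 0, "0000007").get(); // block until the write completes
                System.out.println(client.get("key_b"));

                client.shutdown();
            }
        }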
    2) Delimiter

    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

        INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping

    Once we have multiple table mappings, there should always be a "default" mapping. For this, we decided that if there exists a mapping name of "default", then it is chosen as the default mapping. Otherwise, the first row of the containers table is chosen as the default setting. Please note, user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) bind command

    In addition, we also extended the protocol and added a bind command. Its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches this connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature, you can now directly access different tables from different sessions. And even within a single connection, you can query different tables.

    Background thread to auto-commit long-running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations, and commit them only after a certain number of calls. The "batch" size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, certain row locks are held for an extended period of time, which might reduce concurrency.

    To deal with this, we introduced a background thread that "auto-commits" transactions if they have been idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than a certain limit and is not being used, the thread commits it. This limit is configurable via "innodb_api_bk_commit_interval". Its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds.

    With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long-running transactions, and thus further increases concurrency.

    Enhancement in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation.
    This required us to always set the "batch" size to 1 when the binlog is on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" are configured to. This put a big restriction on our capability to scale, and there is also quite a bit of overhead in creating and destroying such constructs that bogs performance down.

    With this release, we made the necessary changes to keep the MySQL constructs as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale the write/read performance. Although there is still overhead that prevents InnoDB Memcached from performing as fast as when the binlog is turned off, it is much better than the previous release. And we continue to optimize the solution in this area to improve performance as much as possible.

    Performance Study

    Amerandra of our System QA team conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, with some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB memory, an Intel Xeon 2.0 GHz CPU (x86_64, 2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on Set operations

    Connections | 5.6.7-RC Memcached plugin, Set (QPS, memcached-threads=8***) | 5.6.7-RC*, Set (QPS)** | X faster
    8           | 30,000  | 5,600  | 5.36
    32          | 59,000  | 13,000 | 4.54
    128         | 68,000  | 8,000  | 8.50
    512         | 63,000  | 6,800  | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented through InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted; if it does exist, the "value" field of the matching row (by key) is updated. So for the above comparison, the SQL side runs an equivalent precompiled stored procedure.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

    Connections | 5.6.7-RC Memcached plugin, Get (QPS, memcached-threads=8) | 5.6.7-RC, Get (QPS) | X faster
    8           | 42,000  | 27,000 | 1.56
    32          | 101,000 | 55,000 | 1.83
    128         | 117,000 | 52,000 | 2.25
    512         | 109,000 | 52,000 | 2.10

    With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread to commit long-running idle transactions, thus allowing users to configure large batch writes/reads without worrying about a large number of rows being locked or not being able to see (uncommitted) data. We also greatly enhanced the performance when the binlog is enabled. We will continue making efforts in both performance enhancement and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs.

    Jimmy Yang, September 29, 2012

    Read the article

  • libjpcap.so on Ubuntu for AT&T ARO

    - by Geertjan
    I now have AT&T ARO also running on Ubuntu, in addition to the Windows scenario I blogged about earlier. I managed to get it up and running thanks again to Doug Sillars, who pointed me here: http://developer.att.com/developer/forward.jsp?passedItemId=14100207&passedItemId=14100207 My plan is to make a screencast soon on HOW to port something like ARO to a plugin for NetBeans IDE, i.e., as an example of how to do something similar yourself, as a follow-up to Five Simple Ways to Extend NetBeans IDE. Thanks again, Doug.

    Read the article

  • Servlet 3.1, Expression Language 3.0, Bean Validation 1.1, Admin Console Replay: Java EE 7 Launch Webinar Technical Breakouts on YouTube

    - by arungupta
    As stated previously (here, here, here, and here), the On-Demand Replay of the Java EE 7 Launch Webinar is already available. You can watch the entire Strategy and Technical Keynote there, and all other Technical Breakout sessions as well. We are releasing the final set of Technical Breakout sessions on the GlassFishVideos YouTube channel as well. In this series, we are releasing Servlet 3.1, Expression Language 3.0, Bean Validation 1.1, and Admin Console. Here's the Servlet 3.1 session: Here's the Expression Language 3.0 session: Here's the Bean Validation 1.1 session: And finally the Admin Console session: Enjoy watching all of them together in a consolidated playlist: And don't forget to download the Java EE 7 SDK and try the numerous bundled samples.

    Read the article

  • RPi and Java Embedded GPIO: Sensor Hardware for Java Enabled Interface

    - by hinkmond
    Now here's the hardware you'll need to make a Java app interface with a static charge sensor connected to your Raspberry Pi via the GPIO port. It means another Fry's run, of course. That's not too bad during Christmas, since you can browse all the gadgets and toys while doing your shopping for sensor hardware for your RPi. Here's your shopping list:

    1 - NTE312 JFET N-channel transistor (this is in place of the MPF-102)
    1 - Set of jumper wires
    1 - LED
    1 - 300 ohm resistor
    1 - Set of header pins

    Grab all that from Fry's or your local hobby electronics shop and come back here for how to connect it together. Oh, and don't go too crazy buying all the other electronic toys and gadgets that catch your eye in the holiday displays at the store. Hinkmond
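    To give you an idea of where this is heading, here's a minimal Java sketch of the read side, assuming the sensor output is wired to GPIO 7 and read through the Linux sysfs interface (both the pin number and the access method are assumptions for illustration; the actual wiring comes in a follow-up post, and the script needs root privileges to export the pin):

        import java.io.FileWriter;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class StaticChargeSensor {
            private static final String GPIO = "/sys/class/gpio";
            private static final String PIN = "7"; // assumed pin; match your wiring

            public static void main(String[] args) throws IOException, InterruptedException {
                // Export the pin and configure it as an input
                try (FileWriter export = new FileWriter(GPIO + "/export")) {
                    export.write(PIN);
                }
                try (FileWriter dir = new FileWriter(GPIO + "/gpio" + PIN + "/direction")) {
                    dir.write("in");
                }
                // Poll the pin once a second; the JFET pulls it high near a static charge
                for (int i = 0; i < 10; i++) {
                    String value = new String(Files.readAllBytes(
                            Paths.get(GPIO + "/gpio" + PIN + "/value"))).trim();
                    System.out.println("charge detected: " + value.equals("1"));
                    Thread.sleep(1000);
                }
            }
        }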

    Read the article

  • Contribute to GlassFish in Five Different Ways

    - by arungupta
    GlassFish has a lot to offer, from Java EE 6 compliance, HA & clustering, RESTful administration, and IDE integration to many other features. However, a recent blog by Markus, a GlassFish Champion, said something different: Ask not what GlassFish can do for you, but ask what you can do for GlassFish! Markus explained how you can easily contribute to GlassFish without being a programming genius. The preparatory steps are simple:

    • First of all: Don't be afraid!
    • Prepare yourself - Get up to speed!

    And then specific suggestions with cross-referenced documents:

    • Review, Suggest and Add Documentation!
    • Help Others - be a community hero!
    • Find and File Bugs on Releases!
    • Test-drive Promoted Builds and Release Candidates!
    • Work with Code! Get things done!

    Are you ready to contribute to GlassFish? Read more details in Markus's blog.

    Read the article

  • SSAS: Utility to check you have the correct data types and sizes in your cube definition

    - by DrJohn
    This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube's data source view with the data types/sizes of the corresponding dimensional attributes.

    Why is this important? Well, when creating named queries in a cube's data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS. This is particularly important when your cube is based on an Oracle data source or uses custom SQL queries rather than views in the relational database. The problem with BIDS is that if you change the underlying SQL query, the size of the data type in the dimension does not update automatically. This then causes problems during deployment, whereby processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute.

    In particular, if you use some of the string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10-character string you expect suddenly turns into an 8,000-character monster. For example, the SQL Server function REPLACE returns a column with a width of 8,000 characters. So if you use this function in the named query in your DSV, you will get a column width of 8,000 characters. Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field.

    Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing, as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size.

    Introducing the CheckCubeDataTypes Utility

    The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube's data source view. It then generates an Excel CSV file which contains all this metadata, along with a flag indicating if there is a mismatch between the DSV and the dimensional attribute. Note that the app not only checks all the attribute keys but also the name and value columns for each attribute.

    Another benefit of having the metadata held in a CSV text file is that you can place the file under source code control. This allows you to compare the metadata of the previous cube release with your new release to highlight problems introduced by new development.

    You can download the C# source code from here: CheckCubeDataTypes.zip

    A typical example of the output Excel CSV file is shown below - note that the last column shows a data size mismatch by TRUE appearing in the column.

    Read the article

  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction

    In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '--incremental-base' was introduced. Using this option, a user can take an incremental backup without specifying the '--start-lsn' option. A description of this option can be found here. Instead of '--start-lsn', the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB extracts the end LSN of this backup from the mysql.backup_history table as well as from the backup_variables.txt file (for verification) and uses it as the start LSN of the incremental backup.

    Because of popular demand, in MEB 3.7.1 the '--incremental-base' option has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '--incremental-base' option. With the new option, MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup.

    Details

    A new prefix 'history:' has been introduced for the --incremental-base option, and currently the only permissible value is the string "last_backup". So using the new option, an incremental backup can be taken with the following command:

        $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

    When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at that backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows, where 'BD' is the backup destination of the last successful backup in the mysql.backup_history table and 'BHT' is the mysql.backup_history table:

        if can_read_files_at_BD:
            if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:
                continue_with_backup()
            else:
                return_with_error()
        else:
            continue_with_backup()

    Advantages

    Apart from ease of use, an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '--with-timestamp' option along with the new option. For example, the following command

        $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

    can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo.

    Limitations

    The option '--incremental-base=history:last_backup':
    • should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say, different partial backups at multiple locations);
    • should not be used after any temporary or experimental backups performed on the server (which were successful!);
    • needs to be used with precaution, since any intermediate successful backup taken without the --no-connection option will be used as the base backup for the next incremental backup;
    • will give an error in case a valid backup exists at the location of the last successful backup whose end LSN is different from that of the last successful backup found in the backup_history table.

    Read the article

  • Warnings When Undo Isn't Possible

    - by ultan o'broin
    Enjoyed this post, Never Use a Warning When you Mean Undo, by Aza Raskin. It makes sense never to warn users if an undo option is possible. The examples given are from the web space. Here's the conclusion: Warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: "Never use a warning when you mean undo." And when a user is deleting their work, you always mean undo. However, in enterprise apps you may find that an undo option isn't technically possible or desirable. Objects may be shared, part of a flow elsewhere, or undoing something committed to the database (a rollback, I guess) may not be feasible if it becomes locked by another process. Plus, what constitutes user ownership of objects isn't always clear to users. The implications of delete (and other) actions need to be clearly communicated in advance. Really, warnings are important in the enterprise space. Data has a very high value, and users can perform a wide variety of actions that may risk that data, not always within the application itself (at browser level, for example). That said, throwing warnings all over the place when an undo option is possible is annoying. Instead, treat warnings with respect. When no undo option is possible, use warning messages to communicate potentially dangerous or irrecoverable actions, or the downstream consequences of user actions on the process or task flow. Force the user to respond to a warning message by using a modal dialog with clearly labeled action buttons. Here's a couple of examples. A great article that got me thinking. Let's see more like that. Let's not forget there are more types of messages than just error messages. User assistance and user experience professionals need to understand when best to use confirmation, information, and warning types too!
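    To illustrate that last point about modal dialogs, here is a minimal Java Swing sketch (the order/line-item scenario is invented for the example) of a warning whose buttons spell out the action instead of a bare OK/Cancel:

        import javax.swing.JOptionPane;

        public class DeleteWarningDemo {
            public static void main(String[] args) {
                // Modal warning with explicit, verb-based action buttons
                Object[] options = {"Delete Order", "Keep Order"};
                int choice = JOptionPane.showOptionDialog(
                        null,
                        "Deleting this order also removes its 12 line items.\n"
                                + "This action cannot be undone.",
                        "Delete Order?",
                        JOptionPane.YES_NO_OPTION,
                        JOptionPane.WARNING_MESSAGE,
                        null,        // no custom icon
                        options,     // button labels state the consequences
                        options[1]); // default to the safe choice

                if (choice == 0) {
                    System.out.println("Order deleted.");
                } else {
                    System.out.println("Delete cancelled.");
                }
            }
        }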

    Read the article
