Search Results

Search found 14966 results on 599 pages for 'oracle commerce'.


  • HTML Tidy for NetBeans IDE 7.4

    - by Geertjan
    The NetBeans HTML5 editor is pretty amazing; I'm working on an extensive screencast about it right now, to be published soon. One thing that has been missing is HTML Tidy integration, until now: As you can see, in this particular file, HTML Tidy finds six times more problems (OK, some of them may be false positives) than the standard NetBeans HTML hint infrastructure does. You can also run the scanner across the whole project or all projects. Only HTML files are scanned by HTML Tidy (via JTidy), and you can click on items in the window above to jump to the line. Future enhancements will include error annotations and hint integration, some of which has already been addressed in this blog over the years. Download it from here: http://plugins.netbeans.org/plugin/51066/?show=true Sources are here. Contributions are more than welcome: https://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.4/misc/HTMLTidy
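
    For a rough idea of what such a scan involves under the hood, here is a minimal, standalone sketch of driving JTidy programmatically to collect diagnostics for one HTML file; this is illustrative only, not the plugin's actual integration code (the class name and option choices are assumptions):

        import java.io.ByteArrayOutputStream;
        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.io.PrintWriter;
        import java.io.StringWriter;
        import org.w3c.tidy.Tidy;

        public class TidyScan {
            public static void main(String[] args) throws Exception {
                Tidy tidy = new Tidy();
                StringWriter report = new StringWriter();
                tidy.setErrout(new PrintWriter(report, true)); // collect Tidy's diagnostics
                tidy.setShowWarnings(true);
                tidy.setQuiet(true);                           // suppress the summary banner
                try (InputStream in = new FileInputStream(args[0])) {
                    tidy.parse(in, new ByteArrayOutputStream()); // scan only; discard the cleaned output
                }
                System.out.println(report);
                System.out.printf("errors=%d, warnings=%d%n",
                        tidy.getParseErrors(), tidy.getParseWarnings());
            }
        }

    Each reported line/column could then be mapped to an editor annotation, which is presumably what the plugin's task window does with them.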

    Read the article

  • Multi Level Security via Roles

    - by Geertjan
    I'm simulating a small scenario: users can be dragged into roles, and roles can be dragged into role groups. When a drop is made into a role group, a new role is created (WindowManager.getDefault().setRole("")). Then, when users log in, they log into a particular role; depending on that role, a different role group is assigned, which maps to a certain "role" in NetBeans Platform terms, i.e., the related level of security is applied and the related windows open.
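
    As a minimal sketch of that login step, here is roughly what the call could look like; the login handler and the role-group lookup are hypothetical, and only WindowManager.getDefault().setRole() is the actual NetBeans Platform API:

        import org.openide.windows.WindowManager;

        public final class LoginHandler {

            public void onLogin(String user) {
                // Hypothetical lookup: map the authenticated user to a role group.
                String roleGroup = lookupRoleGroup(user); // e.g. "admin", "operator", "guest"
                // Switching the role reloads the window system: only the TopComponents
                // registered for that role open, giving a per-role "security level".
                WindowManager.getDefault().setRole(roleGroup);
            }

            private String lookupRoleGroup(String user) {
                // Illustrative stub; a real application would consult its user store.
                return "admin".equals(user) ? "admin" : "guest";
            }
        }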

    Read the article

  • Context Sensitive JTable (Part 2)

    - by Geertjan
    Now, having completed part 1, let's add a popup menu to the JTable. However, the menu item in the popup menu should invoke the same Action as invoked from the toolbar button created yesterday. Add this to the constructor created yesterday:

        Collection<? extends Action> stockActions =
                Lookups.forPath("Actions/Stock").lookupAll(Action.class);
        for (Action action : stockActions) {
            popupMenu.add(new JMenuItem(action));
        }
        MouseListener popupListener = new PopupListener();
        // Add the listener to the JTable:
        table.addMouseListener(popupListener);
        // Add the listener specifically to the header:
        table.getTableHeader().addMouseListener(popupListener);

    And here's the standard popup enablement code:

        private JPopupMenu popupMenu = new JPopupMenu();

        class PopupListener extends MouseAdapter {

            @Override
            public void mousePressed(MouseEvent e) {
                showPopup(e);
            }

            @Override
            public void mouseReleased(MouseEvent e) {
                showPopup(e);
            }

            private void showPopup(MouseEvent e) {
                if (e.isPopupTrigger()) {
                    popupMenu.show(e.getComponent(), e.getX(), e.getY());
                }
            }
        }

    Read the article

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …), and invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results can cause out-of-memory exceptions on cache servers and, in some cases, on cache clients. This is especially problematic if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide), or if the exceptions cause additional memory use on the surviving servers as they take over partitions from the failed servers. It is also a particular concern for clusters that use off-heap storage (such as the NIO or Elastic Data storage options), since those options allow larger than normal cache sizes but do nothing to address the size of intermediate results or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break a larger operation up into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return only a key set, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though that may be of limited value if the result is large enough to cause near cache thrashing.
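
    Here is a minimal sketch of the PartitionedFilter approach, assuming a partitioned cache named "orders" and a placeholder query filter (both illustrative); the loop runs the query against one partition at a time, so no single request has to materialize the full result set:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.PartitionedService;
        import com.tangosol.net.partition.PartitionSet;
        import com.tangosol.util.Filter;
        import com.tangosol.util.filter.AlwaysFilter;
        import com.tangosol.util.filter.PartitionedFilter;
        import java.util.Map;
        import java.util.Set;

        public class PartitionedQuery {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("orders");   // hypothetical cache name
                PartitionedService service = (PartitionedService) cache.getCacheService();
                int partitionCount = service.getPartitionCount();

                Filter query = new AlwaysFilter();                     // stand-in for the real query filter
                for (int p = 0; p < partitionCount; p++) {
                    PartitionSet partitions = new PartitionSet(partitionCount);
                    partitions.add(p);
                    // Restrict the query to a single partition per request,
                    // bounding the intermediate result held at any one time.
                    Set<Map.Entry> entries = cache.entrySet(new PartitionedFilter(query, partitions));
                    // ... process this partition's entries, then move on ...
                }
            }
        }

    In practice you might batch several partitions per PartitionSet rather than one, trading the number of requests against the size of each intermediate result.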

    Read the article

  • enable iptables firewall on linux

    - by user13278061
    Here is a very basic set of instructions to set up a simple iptables firewall configuration on Linux (Red Hat).

    Enable the firewall
    Log in as root, then enter the following command; it launches a text GUI:

        #> setup

    First screen: choose "Firewall configuration".
    Second screen: choose "Enabled", then "Customize".
    Third screen: select your interface in "Trusted Devices", select "Allow Incoming" for "SSH", "Telnet" and "FTP" (add other ports if needed), then press "OK" (twice, then "Quit").

    At that point the firewall is enabled. You can start/stop/monitor it using:

        service iptables start/stop/status

    Change the timeout
    To change the TCP established-connection timeout:

        #> echo 120 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established

    Monitor connections in the iptables tables
    For example, if you want to track a connection established from host 152.68.65.207:

        #> cat /proc/net/ip_conntrack | grep 152.68.65.207

    Read the article

  • Social Media JSR 357 NOT approved by Executive Committee

    - by alexismp
    JSR 357 (Social Media API) has not passed its initial ballot, which means, according to the JCP rules, that it goes back to "the JSR submitter(s) who may revise the JSR and resubmit it within 14 days". Given the comments associated with the negative votes, it may be challenging for the submitters to address the concerns about the scope, which many assessed as being too wide. Standardization is a difficult task, and the JCP (the Executive Committee in fact) played its role by pointing out the challenges ahead of such a JSR as envisioned by its submitters, and thus the risk of it never completing. If anything, this proves that the JCP is working as expected. For those disappointed that Java will not get a standard "Social Media API" (for now at least), let me remind you of the recent open-sourcing of DaliCore.

    Read the article

  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us:

    - Identify whether the congestion window or the receive window are limiting factors on throughput, by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This gives us a visual sense for how often we are thwarted in our attempts to fill the pipe due to congestion control versus the peer not being able to receive any more data.
    - Identify whether slow start or congestion avoidance predominate, by comparing the overlap in the congestion window and slow start threshold distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times.
    - Identify whether the peer's receive window is too small, by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here.

        # dtrace -s tcp_window.d
        dtrace: script 'tcp_window.d' matched 10 probes
        ^C

          cwnd                                                80   10.175.96.92
                   value  ------------- Distribution ------------- count
                    1024 |                                         0
                    2048 |                                         4
                    4096 |                                         6
                    8192 |                                         18
                   16384 |                                         36
                   32768 |@                                        79
                   65536 |@                                        155
                  131072 |@                                        199
                  262144 |@@@                                      400
                  524288 |@@@@@@                                   798
                 1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             3848
                 2097152 |                                         0

          ssthresh                                            80   10.175.96.92
                   value  ------------- Distribution ------------- count
               268435456 |                                         0
               536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
              1073741824 |                                         0

          unacked                                             80   10.175.96.92
                   value  ------------- Distribution ------------- count
                      -1 |                                         0
                       0 |                                         1
                       1 |                                         0
                       2 |                                         0
                       4 |                                         0
                       8 |                                         0
                      16 |                                         0
                      32 |                                         0
                      64 |                                         0
                     128 |                                         0
                     256 |                                         3
                     512 |                                         0
                    1024 |                                         0
                    2048 |                                         4
                    4096 |                                         9
                    8192 |                                         21
                   16384 |                                         36
                   32768 |@                                        78
                   65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  5391
                  131072 |                                         0

          swnd                                                80   10.175.96.92
                   value  ------------- Distribution ------------- count
                   32768 |                                         0
                   65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
                  131072 |                                         0

    Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe:

    - Slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode.
    - Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing.
    - The congestion window distribution peaks in the 1048576-2097152 range, while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed, as the distribution of swnd values stays within the 65536-131071 range.

    So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it, neither congestion control nor the peer receive window have limited throughput.
    Here's the script:

        #!/usr/sbin/dtrace -s

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
            @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                quantize(args[3]->tcps_cwnd);
            @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] =
                quantize(args[3]->tcps_cwnd_ssthresh);
            @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] =
                quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
            @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
        }

    One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to get bumped up to the level of the slow start threshold. A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine whether particular clients show issues with their receive window.

    Read the article

  • New Java EE 6 Hands-On lab, Devoxx-approved!

    - by alexismp
    A new Java EE 6 HOL (Hands-On Lab) was successfully used yesterday at Devoxx, in a room packed with enthusiastic conference participants. This is new material which covers a lot of Java EE ground in a single document. As is the case for most GlassFish-related labs, the list of software requirements is dead simple and short: a recent JDK (6 or 7) and NetBeans 7.x ("Java EE" or "All"), which comes with GlassFish. Of course GlassFish can also be downloaded separately and used from other IDEs such as Eclipse and IntelliJ, or even Emacs. The didactic nature of the HOL document should make it useful for anyone interested in learning Java EE 6 in their own time and at their own pace. If you have feedback about the content or about GlassFish, make sure you voice your concerns (or praise) to the GlassFish Users alias as indicated in the document. Feedback will be taken into account in the form of updates to the document as well as enhancements to GlassFish (ideally in 3.1.2).

    Read the article

  • Screencast: "Unlocking the Java EE Platform with HTML5"

    - by Geertjan
    The Java EE platform aims to increase your productivity and reduce the amount of scaffolding code needed in Java enterprise applications. It encompasses a range of specifications, such as JPA, EJB, JSF, and JAX-RS. How do these specifications fit together in an application, and how do they relate to each other? And how can HTML5 be used to leverage Java EE? In this recording of a session I did last week at Oredev in Malmo, Sweden, you learn how Java EE works and how it can be integrated with HTML5 front ends, via HTML, JavaScript, and CSS.
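
    To make the glue between those specifications concrete, here is a minimal sketch of an EJB-backed JAX-RS resource that an HTML5/JavaScript front end could call and render from JSON; the Customer entity and the resource path are hypothetical:

        import java.util.List;
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        @Stateless                       // EJB provides pooling and transactions
        @Path("customers")               // exposed under the application's REST root (path is illustrative)
        public class CustomerResource {

            @PersistenceContext
            private EntityManager em;    // JPA handles the persistence

            @GET
            @Produces(MediaType.APPLICATION_JSON)
            public List<Customer> all() {
                // JAX-RS serializes the result to JSON for the HTML5/JavaScript client.
                return em.createQuery("SELECT c FROM Customer c", Customer.class)
                         .getResultList();
            }
        }

    On the client side, a plain XMLHttpRequest or fetch() call to that URL is all the HTML, JavaScript, and CSS layer needs in order to consume the service.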

    Read the article

  • Enterprise 2.0 Conference - this week!

    - by kellsey.ruppel
    We're excited to be at the Enterprise 2.0 Conference in Boston this week! We are looking forward to spending the next few days learning from Enterprise 2.0 thought-leaders and connecting with others who have an interest in social and collaborative technologies. If you are attending the conference, we encourage you to stop by booth #213, as we'd love to speak with you further about your challenges and success when it comes to social business transformation. New to social enterprise? Wondering what it means for your business? Take a look at how Bunny Inc. has transformed to become a social enterprise!

    Read the article

  • JSR updates - First Merged EC Ballots

    - by Heather VanCura
    As the second part of the JCP.Next effort, JCP 2.9 launched two weeks ago, on 13 November, and the first JCP EC ballots with the Merged EC have concluded. JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services, passed the EC Public Review Ballot and was approved by the EC -- 22 yes votes, 2 abstain, 2 did not vote -- view results. JSR 349, Bean Validation 1.1, passed the EC Public Review Ballot and was approved by the EC -- 17 yes votes, 2 abstain, 5 did not vote -- view results.

    Read the article

  • Hello Again, San Francisco

    - by Geertjan
    I've been bumping into JavaOne pilgrims all day, from the moment I got to the airport in Amsterdam. I finally got to my hotel after a pretty good flight (KLM provides great meals, which helps a lot) and a rather long wait at customs (serves me right for getting seat 66C in a plane with 68 rows). And, best of all, on Twitter I've been seeing a few remarks around the Duke's Choice Awards for this year. The references all point to the September - October issue of Java Magazine, where page 24 shows the following: So, from page 24 onwards, you can read all about the above applications. What's especially cool is that three of them are applications created on top of the NetBeans Platform: AgroSense (farm management software), MICE (NATO system for defense and battle-space operations), and the Level One Registration Tool (UN Refugee Agency software for managing refugees). Congratulations to all the winners; I'm looking forward to learning more about them all during the coming days here at the conference.

    Read the article

  • Recent EC Meetings - RIM forfeits EC seat

    - by heathervc
    Materials and minutes from the JCP EC Face-to-Face Meeting, held September 2012 in Prague, are now available on the EC Meeting Summaries page.  Topics included JCP.Next, a JCP 2.8 progress report, Inactive JSRs, and two Spec Lead presentations. In October 2011, new EC Standing Rules went into effect. The Rules include the following: "Missing five meetings in a row, or missing more than two-thirds of all meetings in any consecutive twelve-month period, results in loss of EC membership."  Last week, the JCP EC met for their October EC teleconference meeting.  RIM missed this meeting, and has now missed five meetings in a row (see the attendance chart); therefore, RIM has forfeited their EC membership. Results from the 2012 EC Elections will be available on 30 October.  The new merged EC will go into effect on 12 November.

    Read the article

  • NASCIO Award for NetBeans Platform Legislative Software

    - by Geertjan
    Two days ago, on 23 October 2012, the Kansas Legislative Information System and Services (KLISS) received the 2012 NASCIO Award for Open Government at NASCIO's annual State IT Recognition Awards. KLISS is developed by Propylon in partnership with the executive and legislative branches of the Kansas Government, and involves a complete overhaul of the Legislature's IT systems. This video gives an overview of the system: In other good news, Propylon has recently announced that it will work with the Indiana Legislative Services Agency to implement a complete Legislative Enterprise Architecture. For details on the NetBeans Platform angle to all this, in addition to watching the movie above, see Legislative Software on NetBeans. And note that Java developers with NetBeans Platform experience are welcome to apply to work at Propylon. Congratulations to the Propylon team!

    Read the article

  • New "How do I ..." series

    - by Maria Colgan
    Over the last year or so, the Optimizer development team has presented at a number of conferences, and we got a lot of questions that start with "How do I ...", where people were looking for a specific command or set of steps to fix a problem they had encountered. So we thought it would be a good idea to create a series of small posts that deal with these "How do I" questions directly. Each time we will use a simple example that shows exactly what commands and procedures should be used to address a given problem. If you have an interesting "How do I ..." question you would like to see us answer on the blog, please email me and we will do our best to answer it! Watch out for the first post in this series, which addresses the problem of "How do I deal with a third party application that has embedded hints that result in a sub-optimal execution plan in my environment?"

    Read the article

  • Enterprise 2.0 - How to

    - by me
    Today I had a very interesting lecture at the Fachhochschule Nordostschweiz "Hochschule für Wirtschaft" around how to design and implement an Enterprise 2.0 solution. We had a great (and sometimes pretty skeptical) discussion around Social Value Models. The presentation can be found below: "Enterprise 2.0 - How to" (view more presentations from Peter Reiser). Feedback is always welcome.

    Read the article

  • Test JPQL with NetBeans IDE 7.3 Tools

    - by Geertjan
    Since I pretty much messed up this part of the "Unlocking Java EE 6 Platform" demo, which I did together with PrimeFaces lead Çagatay Çivici during JavaOne 2012, I feel obliged to blog about it to clarify what should have happened! In my own defense, I only learned about this feature 15 minutes before the session started. In 7.3 Beta, it works for Java SE projects, while for Maven-based web projects, you need a post 7.3 Beta build, which is what I set up for my demo right before it started. Then I saw that the feature was there, without actually trying it out, which resulted in that part of the demo being a bit messy. And thanks to whoever it was in the audience who shouted out how to use it correctly! The screenshots below show everything related to this new feature, available from 7.3 onwards, which means you can try out your JPQL queries right within the IDE, without deploying the application (you only need to build it, since the queries are run on the compiled classes):

    [Screenshot: SQL view]

    [Screenshot: Result view for the above]

    [Screenshot: a more specific query, checking that a record with a specific name value is present in the database]

    Also note that there is code completion within the editor part of the dialog above, i.e., as you press Ctrl-Space, you'll see context-sensitive suggestions for filling out the query. All this is pretty cool stuff! It saves time, because now there's no need to deploy the app to check the database connection.
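
    For comparison, here is a minimal sketch of running the same kind of JPQL query from plain Java SE code, assuming a hypothetical Customer entity and a persistence unit named "SamplePU"; the query string is exactly what you would type into the IDE's JPQL editor:

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.Persistence;
        import javax.persistence.TypedQuery;

        public class JpqlCheck {
            public static void main(String[] args) {
                EntityManager em = Persistence
                        .createEntityManagerFactory("SamplePU")   // hypothetical persistence unit
                        .createEntityManager();
                TypedQuery<Customer> q = em.createQuery(
                        "SELECT c FROM Customer c WHERE c.name = :name", Customer.class);
                q.setParameter("name", "Acme");                   // illustrative value
                List<Customer> result = q.getResultList();        // same rows the IDE's result view shows
                System.out.println(result.size() + " record(s) found");
                em.close();
            }
        }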

    Read the article

  • Chessin's principles of RAS design

    - by user12608173
    In late 2001 I developed an internal talk on designing hardware for easier error injection, prevention, diagnosis, and correction. (This talk became the basis for my paper on injecting errors for fun and profit.) In that talk (but not in the paper), I articulated 10 principles of RAS design, which I list for you here:

    1. Protect everything
    2. Correct where you can
    3. Detect where you can't
    4. Where protection not feasible (e.g., ALUs), duplicate and compare
    5. Report everything; never throw away RAS information
    6. Allow non-destructive inspection (logging/scrubbing)
    7. Allow non-destructive alteration (injection) (that is, only change the bits you want changed, and leave everything else as is)
    8. Allow observation of all the bits as they are (logging)
    9. Allow alteration of any particular bit or combination of bits (injection)
    10. Document everything

    Of course, it isn't always feasible to follow these rules completely all the time, but I put them out there as a starting point.

    Read the article

  • The Minimalist Approach to Content Governance - Manage Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. Most people would probably agree that creating content is the enjoyable part of the content life cycle. Management, on the other hand, is generally not. This is why we thankfully have an opportunity to leverage metadata, security and other settings that have been applied or inherited in the prior parts of our governance process. In the interests of keeping this process pragmatic, there is little day-to-day activity that needs to happen here. Most of the activity that happens post creation will occur in the final "Retire" phase, in which content may be archived or removed. The Manage phase will focus on updating content and the metadata associated with it, specifically around ownership. Often the largest issues with content ownership occur when a content creator leaves an organization or changes roles within an organization.

    1. Update Content Ownership (as needed and on a quarterly basis)

    Why - Without updating content to reflect ownership changes it will be impossible to continue to meaningfully govern the content.

    How - Run reports against the metadata (creator and the department to which the creator belongs) on content items for a particular user that may have left the organization or changed job roles. If the content is without an owner, connect with the department marked as responsible. The content's ownership should be reassigned, or, if the content is no longer needed by that department for some reason, it can be archived and/or deleted. With a minimal investment it is possible to create reports that use an LDAP or Active Directory system to contrast all noted content owners against the users that exist in the directory. The delta will indicate which content will need new ownership assigned, as sketched below.

    Impact - This implicitly keeps your repository and search collection clean. A very large percentage of content that ends up no longer being useful to an organization falls into this category. This management phase (automated if possible) should be completed every quarter or as needed. The impact of actually following through with this phase is substantial and will provide end users with a better experience when browsing and searching for content.
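
    As a toy illustration of that delta report (purely hypothetical; in a real system the two sets would be populated from the repository metadata and from LDAP/Active Directory):

        import java.util.HashSet;
        import java.util.Set;

        public class OwnershipDelta {

            /** Owners recorded on content that no longer exist in the directory. */
            public static Set<String> orphanedOwners(Set<String> contentOwners,
                                                     Set<String> directoryUsers) {
                Set<String> delta = new HashSet<>(contentOwners);
                delta.removeAll(directoryUsers); // what remains needs new ownership assigned
                return delta;
            }

            public static void main(String[] args) {
                Set<String> owners = Set.of("jsmith", "mjones", "departed.user");  // from content metadata
                Set<String> directory = Set.of("jsmith", "mjones");                // from LDAP/AD
                System.out.println("Reassign content owned by: " + orphanedOwners(owners, directory));
            }
        }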

    Read the article

  • How to trace a function array argument in DTrace

    - by uejio
    I still use dtrace just about every day in my job and found that I had to print an argument to a function which was an array of strings. The array was variable length, up to about 10 items. I'm not sure if this is the right way to do it, but it seems to work and is not too painful if the array size is small.

    Here's an example. Suppose in your application, you have the following function, where n is the number of items in the array s:

        void arraytest(int n, char **s)
        {
            /* Loop thru s[0] to s[n-1] */
        }

    How do you use DTrace to print out the values of s[i], or of s[0] to s[n-1]? DTrace does not have if-then blocks or for loops, so you can't do something like:

        for i=0; i<arg0; i++
            trace arg1[i];

    It turns out that you can use probe ordering as a kind of iterator. Probes with the same name will fire in the order that they appear in the script, so I can save the value of "n" in the first probe and then use it as part of the predicate of the next probe to determine if the other probe should fire or not. So the first probe for tracing the arraytest function is:

        pid$target::arraytest:entry
        {
            self->n = arg0;
        }

    Then, if I want to print out the first few items of the array, I first check the value of n. If it's greater than the index that I want to print out, then I can print that index. For example, if I want to print out the 3rd element of the array, I would do something like:

        pid$target::arraytest:entry
        /self->n > 2/
        {
            printf("%s", stringof(arg1 + 2 * sizeof(pointer)));
        }

    Actually, that doesn't quite work, because arg1 is a pointer to an array of pointers and needs to be copied twice from the user process space to the kernel space (which is where dtrace is). Also, the sizeof(char *) is 8, but for some reason, I have to use 4, which is the sizeof(uint32_t). (I still don't know how that works.) So, the script that prints the 3rd element of the array should look like:

        pid$target::arraytest:entry
        {
            /* first, save the size of the array so that we don't get
               invalid address errors when indexing arg1+n. */
            self->n = arg0;
        }

        pid$target::arraytest:entry
        /self->n > 2/
        {
            /* print the 3rd element (index = 2) of the second arg. */
            i = 2;
            size = 4;
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

    If your array is large, then it's quite painful, since you have to write one probe for every array index. For example, here's the full script for printing the first 5 elements of the array:

        #!/usr/sbin/dtrace -s

        pid$target::arraytest:entry
        {
            /* first, save the size of the array so that we don't get
               invalid address errors when indexing arg1+n. */
            self->n = arg0;
        }

        pid$target::arraytest:entry
        /self->n > 0/
        {
            i = 0;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 1/
        {
            i = 1;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 2/
        {
            i = 2;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 3/
        {
            i = 3;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 4/
        {
            i = 4;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

    If the array is large, then your script will also have to be very long to print out all values of the array.

    Read the article

  • Linux on 8-bit

    - by nospam(at)example.com (Joerg Moellenkamp)
    This is nothing short of extremely cool from a technical perspective. The author has done it by writing an ARM emulator for an AVR controller and running Linux on top of this emulation: Linux on an 8-bit micro? This is definitely not the fastest, but I think it may be the cheapest, slowest, simplest to hand-assemble, lowest part count, and lowest-end Linux PC. The board is hand-soldered using wires; there is not even a requirement for a printed circuit board.

    Read the article
