Search Results

Search found 13501 results on 541 pages for 'dimensional model'.


  • How-to populate different select list content per table row

    - by frank.nimphius
    A frequent requirement posted on the OTN forum is to render the cells of a table column using instances of af:selectOneChoice, with each af:selectOneChoice instance showing different list values. To implement this use case, the select list of the table column is populated dynamically from a managed bean for each row. The table's currently rendered row object is accessible in the managed bean using the #{row} expression, where "row" is the value added to the table's var property. <af:table var="row">   ...   <af:column ...>     <af:selectOneChoice ...>         <f:selectItems value="#{browseBean.items}"/>     </af:selectOneChoice>   </af:column> </af:table> The browseBean managed bean referenced in the code snippet above has setItems and getItems methods defined that are accessible from EL using the #{browseBean.items} expression. When the table renders, the var property variable - the #{row} reference - is filled with the data object displayed in the currently rendered table row. The managed bean getItems method returns a List<SelectItem>, which is the model format expected by the f:selectItems tag to populate the af:selectOneChoice list. public void setItems(ArrayList<SelectItem> items) {} //this method is executed for each table row public ArrayList<SelectItem> getItems() {   FacesContext fctx = FacesContext.getCurrentInstance();   ELContext elctx = fctx.getELContext();   ExpressionFactory efactory =          fctx.getApplication().getExpressionFactory();          ValueExpression ve =          efactory.createValueExpression(elctx, "#{row}", Object.class);      Row rw = (Row) ve.getValue(elctx);         //use one of the row attributes to determine which list to query and   //show in the current af:selectOneChoice list  // ...  ArrayList<SelectItem> alsi = new ArrayList<SelectItem>();  for( ... ){      SelectItem item = new SelectItem();        item.setLabel(...);        item.setValue(...);        alsi.add(item);   }   return alsi;} For better performance, the ADF Faces table stamps its data rows. Stamping means that the cell renderer component - af:selectOneChoice in this example - is instantiated once for the column and then repeatedly used to display the cell data for individual table rows. This, however, means that you cannot refresh a single select one choice component in a table to change its list values. Instead, the whole table needs to be refreshed, rerunning the managed bean list query. Be aware that having individual list values per table row is an expensive operation that should be used only on small tables, for Business Services with low-latency data fetching (e.g. ADF Business Components and EJB), and with server-side caching strategies for the queried data (e.g. storing queried list data in a managed bean in session scope).
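    As a sketch of the caching advice above, the queried list data could be kept in a session-scoped managed bean and reused across row renderings. The bean and method names here are illustrative assumptions, not from the original post; the bean would be registered in session scope (for example in adfc-config.xml or faces-config.xml):

        // Illustrative sketch of the session-scope caching strategy mentioned above.
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import javax.faces.model.SelectItem;

        public class ListCacheBean {

            // key: the row attribute value that drives the list; value: the list queried for it
            private final Map<String, List<SelectItem>> cache =
                    new HashMap<String, List<SelectItem>>();

            public List<SelectItem> getItems(String listKey) {
                List<SelectItem> items = cache.get(listKey);
                if (items == null) {
                    items = queryItemsFor(listKey); // hit the Business Service only once per key
                    cache.put(listKey, items);
                }
                return items;
            }

            private List<SelectItem> queryItemsFor(String listKey) {
                // query ADF Business Components / EJB here and map each row to a SelectItem
                return new ArrayList<SelectItem>();
            }
        }

    The row-level getItems method shown in the post would then delegate to this bean instead of requerying the Business Service on every table refresh.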

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I’ve found a bit annoying. Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I’m not responsible. No warranties or guarantees of any sort. This info is provided “as is”. By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it’s a good idea to default to 9_1. To do this you could add in new HLSL files via “Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)” and then edit the shader files’ properties to set them manually to use 9_1 via “Properties->HLSL Compiler->General->Shader Model”. This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you’d wind up with either rejection or angry “this doesn’t work on my computer! ripoff!” messages. There’s another option though. In “Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader” (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a “VertexShader.vstemplate” file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.: <CustomParameter Name="$ShaderType$" Value="Vertex"/> On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this: <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/> such that we now have:     <CustomParameters>       <CustomParameter Name="$ShaderType$" Value="Vertex"/>       <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>     </CustomParameters> You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the “PixelShader” directory. Edit the “PixelShader.vstemplate” file and make the same change (note that this time $ShaderType$ is “Pixel” not “Vertex”; you shouldn’t be changing that line anyway, but if you were to just copy and replace the above four lines then you would wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result). Once you’ve added the $ShaderModel$ line to “PixelShader.vstemplate” and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don’t support any of the other shader types, those are set to default to their appropriate minimums already (Compute and Geometry are set to “4.0” and Domain and Hull are set to “5.0”, which are their respective minimums; note that not all 4.0 cards support Compute shaders, as they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware).
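    For reference, after making the same edit the CustomParameters block in “PixelShader.vstemplate” should presumably end up looking like the following (the only difference from the vertex template is the existing Pixel value, which you leave alone):

        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Pixel"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>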
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Silverlight Relay Commands

    - by George Evjen
    I am fairly new at Silverlight development and I usually have an issue that needs research every day, which I enjoy, since I like the idea of going into a day knowing that I am going to learn something new. The issue that I am currently working on centers around relay commands. I have a pretty good handle on Relay Commands and how we use them within our applications. <Button Command="{Binding ButtonCommand}" CommandParameter="NewRecruit" Content="New Recruit" /> Here in our xaml we have a button. The button has a Command and a CommandParameter. The command binds to the ButtonCommand that we have in our ViewModel: RelayCommand _buttonCommand;         /// <summary>         /// Gets the button command.         /// </summary>         /// <value>The button command.</value>         public RelayCommand ButtonCommand         {             get             {                 if (_buttonCommand == null)                 {                     _buttonCommand = new RelayCommand(                         x => x != null && x.ToString().Length > 0 && CheckCommandAvailable(x.ToString()),                         x => ExecuteCommand(x.ToString()));                 }                 return _buttonCommand;             }         }   In our relay command we then do some checks with a lambda expression. We check if the command parameter is null, if its length is greater than 0, and we have a CheckCommandAvailable method that will tell us if the button is even enabled. After we check on these three items we then pass the command parameter to an action method. This is all pretty straightforward; the issue that we solved a few days ago centered around a control that needed to use a Relay Command while being a nested control with a different DataContext. The example below illustrates how we handled this scenario. In our xaml usercontrol we had to give this control a name. <Controls3:RadTileViewItem x:Class="RecruitStatusTileView"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"      xmlns:Controls1="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"      xmlns:Controls2="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Input"      xmlns:Controls3="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation"      mc:Ignorable="d" d:DesignHeight="400" d:DesignWidth="800" Header="{Binding Title,Mode=TwoWay}" MinimizedHeight="100"                             x:Name="StatusView"> Here we are using a Telerik RadTileViewItem.
    We set the name of this control to “StatusView”. In our button control we set our command parameters and commands differently than in the example above. <HyperlinkButton Content="{Binding BigBoardButtonText, Mode=TwoWay}" CommandParameter="{Binding 'Position.PositionName'}" Command="{Binding ElementName=StatusView, Path=DataContext.BigBoardCommand, Mode=TwoWay}" /> This hyperlink button lives in a ListBox control, and this listbox has an ItemsSource of PositionSelectors. The CommandParameter is binding to the Position.PositionName property of that PositionSelectors object. This again is pretty straightforward. What gets a bit tricky is the Command property in the hyperlink. It is binding to the element name we created in the user control (StatusView). Because this hyperlink is in a listbox and is in the item template, it doesn’t have a direct handle on the DataContext that the RadTileViewItem has, so we have to make sure it does. We do that by binding to the element name StatusView and then setting the path to DataContext.BigBoardCommand. BigBoardCommand is the name of the RelayCommand in the view model. private RelayCommand _bigBoardCommand = null;         /// <summary>         /// Gets the big board command.         /// </summary>         /// <value>The big board command.</value>         public RelayCommand BigBoardCommand         {             get             {                 if (_bigBoardCommand == null)                 {                     _bigBoardCommand = new RelayCommand(x => true, x => AddToBigBoard(x.ToString()));                 }                 return _bigBoardCommand;             }         } From there we check for true again and then call the action, passing in the parameter that we had as the command parameter. What we are working on now is a bit trickier than this second example. In the above example we are only creating this TileViewItem with the name “StatusView” once. In another part of our application we are generating multiple TileViewItems, so we cannot set the name in the control, as we can't have multiple controls with the same name. When we run the application we get an error that reads that the value is out of expected range. My searching has led me to think we cannot have multiple controls with the same name. This is today’s problem and I'll post the solution to this once it is found.
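    The post uses a RelayCommand class without showing its implementation. A minimal sketch consistent with how it is used above (note the constructor order in the post: the can-execute predicate first, then the execute action) might look like this; the real class in the project may differ:

        using System;
        using System.Windows.Input;

        public class RelayCommand : ICommand
        {
            private readonly Predicate<object> _canExecute;
            private readonly Action<object> _execute;

            public RelayCommand(Predicate<object> canExecute, Action<object> execute)
            {
                _canExecute = canExecute;
                _execute = execute;
            }

            public event EventHandler CanExecuteChanged;

            public bool CanExecute(object parameter)
            {
                // A null predicate means the command is always available
                return _canExecute == null || _canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                _execute(parameter);
            }

            // Call this when the conditions behind CanExecute change
            public void RaiseCanExecuteChanged()
            {
                EventHandler handler = CanExecuteChanged;
                if (handler != null)
                {
                    handler(this, EventArgs.Empty);
                }
            }
        }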

    Read the article

  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates, I raised the minimum version to 4.3 but still kept the same way of loading the data. Recently, I decided it's time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data. The issue with this though is that the old database file is still sitting in the iOS app, so there is no migrating to the new document. You have to basically subclass UIManagedDocument with a new model class, and override a couple of methods to do it for you. Here's a tutorial on what I did for my app TimeTag.  Step One: Add a new class file in Xcode and subclass "UIManagedDocument" Go ahead and also add a method to get the managedObjectModel out of this class. It should look like:   @interface TimeTagModel : UIManagedDocument   - (NSManagedObjectModel *)managedObjectModel;   @end   Step Two: Writing the methods in the implementation file (.m) I first added a shortcut method for the applicationDocumentsDirectory, which returns the URL of the app directory.  - (NSURL *)applicationDocumentsDirectory {     return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject]; }   The next step was to pull the managedObjectModel file itself (.momd file). In my project, it's called "minimalTime". - (NSManagedObjectModel *)managedObjectModel {     NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"];     NSURL *momURL = [NSURL fileURLWithPath:path];     NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];          return managedObjectModel; }   After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method: - (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL ofType:(NSString *)fileType modelConfiguration:(NSString *)configuration storeOptions:(NSDictionary *)storeOptions error:(NSError **)error {     // If legacy store exists, copy it to the new location     NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"];          NSFileManager* fileManager = [NSFileManager defaultManager];     if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path])     {         NSLog(@"Old db exists");         NSError* thisError = nil;         [fileManager replaceItemAtURL:storeURL withItemAtURL:legacyPersistentStoreURL backupItemName:nil options:NSFileManagerItemReplacementUsingNewMetadataOnly resultingItemURL:nil error:&thisError];     }          return [super configurePersistentStoreCoordinatorForURL:storeURL ofType:fileType modelConfiguration:configuration storeOptions:storeOptions error:error]; }   Basically what's happening above is that it checks for the minimalTime.sqlite file inside the app's documents directory on the iOS device.  If the file exists, it tells you inside the console, and then tells the fileManager to replace the storeURL (inside the method parameter) with the legacy URL. This basically gives your app access to all the existing data the user has generated (otherwise they would load into a blank app, which would be disastrous). It returns YES if successful (by calling its [super] method).
    Final step: Actually load this database Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method called loadDatabase, which looks like this: -(void)loadDatabase {     static dispatch_once_t onceToken;          // Only do this once!     dispatch_once(&onceToken, ^{         // Get the URL         // The minimalTimeDB name is just something I call it         NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"];         // Init the TimeTagModel (our custom class we wrote above) with the URL         self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url];           // Setup the undo manager if it's nil         if (self.timeTagDB.undoManager == nil){             NSUndoManager *undoManager = [[NSUndoManager alloc] init];             [self.timeTagDB setUndoManager:undoManager];         }                  // You have to actually check to see if it exists already (for some reason you can't just call "open it, and if it's not there, create it")         if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {             // If it does exist, try to open it, and if it doesn't open, let the user (or at least you) know!             [self.timeTagDB openWithCompletionHandler:^(BOOL success){                 if (!success) {                     // Handle the error.                     NSLog(@"Error opening up the database");                 }                 else{                     NSLog(@"Opened the file--it already existed");                     [self refreshData];                 }             }];         }         else {             // If it doesn't exist, you need to attempt to create it             [self.timeTagDB saveToURL:url forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success){                 if (!success) {                     // Handle the error.                     NSLog(@"Error opening up the database");                 }                 else{                     NSLog(@"Created the file--it did not exist");                     [self refreshData];                 }             }];         }     }); }   If you're curious what refreshData looks like, it sends out a NSNotification that the database has been loaded: -(void)refreshData {     NSNotification* refreshNotification = [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData object:self.timeTagDB.managedObjectContext userInfo:nil];     [[NSNotificationCenter defaultCenter] postNotification:refreshNotification];     }   The kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere that keeps track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can have access to it, and start passing it around to one another. The reason we do this as a Notification is because this is being run in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.
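    To round out the picture, a view controller interested in that notification would presumably register for it along these lines (the observer method name is a hypothetical for illustration, not taken from the original post):

        // Somewhere during view controller setup (e.g. viewDidLoad):
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(refreshAllDatabaseData:)
                                                     name:kNotificationCenterRefreshAllDatabaseData
                                                   object:nil];

        // The notification carries the managed object context as its object:
        - (void)refreshAllDatabaseData:(NSNotification *)notification
        {
            NSManagedObjectContext *context = (NSManagedObjectContext *)[notification object];
            // Keep a reference to the context and reload the UI from it
        }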

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and the CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers. Step 1: Open Visual Studio and create a new Integration Services Project. Step 2: Add a new Data Flow Task to the Control Flow window. Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component Type: pick the source type, as we will be outputting columns from our stored procedure. Step 4: Double-click the Script Component to open the editor. Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property from the property editor. In our example, we'll add the column JobID of type String. Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it. Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce Component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures along with their inputs and outputs in the product help. //Configure the connection string to your credentials String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;"; using (SalesforceConnection conn = new SalesforceConnection(connectionString)) { //Create the command to call the stored procedure CreateJob SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact")); cmd.Parameters.Add(new SalesforceParameter("Action", "insert")); //Execute CreateJob //CreateBatch requires JobID as input so we store this value for later SalesforceDataReader rdr = cmd.ExecuteReader(); String JobID = ""; while (rdr.Read()) { JobID = (String)rdr["JobID"]; } //Create the command for CreateBatch, for this example we are adding two new rows SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn); batCmd.CommandType = CommandType.StoredProcedure; batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID)); batCmd.Parameters.Add(new SalesforceParameter("Aggregate", "<Contact><Row><FirstName>Bill</FirstName>" + "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>")); //Execute CreateBatch SalesforceDataReader batRdr = batCmd.ExecuteReader(); } Step 7b: If you had specified output columns earlier, you can now add data into them using the UserComponent Output0Buffer. For example, we had set an output column called JobID of type String, so now we can set a value for it. We will modify the DataReader that contains the output of CreateJob like so:
while (rdr.Read()) { Output0Buffer.AddRow(); JobID = (String)rdr["JobID"]; Output0Buffer.JobID = JobID; } Step 8: Note: You will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and add the following using statements at the top of the class: using System.Data; using System.Data.RSSBus.Salesforce; Step 9: Once you are done editing your script, save it, and close the window. Click OK in the Script Transformation window to go back to the main pane. Step 10: If you had any outputs from the Script Component you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file. Step 11: Your project should be ready to run.

    Read the article

  • await, WhenAll, WaitAll, oh my!!

    - by cibrax
    If you are dealing with asynchronous work in .NET, you might know that the Task class has become the main driver for wrapping asynchronous calls. Although this class was officially introduced in .NET 4.0, the programming model for consuming tasks was greatly simplified in C# 5.0 in .NET 4.5 with the addition of the new async/await keywords. In a nutshell, you can use these keywords to make asynchronous calls as if they were sequential, avoiding in that way any explicit fork or callback in the code. The compiler takes care of the rest. Yesterday I was writing some code for making multiple asynchronous calls to backend services in parallel. The code looked as follows, var allResults = new List<Result>(); foreach(var provider in providers) { var results = await provider.GetResults(); allResults.AddRange(results); } return allResults; You see, I was using the await keyword thinking I was making multiple calls in parallel. Something I did not consider was the overhead this code implied after being compiled. I started an interesting discussion with some smart folks on Twitter. One of them, Tugberk Ugurlu, had the brilliant idea of actually writing some code to make a performance comparison with another approach using Task.WhenAll. There are two additional methods you can use to wait for the results of multiple calls in parallel, WhenAll and WaitAll. WhenAll creates a new task and waits for results in that new task, so it does not block the calling thread. WaitAll, on the other hand, blocks the calling thread. This is the code Tugberk initially wrote, which I modified afterwards to also show the results of WaitAll. class Program { private static Func<Stopwatch, Task>[] funcs = new Func<Stopwatch, Task>[] { async (watch) => { watch.Start(); await Task.Delay(1000); Console.WriteLine("1000 one has been completed."); }, async (watch) => { await Task.Delay(1500); Console.WriteLine("1500 one has been completed."); }, async (watch) => { await Task.Delay(2000); Console.WriteLine("2000 one has been completed."); watch.Stop(); Console.WriteLine(watch.ElapsedMilliseconds + "ms has been elapsed."); } }; static void Main(string[] args) { Console.WriteLine("Await in loop work starts..."); DoWorkAsync().ContinueWith(task => { Console.WriteLine("Parallel work starts..."); DoWorkInParallelAsync().ContinueWith(t => { Console.WriteLine("WaitAll work starts..."); WaitForAll(); }); }); Console.ReadLine(); } static async Task DoWorkAsync() { Stopwatch watch = new Stopwatch(); foreach (var func in funcs) { await func(watch); } } static async Task DoWorkInParallelAsync() { Stopwatch watch = new Stopwatch(); await Task.WhenAll(funcs[0](watch), funcs[1](watch), funcs[2](watch)); } static void WaitForAll() { Stopwatch watch = new Stopwatch(); Task.WaitAll(funcs[0](watch), funcs[1](watch), funcs[2](watch)); } } After running this code, the results were very conclusive. Await in loop work starts... 1000 one has been completed. 1500 one has been completed. 2000 one has been completed. 4532ms has been elapsed. Parallel work starts... 1000 one has been completed. 1500 one has been completed. 2000 one has been completed. 2007ms has been elapsed. WaitAll work starts... 1000 one has been completed. 1500 one has been completed. 2000 one has been completed. 2009ms has been elapsed. The await keyword in a loop does not really make the calls in parallel.
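    Applied to the original snippet, the fix would be to start all the calls first and only then await them together with Task.WhenAll. The following is a sketch that mirrors the shape of the loop above, assuming providers is an enumerable whose GetResults() returns a Task of a collection of Result (and that System.Linq is available):

        // Start every call up front so they run concurrently, then await them all at once.
        var pending = providers.Select(provider => provider.GetResults()).ToList();
        var resultSets = await Task.WhenAll(pending);

        var allResults = new List<Result>();
        foreach (var results in resultSets)
        {
            allResults.AddRange(results);
        }
        return allResults;

    Compared with awaiting inside the loop, each GetResults() call is already in flight before the first one completes, which is what produces the roughly 2000ms timing instead of the 4500ms one in the measurements above.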

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the Next Wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update about Oracle and Clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to take stock of Cloud Computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

    From Traditional IT to Clouds
    One size doesn't fit all, and for big companies that already have an IT in place, there will be parts eligible for external (public) cloud, and parts that will be required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the Cloud Computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end-users are now used to in their day-to-day life, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some "IT "is" the Business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most existing external Clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day, there are really few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve into a Private Cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different Cloud models, what type of users they are addressing, what value they bring and, most importantly, what needs to be done by the Cloud Provider and what is left over to the user.

    IaaS, PaaS, SaaS: what's provided and what needs to be done
    First of all, the Cloud Provider will have to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the above picture, providing pure IaaS leaves a lot for the end-user to cover, which is why the end-user targeted by this Cloud Service is IT people. If you want to bring more value to developers, you need to provide them a development platform ready to use, which is what PaaS stands for, by providing not only the processor power, storage and OS, but also the Database and Middleware platform. SaaS is the last mile of the Cloud, providing an application ready to use by business users, the remaining part for the end-users being configuring and specifying the application for their specific usage.
    In addition to that, there are common challenges encompassing all types of Cloud Services:
    - Security: covering all aspects, not only user management but also data flows and data privacy
    - Chargeback: measuring what is used and by whom
    - Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, the Database and Middleware for PaaS, to a full Business Application for SaaS
    - Scalability: the ability to evolve ALL the components of the Cloud Provider stack as needed
    - Availability: the ability to cover “always on” requirements
    - Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
    - Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades,...)
    - Management: providing monitoring, configuration and self-service up to the end-users

    Oracle Strategy and Clouds
    For CIOs to succeed in their Private Cloud implementation means encompassing all those aspects for each component life-cycle they selected to build their Cloud. That’s where a multi-vendor layered approach falls short in terms of efficiency. That’s the reason why Oracle focuses on taking care of all those aspects directly at the Engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We go as far as embedding software functions in hardware (at the storage and processor level,...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components that you are running today in-house are exactly the same ones we use to build Clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about Cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings just in terms of implementation project timeline, for example with Oracle SPARC SuperCluster, we have a consistent track record of having the system plugged into an existing Datacenter and ready in a week. This includes Oracle Database, OS, virtualization, Database Storage (Exadata Storage Cells in this case), Application Storage, and all network configuration. This strategy enables CIOs to very quickly build Cloud Services, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost. I invite you to discuss all those aspects, in regard to your particular context, face to face on November 28th.

    Read the article

  • How do I cleanly design a central render/animation loop?

    - by mtoast
    I'm learning some graphics programming, and am in the midst of my first such project of any substance. But, I am really struggling at the moment with how to architect it cleanly. Let me explain. To display complicated graphics in my current language of choice (JavaScript -- have you heard of it?), you have to draw graphical content onto a <canvas> element. And to do animation, you must clear the <canvas> after every frame (unless you want previous graphics to remain). Thus, most canvas-related JavaScript demos I've seen have a function like this: function render() { clearCanvas(); // draw stuff here requestAnimationFrame(render); } render, as you may surmise, encapsulates the drawing of a single frame. What a single frame contains at a specific point in time, well... that is determined by the program state. So, in order for my program to do its thing, I just need to look at the state, and decide what to render. Right? Right. But that is more complicated than it seems. My program is called "Critter Clicker". In my program, you see several cute critters bouncing around the screen. Clicking on one of them agitates it, making it bounce around even more. There is also a start screen, which says "Click to start!" prior to the critters being displayed. Here are a few of the objects I'm working with in my program: StartScreenView // represents the start screen CritterTubView // represents the area in which the critters live CritterList // a collection of all the critters Critter // a single critter model CritterView // view of a single critter Nothing too egregious with this, I think. Yet, when I set out to flesh out my render function, I get stuck, because everything I write seems utterly ugly and reminiscent of a certain popular Italian dish. Here are a couple of approaches I've attempted, with my internal thought process included, and unrelated bits excluded for clarity. Approach 1: "It's conditions all the way down" // "I'll just write the program as I think it, one frame at a time." if (assetsLoaded) { if (userClickedToStart) { if (critterTubDisplayed) { if (crittersDisplayed) { forEach(crittersList, function(c) { if (c.wasClickedRecently) { c.getAgitated(); } }); } else { displayCritters(); } } else { displayCritterTub(); } } else { displayStartScreen(); } } That's a very much simplified example. Yet even with only a fraction of all the rendering conditions visible, render is already starting to get out of hand. So, I dispense with that and try another idea: Approach 2: Under the Rug // "Each view object shall be responsible for its own rendering. // "I'll pass each object the program state, and each can render itself." startScreen.render(state); critterTub.render(state); critterList.render(state); In this setup, I've essentially just pushed those crazy nested conditions to a deeper level in the code, hiding them from view. In other words, startScreen.render would check state to see if it needed actually to be drawn or not, and take the correct action. But this seems more like it only solves a code-aesthetic problem. The third and final approach I'm considering that I'll share is the idea that I could invent my own "wheel" to take care of this. I'm envisioning a function that takes a data structure that defines what should happen at any given point in the render call -- revealing the conditions and dependencies as a kind of tree. 
Approach 3: Mad Scientist renderTree({ phases: ['startScreen', 'critterTub', 'endCredits'], dependencies: { startScreen: ['assetsLoaded'], critterTub: ['startScreenClicked'], critterList: ['critterTubDisplayed'] // etc. }, exclusions: { startScreen: ['startScreenClicked'], // etc. } }); That seems kind of cool. I'm not exactly sure how it would actually work, but I can see it being a rather nifty way to express things, especially if I flex some of JavaScript's events. In any case, I'm a little bit stumped because I don't see an obvious way to do this. If you couldn't tell, I'm coming to this from the web development world, and finding that doing animation is a bit more exotic than arranging an MVC application for handling simple requests and responses. What is the clean, established solution to this common-I-would-think problem?
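    Purely as a sketch of how the renderTree idea above could be interpreted (the views argument and flag names are assumptions for illustration, not an established API): each phase renders only when all of its dependencies are true in the current state and none of its exclusions are, so the nested conditions from Approach 1 become data instead of code.

        // Sketch: drive per-phase rendering from a declarative config.
        function renderTree(config, state, views) {
          config.phases.forEach(function (phase) {
            var deps = config.dependencies[phase] || [];
            var excl = config.exclusions[phase] || [];
            var ready = deps.every(function (flag) { return state[flag]; });
            var blocked = excl.some(function (flag) { return state[flag]; });
            if (ready && !blocked) {
              views[phase].render(state); // each view still draws itself, as in Approach 2
            }
          });
        }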

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective which is typically on the road and when they are still with the customer. Oracle has invested many years of research in understanding customer's Mobile requirements. “The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful,” said Arin Bhowmick, Director, CRM, for the Applications UX team. “We did that by talking to and analyzing the needs of a lot of people in different roles.” The team studied real-life sales teams. “We wanted to study salespeople in context with their work,” Bhowmick said. “We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go.” Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps which was featured on the Oracle Applications Blog.  Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. 
Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today’s Marketplace The notion of Facebook isn’t new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view the sooner that they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, typically you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila Vice President, Oracle Applications Development

    Read the article

  • Add Widget via Action in Toolbar

    - by Geertjan
    The question of the day comes from Vadim, who asks on the NetBeans Platform mailing list: "Looking for example showing how to add Widget to Scene, e.g. by toolbar button click." Well, the solution is very similar to this blog entry, where you see a solution provided by Jesse Glick for VisiTrend in Boston: https://blogs.oracle.com/geertjan/entry/zoom_capability Other relevant articles to read are as follows: http://netbeans.dzone.com/news/which-netbeans-platform-action http://netbeans.dzone.com/how-to-make-context-sensitive-actions Let's go through it step by step, with this result in the end, a solution involving 4 classes split (optionally, since a central feature of the NetBeans Platform is modularity) across multiple modules: The Customer object has a "name" String and the Droppable capability has a method "doDrop" which takes a Customer object: public interface Droppable {    void doDrop(Customer c);} In the TopComponent, we use "TopComponent.associateLookup" to publish an instance of "Droppable", which creates a new LabelWidget and adds it to the Scene in the TopComponent. Here's the TopComponent constructor: public CustomerCanvasTopComponent() {    initComponents();    setName(Bundle.CTL_CustomerCanvasTopComponent());    setToolTipText(Bundle.HINT_CustomerCanvasTopComponent());    final Scene scene = new Scene();    final LayerWidget layerWidget = new LayerWidget(scene);    Droppable d = new Droppable(){        @Override        public void doDrop(Customer c) {            LabelWidget customerWidget = new LabelWidget(scene, c.getTitle());            customerWidget.getActions().addAction(ActionFactory.createMoveAction());            layerWidget.addChild(customerWidget);            scene.validate();        }    };    scene.addChild(layerWidget);    jScrollPane1.setViewportView(scene.createView());    associateLookup(Lookups.singleton(d));} The Action is displayed in the toolbar and is enabled only if a Droppable is currently in the Lookup: @ActionID(        category = "Tools",        id = "org.customer.controler.AddCustomerAction")@ActionRegistration(        iconBase = "org/customer/controler/icon.png",        displayName = "#AddCustomerAction")@ActionReferences({    @ActionReference(path = "Toolbars/File", position = 300)})@NbBundle.Messages("AddCustomerAction=Add Customer")public final class AddCustomerAction implements ActionListener {    private final Droppable context;    public AddCustomerAction(Droppable droppable) {        this.context = droppable;    }    @Override    public void actionPerformed(ActionEvent ev) {        NotifyDescriptor.InputLine inputLine = new NotifyDescriptor.InputLine("Name:", "Data Entry");        Object result = DialogDisplayer.getDefault().notify(inputLine);        if (result == NotifyDescriptor.OK_OPTION) {            Customer customer = new Customer(inputLine.getInputText());            context.doDrop(customer);        }    }} Therefore, when the Properties window, for example, is selected, the Action will be disabled. (See the Zoomable example referred to in the link above for another example of this.) As you can see above, when the Action is invoked, a Droppable must be available (otherwise the Action would not have been enabled). The Droppable is obtained in the Action and a new Customer object is passed to its "doDrop" method. 
The above in pictures: take note of the enablement of the toolbar button with the red dot, on the extreme left of the toolbar in the screenshots below: The above shows the JButton is only enabled if the relevant TopComponent is active and, when the Action is invoked, the user can enter a name, after which a new LabelWidget is created in the Scene. The source code of the above is here: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/WidgetCreationFromAction Note: Showing this as an MVC example is slightly misleading because, depending on which model object ("Customer" or "Droppable") you're looking at, the V and the C are different. From the point of view of "Customer", the TopComponent is the View, while the Action is the Controller, since it determines when the M is displayed. However, from the point of view of "Droppable", the TopComponent is the Controller, since it determines when the Action, which is in this case the View, displays the presence of the M.
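    The Customer class itself is not shown in the post; given that it is constructed from the text the user enters and that the widget calls getTitle(), it is presumably a trivial model class along these lines (a sketch, not the sample's actual source):

        public class Customer {

            private final String name;

            public Customer(String name) {
                this.name = name;
            }

            // Used by the LabelWidget created in Droppable.doDrop()
            public String getTitle() {
                return name;
            }
        }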

    Read the article

  • JavaDay Taipei 2014 Trip Report

    - by reza_rahman
    JavaDay Taipei 2014 was held at the Taipei International Convention Center on August 1st. Organized by Oracle University, it is one of the largest Java developer events in Taiwan. This was another successful year for JavaDay Taipei with a fully sold out venue packed with youthful, energetic developers (this was my second time at the event and I have already been invited to speak again next year!). In addition to Oracle speakers like me, Steve Chin and Naveen Asrani, the event also featured a bevy of local speakers including Taipei Java community leaders. Topics included Java SE, Java EE, JavaFX, cloud and Big Data. It was my pleasure and privilege to present one of the opening keynotes for the event. I presented my session on Java EE titled "JavaEE.Next(): Java EE 7, 8, and Beyond". I covered the changes in Java EE 7 as well as what's coming in Java EE 8. I demoed the Cargo Tracker Java EE BluePrints. I also briefly talked about Adopt-a-JSR for Java EE 8. The slides for the keynote are below (click here to download and view the actual PDF): In the afternoon I did my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The talk was completely packed. The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end. I finished off Java Day Taipei with my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE" (this was the last session of the conference). The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts -- a birds-eye view of the NoSQL landscape, how to use NoSQL via a JPA centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: Using NoSQL with ~JPA, EclipseLink and Java EE from Reza Rahman The JPA based demo is available here, while the CDI based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running. After the event the Oracle University folks hosted a reception in the evening which was very well attended by organizers, speakers and local Java community leaders. I am extremely saddened by the fact that this otherwise excellent trip was scarred by terrible tragedy. After the conference I joined a few folks for a hike on the Maokong Mountain on Saturday. The group included friends in the Taiwanese Java community including Ian and Robbie Cheng. Without warning, fatal tragedy struck on a remote part of the trail. Despite best efforts by us, the excellent Taiwanese Emergency Rescue Team and World class Taiwanese physicians, we were unable to save our friend Robbie Cheng's life. Robbie was just thirty-four years old and is survived by his younger brother, mother and father.
Being the father of a young child myself, I can only imagine the deep sorrow that this senseless loss unleashes. Robbie was a key member of the Taiwanese Java community and a Java Evangelist at Sun at one point. Ironically the only picture I was able to take of the trail was mere moments before tragedy. I thought I should place him in that picture in profoundly respectful memoriam: Perhaps there is some solace in the fact that there is something inherently honorable in living a bright life, dying young and meeting one's end on a beautiful remote mountain trail few venture to behold let alone attempt to ascend in a long and tired lifetime. Perhaps I'd even say it's a fate I would not entirely regret facing if it were my own. With that thought in mind it seems appropriate to me to quote some lyrics from the song "Runes to My Memory" by legendary Swedish heavy metal band Amon Amarth idealizing a fallen Viking warrior cut down in his prime: "Here I lie on wet sand I will not make it home I clench my sword in my hand Say farewell to those I love When I am dead Lay me in a mound Place my weapons by my side For the journey to Hall up high When I am dead Lay me in a mound Raise a stone for all to see Runes carved to my memory" I submit my deepest condolences to Robbie's family and hope my next trip to Taiwan ends in a less somber note.

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the 4 announcements made by Larry Ellison today, during the opening session of Oracle Open World 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

    Oracle IaaS goes Public... and Private...
    Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS Applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide Infrastructure as a Service, leveraging our Servers, Storage, OS, and Virtualization Technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today brings that proposition a big step further: for cautious customers (like Banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their Data out in the Cloud, we propose to operate the same systems, Exadata, Exalogic & SuperCluster, that are providing our Public Cloud Infrastructure, behind their firewall, in a Private Cloud model.

    Oracle 12c Multitenant Database
    This is also a major announcement made today, on what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no extra cost, especially in terms of memory needed on the server node, which is often THE consolidation limiting factor. The principle could be compared to Solaris Zones, where you have a Database Container, which "owns" the memory and Database background processes, and "Pluggable" Databases in this Database Container. This particular feature is a strong compelling event to rapidly evaluate Oracle Database 12c once it is available, as this is a major step forward in true Database consolidation with Multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-Memory Database
    Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all the Data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as this is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing that we did with X3; at the same time we have upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge), the memory, with 512GB per node, and the new Flash Fire card, bringing now up to 22 TB of Flash cache. All of this, 4TB of RAM + 22TB of Flash, is used cleverly not only for reads but also for writes by the X3H2M2 algorithm... making a very big difference compared to a traditional storage flash extension. But what do those extra performances bring to you on an already very efficient system? Double your performance compared to the fastest storage array on the market today (including flash) and divide your storage price by 10 at the same time... Something to consider closely these days...
Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And as such, Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the Database with Exadata was to accelerate the Database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Groups conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data: one on finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to size Oracle Open World... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • Impressions and Reactions from Alliance 2012

    - by user739873
    Alliance 2012 has come to a conclusion.  What strikes me about every Alliance conference is the amazing amount of collaboration and cooperation I see across higher education in the sharing of best practices around the entire Oracle PeopleSoft software suite, not just the student information system (Oracle’s PeopleSoft Campus Solutions).  In addition to the vibrant U.S. organization, it's gratifying to see the growth in the international attendance again this year, with an EMEA HEUG organizing to complement the existing groups in the Netherlands, South Africa, and the U.K.  Their first meeting is planned for London in October, and I suspect they'll be surprised at the amount of interest and attendance. In my discussions with higher education IT and functional leadership at Alliance there were a number of instances where concern was expressed about Oracle's commitment to higher education as an industry, primarily because of a lack of perceived innovation in the applications that Oracle develops for this market. Here I think perception and reality are far apart, and I'd like to explain why I believe this to be true. First let me start with what I think drives this perception. Predominately it's in two areas. The first area is the user interface, both for students and faculty that interact with the system as "customers", and for those employees of the institution (faculty, staff, and sometimes students as well) that use the system in some kind of administrative role. Because the UI hasn't changed all that much from the PeopleSoft days, individuals perceive this as a dead product with little innovation and therefore Oracle isn't investing. The second area is around the integration of the higher education suite of applications (PeopleSoft Campus Solutions) and the rest of the Oracle software assets. Whether grown organically or acquired, there is an impressive array of middleware and other software products that could be leveraged much more significantly by the higher education applications than is currently the case today. This is also perceived as lack of investment. Let me address these two points.  First the UI.  More is being done here than ever before, and the PAG and other groups where this was discussed at Alliance 2012 were more numerous than I've seen in any past meeting. Whether it's Oracle development leveraging web services or some extremely early but very promising work leveraging the recent Endeca acquisition (see some cool examples here) there are a lot of resources aimed at this issue.  There are also some amazing prototypes being developed by our UX (user experience team) that will eventually make their way into the higher education applications realm - they had an impressive setup at Alliance.  Hopefully many of you that attended found this group. If not, the senior leader for that team Jeremy Ashley will be a significant contributor of content to our summer Industry Strategy Council meeting in Washington in June. In the area of integration with other elements of the Oracle stack, this is also an area of focus for the company and my team.  We're making this a priority especially in the areas of identity management and security, leveraging WebCenter more effectively for content, imaging, and mobility, and driving towards the ultimate objective of WebLogic Suite as our platform for SOA, links to learning management systems (SAIP), and content. There is also much work around business intelligence centering on OBI applications. 
But at the end of the day we get enormous value from the HEUG (Higher Education User Group) and the various subgroups formed as a part of this community that help us align and prioritize our investments, whether it's around better integration with other Oracle products or integration with partner offerings. It's one of the healthiest, most mutually beneficial relationships between customers and an education IT concern that exists on the globe. And I can't avoid mentioning that it is this kind of relationship between higher education and the corporate IT community that can truly address the problems of efficiency and effectiveness, institutional excellence (which starts with IT) and student success. It's not (in my opinion) going to be solved through community source - cost and complexity only increase in that model, and in the end higher education doesn't get to focus on its core competencies: educating, developing, and researching. While I agree with some of what Michael A. McRobbie wrote in his EDUCAUSE Review article (Information Technology: A View from Both Sides of the President’s Desk), I take strong issue with his assertion that "the IT marketplace is just the opposite of long-term stability...." Sure there has been healthy, creative destruction in the past 2-3 decades, but this has had the effect of, in the aggregate, benefiting education with greater efficiency, more innovation and increased stability as larger, more financially secure firms acquire and develop integrated solutions. Cole

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
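    To make the partition-size arithmetic behind that trade-off concrete, here is a small back-of-the-envelope sketch (mine, not from the post); the cache size and partition counts are hypothetical, and the point is simply that fewer partitions means more data per partition, and therefore a longer pause when the single service thread has to recover one.

    public class PartitionSizing {
        public static void main(String[] args) {
            // Hypothetical cluster figures, purely for illustration.
            double cacheSizeGb = 500.0;       // total primary data held by the cache service
            int smallPartitionCount = 257;    // a low partition count
            int largePartitionCount = 4093;   // a higher partition count

            System.out.printf("With %d partitions, each holds ~%.2f GB%n",
                    smallPartitionCount, cacheSizeGb / smallPartitionCount);
            System.out.printf("With %d partitions, each holds ~%.3f GB%n",
                    largePartitionCount, cacheSizeGb / largePartitionCount);
            // Smaller partitions mean less data to move when one partition is recovered,
            // so the service thread is blocked for a shorter time after a member failure.
        }
    }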

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general). A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies that underlie your particular layer. On the topic of confounding factors, as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on, otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries. Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady-state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem). At this point you'll need to reconcile what you're seeing with pstack and your mental model of what you think the program should be doing. They're often rather different. And generally if there's a key performance issue, you'll spot it with a moderate number of samples. 
I'll also use OS-level observability tools to look for bottlenecks where threads contend for locks; other situations where threads are blocked; and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, then it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block for the malloc lock, and your "usr" time would be less than expected. Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS and software-level performance issues as opposed to HW-level issues. For that reason I recommend mpstat & pstack as the first step in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
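    As an illustration of the kind of quick, targeted validation program mentioned above ("can that disk controller really sustain 500 reads per second?"), here is a minimal Java sketch. It is not from the original post; the file path, read size, and duration are hypothetical, and you would point it at a file large enough to defeat the OS page cache on the device under test.

    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.util.concurrent.ThreadLocalRandom;

    public class RandomReadCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical test file; use a file much larger than RAM and larger than readSize.
            String path = args.length > 0 ? args[0] : "/data/testfile.bin";
            int readSize = 8192;                    // 8 KB per read
            long durationNanos = 10_000_000_000L;   // measure for roughly 10 seconds

            try (RandomAccessFile raf = new RandomAccessFile(path, "r");
                 FileChannel ch = raf.getChannel()) {
                long fileSize = ch.size();
                ByteBuffer buf = ByteBuffer.allocateDirect(readSize);
                long reads = 0;
                long start = System.nanoTime();
                while (System.nanoTime() - start < durationNanos) {
                    long pos = ThreadLocalRandom.current().nextLong(fileSize - readSize);
                    buf.clear();
                    ch.read(buf, pos);              // positional read at a random offset
                    reads++;
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("%d reads in %.1f s = %.0f reads/sec%n", reads, seconds, reads / seconds);
            }
        }
    }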

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that. Planning for HA in IaaS IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VM's) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VM's in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geo's. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VM's, you may not be able to easily implement an HA solution. Planning for HA in PaaS Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. 
Storage is inexpensive, and that second copy can be used not only for HA but also for DR. Planning for HA in SaaS In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).   All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability  (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
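    Returning to the detect-and-reroute step described in the IaaS section above, here is a minimal sketch of a manual failover check; the endpoint URLs and health path are hypothetical, and a real deployment would more likely rely on the platform's load balancer or traffic manager than on hand-rolled code like this.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FailoverProbe {
        // Hypothetical endpoints for the primary and secondary geographic deployments.
        private static final String PRIMARY   = "https://app-us.example.com/health";
        private static final String SECONDARY = "https://app-eu.example.com/health";

        // Returns the endpoint clients should be routed to right now.
        public static String activeEndpoint() {
            return isHealthy(PRIMARY) ? PRIMARY : SECONDARY;
        }

        private static boolean isHealthy(String url) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setConnectTimeout(2000);   // fail fast if the geo is unreachable
                conn.setReadTimeout(2000);
                return conn.getResponseCode() == 200;
            } catch (Exception e) {
                return false;                   // any error counts as "not available"
            }
        }

        public static void main(String[] args) {
            System.out.println("Route traffic to: " + activeEndpoint());
        }
    }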

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. Which could be considered legitimate at the time, because after all, caches help to reduce latency on cached data access, and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake I made was my perception of, or obsession with, how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact, we all know that the performance of a system is generally presented as the Capacity (or Throughput), with the two important dimensions Speed (response-time) and Volume (load): Capacity (TPS) = Volume (T) / Speed (S). To increase the Capacity, we can either reduce the Speed (in terms of response-time), or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... The challenge with the Speed improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For all systems, the performance illustration is as follows: in a traditional system, increasing the Volume (Transactions) will also increase the Speed (Response-Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time compared to 100 entries. 
If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with faster CPUs and/or networks to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications which can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application running as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee: - constant data object access time, independently of the number of objects and the Coherence cluster size - data object distribution by affinity for in-memory data grouping - in-place data processing for parallel execution. To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping consistent Speed. In the future I will keep adding new blog entries around this topic, sharing some sample code and experiences that I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!
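    As a concrete illustration of the "in-place data processing" point above, here is a minimal sketch using the Coherence 3.x InvocableMap/EntryProcessor API (my example, not the author's); the cache name and the assumption that values are simple Double prices are hypothetical.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.filter.AlwaysFilter;
    import com.tangosol.util.processor.AbstractProcessor;

    // The processor is shipped to the members that own the entries, so the work runs
    // in parallel across the grid and each member touches only its own partitions.
    public class ApplyDiscount extends AbstractProcessor implements java.io.Serializable {
        private final double factor;

        public ApplyDiscount(double factor) { this.factor = factor; }

        public Object process(InvocableMap.Entry entry) {
            Double price = (Double) entry.getValue();   // hypothetical value type
            entry.setValue(price * factor);             // the update happens on the owning member
            return null;
        }

        public static void main(String[] args) {
            NamedCache prices = CacheFactory.getCache("prices");   // hypothetical cache name
            prices.invokeAll(new AlwaysFilter(), new ApplyDiscount(0.9));
        }
    }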

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company who built it. (out of business, support was too expensive, I'm not sure...) Development on this app started around 10+ years ago so the technologies being harnessed are pretty out of date now - classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping - so even though the development team is long gone, there are often new requirements of this software. Enter the... The domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain. My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem - other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think maybe a bit of resentment from my involvement. So, I think he feels like the owner of the code and has entrenched himself in a development position. So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what a cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages. I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" - and instantly thought of our domain expert. From the article: Their code is large, messy, and bug laden. They have very superficial knowledge of their problem domain and their tools. Their code has a lot of copy/paste and they have very little interest in techniques that reduce it. They fail to account for edge cases, while inefficiently dealing with the general case. They never have time to comment their code or break it into smaller pieces. Empirical evidence plays little role in their decisions. 5.5 out of 6. My friend is wanting me to argue the case to their management - specifically, I got this email from their manager to respond to: ...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help. In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight. 
On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out - but I feel like I'm getting in the middle of a political battle. More importantly - if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other Stack Exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!" Other options? Thanks a bunch!

    Read the article

  • Cloud Deployment Models

    - by B R Clouse
    As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective.  This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes. Most cloud deployments today are either private or public. It is also possible to connect a private cloud and a public cloud to form a hybrid cloud. A private cloud is for the exclusive use of an organization. Enterprises, universities and government agencies throughout the world are using private clouds. Some have designed, built and now manage their private clouds. Others use a private cloud that was built by and is now managed by a provider, hosted either onsite or at the provider’s datacenter. Because private clouds are for exclusive use, they are usually the option chosen by organizations with concerns about data security and guaranteed performance. Public clouds are open to anyone with an Internet connection. Because they require no capital investment from their users, they are particularly attractive to companies with limited resources in less regulated environments and for temporary workloads such as development and test environments. Public clouds offer a range of products, from end-user software packages to more basic services such as databases or operating environments. Public clouds may also offer cloud services such as a disaster recovery for a private cloud, or the ability to “cloudburst” a temporary workload spike from a private cloud to a public cloud. These are examples of a hybrid cloud. These are most feasible when the private and public clouds are built with similar technologies. Usually people think of a public cloud in terms of a user role, e.g., “Which public cloud should I consider using?” But someone needs to own and manage that public cloud. The company who owns and operates a public cloud is known as a public cloud provider. Oracle Database Cloud Service, Amazon RDS, database.com and Savvis Symphony Database are examples of public cloud database services. When evaluating deployment models, be aware that you can use any or all of the available options. Some workloads may be best-suited for a private cloud, some for a public or hybrid cloud. And you might deploy multiple private clouds in your organization. If you are going to combine multiple clouds, then you want to make sure that each cloud is based on a consistent technology portfolio and architecture. This simplifies management and gives you the greatest flexibility in moving resources and workloads among your different clouds. Oracle’s portfolio of cloud products and services enables both deployment models. Oracle can manage either model. 
Universities, government agencies and companies in all types of business everywhere in the world are using clouds built with the Oracle portfolio. By employing a consistent portfolio, these customers are able to run all of their workloads – from test and development to the most mission-critical -- in a consistent manner: One Enterprise Cloud, powered by Oracle.

    Read the article

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked on as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as the java.util.concurrent API, java.lang.Thread and java.util.Timer in a Java EE application component such as an EJB or Servlet is problematic since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR-166. This also allows consistency between the Java SE and Java EE concurrency programming models. There are four main programming interfaces available: ManagedExecutorService ManagedScheduledExecutorService ContextService ManagedThreadFactory ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference: <resource-env-ref>  <resource-env-ref-name>    concurrent/BatchExecutor  </resource-env-ref-name>  <resource-env-ref-type>    javax.enterprise.concurrent.ManagedExecutorService  </resource-env-ref-type></resource-env-ref> and available as: @Resource(name="concurrent/BatchExecutor") ManagedExecutorService executor; It's recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed must implement the java.lang.Runnable or java.util.concurrent.Callable interface, as in: public class MyTask implements Runnable { public void run() { // business logic goes here }} OR public class MyTask2 implements Callable<Date> {  public Date call() { // business logic goes here   }} The task is then submitted to the executor using one of the submit methods, which return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or wait for its completion. Future<String> future = executor.submit(new MyTask(), "done"); . . . String result = future.get(); Another example of submitting tasks is: class MyTask implements Callable<Long> { . . . } class MyTask2 implements Callable<Date> { . . . } ArrayList<Callable> tasks = new ArrayList<Callable>(); tasks.add(new MyTask()); tasks.add(new MyTask2()); List<Future<Object>> result = executor.invokeAll(tasks); The ManagedExecutorService may be configured with different properties such as: Hung Task Threshold: time in milliseconds that a task can execute before it is considered hung; Pool Info Core Size: number of threads to keep alive; Maximum Size: maximum number of threads allowed in the pool; Keep Alive: time to allow threads to remain idle when # of threads > Core Size; Work Queue Capacity: # of tasks that can be stored in the inbound buffer; Thread Use: whether the application intends to run short vs. long-running tasks, so that pooled or daemon threads are picked accordingly. ManagedScheduledExecutorService adds delay and periodic task-running capabilities to ManagedExecutorService. 
The implementations of this interface are provided by the container and accessible using a JNDI reference: <resource-env-ref>  <resource-env-ref-name>    concurrent/timedExecutor  </resource-env-ref-name>  <resource-env-ref-type>    javax.enterprise.concurrent.ManagedScheduledExecutorService  </resource-env-ref-type></resource-env-ref> and available as: @Resource(name="concurrent/timedExecutor") ManagedScheduledExecutorService executor; The tasks are then submitted using the submit, invokeXXX or scheduleXXX methods. ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS); This will create and execute a one-shot action that becomes enabled after a delay of 5 seconds. More control is possible using one of the newly added methods: class MyTaskListener implements ManagedTaskListener {  public void taskStarting(...) { . . . }  public void taskSubmitted(...) { . . . }  public void taskDone(...) { . . . }  public void taskAborted(...) { . . . } } ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener()); Here, ManagedTaskListener is used to monitor the state of a task's Future. ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage is: @Resource(name="concurrent/myThreadFactory") ManagedThreadFactory factory; . . . Thread thread = factory.newThread(new Runnable() { . . . }); concurrent/myThreadFactory is a JNDI resource. There is a lot of interesting content in the Early Draft; download it and read it yourself. The implementation will be made available soon and will also be integrated into GlassFish 4. Some references for further exploration ... Javadoc Early Draft Specification concurrency-ee-spec.java.net [email protected]
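    Tying the pieces above together, here is a minimal end-to-end sketch of submitting work to a container-managed executor from a servlet; it reuses the concurrent/BatchExecutor JNDI name from the article's example, but the servlet and its "report" logic are purely illustrative, and the API follows the Early Draft as described above.

    import java.io.IOException;
    import java.util.concurrent.Callable;
    import java.util.concurrent.Future;
    import javax.annotation.Resource;
    import javax.enterprise.concurrent.ManagedExecutorService;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative servlet that offloads a long-running task to the container-managed executor.
    @WebServlet("/report")
    public class ReportServlet extends HttpServlet {

        @Resource(name = "concurrent/BatchExecutor")
        ManagedExecutorService executor;

        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            Future<String> future = executor.submit(new Callable<String>() {
                public String call() throws Exception {
                    // business logic goes here; runs on a container-managed thread
                    return "report-ready";
                }
            });
            try {
                resp.getWriter().println("Result: " + future.get());   // blocks until the task completes
            } catch (Exception e) {
                throw new IOException(e);
            }
        }
    }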

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies but that won't be discussed here. There were a few challenges that arose during design and implementation: Naive algorithms had an unreasonable upper bound of computational cost. There was significant complexity associated with configurations where the member count varied significantly between physical machines. Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple). A custom PAS may need to solve several problems simultaneously, such as: Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions) Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded). Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue). Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions) Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure) Achieving the above goals while ensuring that partition movement is minimized. These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members each (respectively). This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances. 
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
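    To illustrate what a "scoring function for a given mapping of partitions to members" might look like, here is a toy sketch; it is not the Coherence PartitionAssignmentStrategy interface, just a standalone example that penalizes primary-count imbalance and backups that land on the same (hypothetical) machine as their primary.

    // Lower scores are better. Arrays are indexed by partition; memberMachine is indexed by member.
    public class AssignmentScore {

        public static double score(int[] memberMachine, int[] primaryOwner, int[] backupOwner) {
            int[] primaryCount = new int[memberMachine.length];
            double penalty = 0;

            for (int p = 0; p < primaryOwner.length; p++) {
                primaryCount[primaryOwner[p]]++;
                // Heavy penalty when a backup shares a machine with its primary.
                if (memberMachine[primaryOwner[p]] == memberMachine[backupOwner[p]]) {
                    penalty += 1000;
                }
            }
            int min = Integer.MAX_VALUE;
            int max = 0;
            for (int count : primaryCount) {
                min = Math.min(min, count);
                max = Math.max(max, count);
            }
            return penalty + (max - min);   // imbalance between most- and least-loaded member
        }

        public static void main(String[] args) {
            int[] machines = {0, 0, 1};         // three members spread over two machines
            int[] primaries = {0, 1, 2, 0};     // four partitions and their primary owners
            int[] backups   = {2, 2, 0, 1};     // backup owners for the same partitions
            System.out.println("score = " + score(machines, primaries, backups));
        }
    }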

    Read the article

  • With Its MySQL Database-as-a-Service CERN Empowers Scientists

    - by Bertrand Matthelié
    The European Organization for Nuclear Research (CERN) is one of the world’s largest and most respected centers for scientific research. Founded in 1954 and located near Geneva on the Franco-Swiss border, CERN was one of Europe’s first joint ventures. Today, it has 20 member states. The organization uses the world’s largest and most complex scientific instruments to study fundamental particles and the origin of the universe. Challenges Better support the scientists associated with a CERN research program who selected MySQL as their database. Empower users, enabling them to be as self-reliant as possible. Minimize complexity and costs for the CERN IT department to support the growing number of MySQL deployments. Solution Delivered a MySQL Database-as-a-Service offering to the CERN employees and the scientists associated with the organization. Allowed researchers selecting MySQL for their project to get access to a database instance hosted by the CERN IT department, either from the start or once their application has become critical. Implemented the service using Oracle’s server virtualization software, Oracle VM, for increased flexibility and reduced costs. Empowered users with a self-service approach, providing them with tools to manage MySQL themselves while handling backups and other basic database administration tasks for them. Enabled scientists to rely on MySQL with increased reliability, security and manageability while reducing complexity and minimizing costs. "The Cloud model has allowed us to deliver a self-service platform to our MySQL users, empowering them while minimizing costs for CERN." Tony Cass, Database Services Group Leader, IT department, CERN.

    Read the article

  • What's New in SGD 5.1?

    - by Fat Bloke
    Oracle announced the latest version of Secure Global Desktop (SGD) this week with 3 major themes: Support for Android devices; Support for Desktop Chrome clients;  Support for Oracle Unified Directory. I'll talk about the new features in a moment, but a bit of context first: Oracle SGD - what, how and why?  Oracle Secure Global Desktop is Oracle's secure remote access product which allows users on almost any device, to access almost any type application which  is hosted in the data center, from almost any location. And it does this by sitting on the edge of the datacenter, between the user and the applications: This is actually a really smart environment for an increasing number of use cases where: Users need mobility of location AND device (i.e. work from anywhere); IT needs to ensure security of applications and data (of course!) The application requires an end-user environment which can't be guaranteed and IT may not own the client platform (e.g. BYOD, working from home, partners or contractors). Oracle has a a specific interest in this of course. As the leading supplier of enterprise applications, many of Oracle's customers, and indeed Oracle itself, fit these criteria. So, as an IT guy rolling out an application to your employees, if one of your apps absolutely needs, say,  IE10 with Java 6 update 32, how can you be sure that the user population has this, especially when they're using their own devices? In the SGD model you, the IT guy, can set up, say, a Windows Server running the exact environment required, and then use SGD to publish this app, without needing to worry any further about the device the end user is using. What's new?  So back to SGD 5.1 and what is new there: Android devices Since we introduced our support for iPad tablets in SGD 5.0 we've had a big demand from customers to extend this to Android tablets too, and so we're pleased to announce that 5.1 supports Android 4.x tablets such as Nexus 7 and 10, and the Galaxy Tab. Here's how it works, with screenshots from my Nexus 7: Simply point your browser to the SGD server URL and login; The workspace is the list of apps that the admin has deemed ok for you to run. You click on an application to run it (here's Excel and Oracle E-Business Suite): There's an extended on-screen keyboard (extended because desktop apps need keys that don't appear on a tablet keyboard such as ctrl, WIndow key, etc) and touch gestures can be mapped to desktop events (such as tap and hold to right click) All in all a pretty nice implementation for Android tablet users. Desktop Chrome Browsers SGD has always been designed around using a browser to access your applications. But traditionally, this has involved using Java to deliver the SGD client component. With HTML5 and Javascript engines becoming so powerful, we thought we'd see how well a pure web client could perform with desktop apps. And the answer was, surprisingly well. So with this release we now offer this additional way of working, which can be enabled by a simple bit of configuration. Here's a Linux desktop running in a tab in Chrome. And if you resize the browser window, the Linux desktop is resized by SGD too. Very cool! Oracle Unified Directory As I mentioned above, a lot of Oracle users already benefit from SGD. And a lot of Oracle customers use Oracle Unified Directory as their Enterprise and Carrier grade user directory. 
So it makes a lot of sense that SGD now supports this LDAP directory, both for authentication and as a means to determine which users get which applications, e.g. publish the engineering app to the guys in the Development group, but give everyone E-Business Suite to let them do their expenses. Summary With new devices and faster 4G networking becoming more prevalent, the pressure for businesses to move to an increasingly mobile enterprise is stronger than ever. SGD is good for users, and even better for IT. By offering the user the ability to work from anywhere, and IT the control and security they need, everyone wins with SGD. To try this for yourself, download SGD 5.1 (look under Desktop Virtualization Products) from the Oracle Software Delivery Cloud or, if you're an existing customer, get it from My Oracle Support.  -FB 

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and HTTP networking code. All C++. Most of it originated from other developers who have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous through the use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread, and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge-case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempted fixes are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include something like the following: Common mistake #1 - Fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpRequestComplete is executing. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this: void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common Mistake #2 - Fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock - but without handling the case where the object just got updated by the other thread! Common Mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" its pointer but forgets to wait for the thread that is still running to release its instance. As such, components are shut down cleanly, then spurious or late callbacks are invoked on an object in a state not expecting any more calls. There are other edge cases, but the bottom line is this: multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer and working out a more appropriate fix. But I suspect they are often confused about how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this. 
For each significant platform feature, have a dedicated single thread where all events and network callbacks get marshalled onto. Similar to COM apartment threading in Windows with use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single threaded world. Before I go down that path, I am also very interested if there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process. Including any of the following: Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for. (That is, a UML equivalent for discussing threads across objects and classes) Educating your development team on the issues with multithreaded code. What would you do?
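    To make the proposed "apartment"-style approach concrete, here is a minimal C++11 sketch; it assumes nothing about the existing code base, and the SerialTaskQueue type, its Post method, and the statuscode/Bar stand-ins are illustrative names only. Each component owns (or shares) one queue, so its network callbacks and its shutdown run in FIFO order on a single thread, and the Shutdown-versus-OnHttpRequestComplete race from mistake #1 cannot occur by construction.

        #include <condition_variable>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>

        // One worker thread that runs posted tasks in FIFO order. Everything a
        // component does is funnelled through here, so the component's own state
        // never needs a lock.
        class SerialTaskQueue {
        public:
            SerialTaskQueue() : m_stop(false), m_thread([this] { Run(); }) {}

            ~SerialTaskQueue() {
                {
                    std::lock_guard<std::mutex> lock(m_mutex);
                    m_stop = true;
                }
                m_cv.notify_one();
                m_thread.join();               // drains any remaining tasks first
            }

            // Safe to call from any thread, e.g. a thread-pool completion callback.
            void Post(std::function<void()> task) {
                {
                    std::lock_guard<std::mutex> lock(m_mutex);
                    m_tasks.push(std::move(task));
                }
                m_cv.notify_one();
            }

        private:
            void Run() {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m_mutex);
                        m_cv.wait(lock, [this] { return m_stop || !m_tasks.empty(); });
                        if (m_tasks.empty())
                            return;            // only reachable once m_stop is set
                        task = std::move(m_tasks.front());
                        m_tasks.pop();
                    }
                    task();                    // run outside the lock
                }
            }

            std::mutex m_mutex;
            std::condition_variable m_cv;
            std::queue<std::function<void()>> m_tasks;
            bool m_stop;
            std::thread m_thread;
        };

        typedef int statuscode;                // stand-in for the real status type

        struct Bar {                           // stand-in for the real dependency
            void DoSomethingImportant(statuscode) {}
            void Cleanup() {}
        };

        // The Foo/Bar example from the question, with both the HTTP completion and
        // Shutdown marshalled onto Foo's queue, so they can no longer interleave.
        class Foo {
        public:
            Foo() : m_pBar(new Bar) {}

            // Arrives on an arbitrary network thread; it only posts, never touches state.
            void OnHttpRequestComplete(statuscode status) {
                m_queue.Post([this, status] {
                    if (m_pBar)                // Shutdown may already have run
                        m_pBar->DoSomethingImportant(status);
                });
            }

            void Shutdown() {
                m_queue.Post([this] {
                    if (m_pBar) {
                        m_pBar->Cleanup();
                        delete m_pBar;
                        m_pBar = nullptr;
                    }
                });
            }

        private:
            Bar* m_pBar;
            // Declared last so it is destroyed first: its destructor drains queued
            // callbacks while the rest of Foo is still alive.
            SerialTaskQueue m_queue;
        };

    Long blocking work can still be dispatched to a work-pool thread, as long as its completion handler does nothing except Post the result back onto the owning component's queue; this mirrors the COM single-threaded-apartment / message-loop model described above.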

    Read the article

  • Segfaulting Java process

    - by zenmonkey
    I have a Java process that is working on a large data set in memory. I've seen it crash with a SIGSEGV signal sometimes, so I was wondering what some potential causes and fixes could be.
    Causes: - JVM bug - Native library bug (e.g. pthreads etc.) - JNI bug in user code
    Fixes: - Upgrade to a newer JVM
    In my particular case, this is the output from the log file (pruned):
    A fatal error has been detected by the Java Runtime Environment: # SIGSEGV (0xb) at pc=0x00002aaaaacd1b94, pid=32116, tid=1086544208 # JRE version: 6.0_14-b08 Java VM: Java HotSpot(TM) 64-Bit Server VM (14.0-b16 mixed mode linux-amd64 ) Problematic frame: C [libpthread.so.0+0xab94] pthread_cond_timedwait+0x154 # If you would like to submit a bug report, please visit: http://java.sun.com/webapps/bugreport/crash.jsp # --------------- T H R E A D --------------- Current thread (0x00002aacaad41000): WatcherThread [stack: 0x0000000040b35000,0x0000000040c36000] [id=32141] siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x00002aabc40008c0 Registers: RAX=0x0000000000000000, RBX=0x0000000000000000, RCX=0x0000000000000000, RDX=0x0000000000000002 RSP=0x0000000040c34cc0, RBP=0x0000000040c34d80, RSI=0x0000000000000001, RDI=0x00002aabc40008c0 R8 =0x00002aacaad42528, R9 =0x0000000000000000, R10=0x0000000040c34cd8, R11=0x0000000000000202 R12=0x0000000000000001, R13=0x0000000040c34d40, R14=0xffffffffffffff92, R15=0x00002aacaad42550 RIP=0x00002aaaaacd1b94, EFL=0x0000000000010246, CSGSFS=0x000000000000e033, ERR=0x0000000000000006 TRAPNO=0x000000000000000e Top of Stack: (sp=0x0000000040c34cc0) 0x0000000040c34cc0: 0000000000000000 00002aabc40008c0 0x0000000040c34cd0: 00002aacaad42528 0000000000000000 0x0000000040c34ce0: 0000000002fae0e0 0000000000000000 0x0000000040c34cf0: 00002aaaaacd1750 0000000040c34cc0 0x0000000040c34d00: 00002aacaad42528 0000000000000000 0x0000000040c34d10: 00002aacaad42528 00002aacaad42500 0x0000000040c34d20: 0000000000000032 00002aaaabadf876 0x0000000040c34d30: fffffffdaad40e80 0000000040c34d40 0x0000000040c34d40: 000000004bbb7166 0000000015f07098 0x0000000040c34d50: 0000000040c34d80 00138cd32df59cce 0x0000000040c34d60: 431bde82d7b634db 00002aacaad429c0 0x0000000040c34d70: 0000000000000032 00002aacaad429c0 0x0000000040c34d80: 0000000040c34e00 00002aaaabadda6d 0x0000000040c34d90: 0000000040c34da0 00002aacaad42500 0x0000000040c34da0: 00002aacaad429c0 00002aaa00000002 0x0000000040c34db0: 0000000000000001 0000000000000002 0x0000000040c34dc0: 0000000040c34dd0 00002aaaabb6f613 0x0000000040c34dd0: 0000000040c34e00 00002aacaad41000 0x0000000040c34de0: 0000000000000032 00002aacaad429c0 0x0000000040c34df0: 00002aacaad41000 0000000000001000 0x0000000040c34e00: 0000000040c34e60 00002aaaabbc39fb 0x0000000040c34e10: 0000000040c34e40 00002aaaabab868f 0x0000000040c34e20: 00002aacaad41000 00002aacaad42aa0 0x0000000040c34e30: 00002aacaad42aa0 00002aaaabe10630 0x0000000040c34e40: 00002aaaabe10630 00002aacaad42aa0 0x0000000040c34e50: 00002aacaad429c0 00002aacaad41000 0x0000000040c34e60: 0000000040c35130 00002aaaabadff9f 0x0000000040c34e70: 0000000000000000 0000000000000000 0x0000000040c34e80: 0000000000000000 0000000000000000 0x0000000040c34e90: 0000000000000000 0000000000000000 0x0000000040c34ea0: 0000000000000000 0000000000000000 0x0000000040c34eb0: 0000000000000000 0000000000000000 Instructions: (pc=0x00002aaaaacd1b94) 0x00002aaaaacd1b84: 88 22 00 00 48 8b 7c 24 08 be 01 00 00 00 31 c0 0x00002aaaaacd1b94: f0 0f b1 37 0f 85 e8 00 00 00 8b 57 2c 48 8b 47 Stack: [0x0000000040b35000,0x0000000040c36000], sp=0x0000000040c34cc0, 
free space=1023k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [libpthread.so.0+0xab94] pthread_cond_timedwait+0x154 V [libjvm.so+0x594a6d] V [libjvm.so+0x67a9fb] V [libjvm.so+0x596f9f] --------------- P R O C E S S --------------- Java Threads: ( = current thread ) 0x00002aacaad3f000 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=32140, stack(0x0000000040a34000,0x0000000040b35000)] 0x00002aacaad3c000 JavaThread "CompilerThread1" daemon [_thread_blocked, id=32139, stack(0x0000000040933000,0x0000000040a34000)] 0x00002aacaad37800 JavaThread "CompilerThread0" daemon [_thread_blocked, id=32138, stack(0x0000000040832000,0x0000000040933000)] 0x00002aacaad36800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=32137, stack(0x0000000040731000,0x0000000040832000)] 0x00002aacaab7d800 JavaThread "Finalizer" daemon [_thread_blocked, id=32136, stack(0x0000000040630000,0x0000000040731000)] 0x00002aacaab7b800 JavaThread "Reference Handler" daemon [_thread_blocked, id=32135, stack(0x000000004052f000,0x0000000040630000)] 0x0000000040115800 JavaThread "main" [_thread_blocked, id=32117, stack(0x000000004012b000,0x000000004022c000)] Other Threads: 0x00002aacaab75000 VMThread [stack: 0x000000004042e000,0x000000004052f000] [id=32134] =0x00002aacaad41000 WatcherThread [stack: 0x0000000040b35000,0x0000000040c36000] [id=32141] VM state:at safepoint (normal execution) VM Mutex/Monitor currently owned by a thread: ([mutex/lock_event]) [0x0000000040112e80] Threads_lock - owner thread: 0x00002aacaab75000 [0x0000000040113380] Heap_lock - owner thread: 0x0000000040115800 Heap PSYoungGen total 1854528K, used 1029248K [0x00002aac025a0000, 0x00002aaca8340000, 0x00002aaca9040000) eden space 1029248K, 100% used [0x00002aac025a0000,0x00002aac412c0000,0x00002aac412c0000) from space 825280K, 0% used [0x00002aac412c0000,0x00002aac412c0000,0x00002aac738b0000) to space 812800K, 0% used [0x00002aac76980000,0x00002aac76980000,0x00002aaca8340000) PSOldGen total 4423680K, used 4423651K [0x00002aaab5040000, 0x00002aabc3040000, 0x00002aac025a0000) object space 4423680K, 99% used [0x00002aaab5040000,0x00002aabc3038fe8,0x00002aabc3040000) PSPermGen total 21248K, used 5848K [0x00002aaaafc40000, 0x00002aaab1100000, 0x00002aaab5040000) object space 21248K, 27% used [0x00002aaaafc40000,0x00002aaab01f61f0,0x00002aaab1100000) Dynamic libraries: 40000000-40009000 r-xp 00000000 08:01 313415 /usr/java/jdk1.6.0_14/bin/java 40108000-4010a000 rwxp 00008000 08:01 313415 /usr/java/jdk1.6.0_14/bin/java 4010a000-4012b000 rwxp 4010a000 00:00 0 [heap] 4012b000-4012e000 ---p 4012b000 00:00 0 4012e000-4022c000 rwxp 4012e000 00:00 0 4022c000-4022d000 ---p 4022c000 00:00 0 4022d000-4032d000 rwxp 4022d000 00:00 0 4032d000-4032e000 ---p 4032d000 00:00 0 4032e000-4042e000 rwxp 4032e000 00:00 0 4042e000-4042f000 ---p 4042e000 00:00 0 4042f000-4052f000 rwxp 4042f000 00:00 0 4052f000-40532000 ---p 4052f000 00:00 0 40532000-40630000 rwxp 40532000 00:00 0 40630000-40633000 ---p 40630000 00:00 0 40633000-40731000 rwxp 40633000 00:00 0 40731000-40734000 ---p 40731000 00:00 0 40734000-40832000 rwxp 40734000 00:00 0 40832000-40835000 ---p 40832000 00:00 0 40835000-40933000 rwxp 40835000 00:00 0 40933000-40936000 ---p 40933000 00:00 0 40936000-40a34000 rwxp 40936000 00:00 0 40a34000-40a37000 ---p 40a34000 00:00 0 40a37000-40b35000 rwxp 40a37000 00:00 0 40b35000-40b36000 ---p 40b35000 00:00 0 40b36000-40c36000 rwxp 40b36000 00:00 0 2aaaaaaab000-2aaaaaac6000 r-xp 00000000 08:01 49198 /lib64/ld-2.7.so 
2aaaaaac6000-2aaaaaac7000 rwxp 2aaaaaac6000 00:00 0 2aaaaaac7000-2aaaaaad0000 r-xs 0006d000 08:10 29851669 /mnt/home/jatten/workspace/common/build/lib/common.jar 2aaaaaad2000-2aaaaaad3000 rwxp 2aaaaaad2000 00:00 0 2aaaaaad3000-2aaaaaae0000 r-xp 00000000 08:01 315357 /usr/java/jdk1.6.0_14/jre/lib/amd64/libverify.so 2aaaaaae0000-2aaaaabdf000 ---p 0000d000 08:01 315357 /usr/java/jdk1.6.0_14/jre/lib/amd64/libverify.so 2aaaaabdf000-2aaaaabe2000 rwxp 0000c000 08:01 315357 /usr/java/jdk1.6.0_14/jre/lib/amd64/libverify.so 2aaaaabe2000-2aaaaac0a000 rwxp 2aaaaabe2000 00:00 0 2aaaaac0a000-2aaaaac0f000 r-xs 0003a000 08:10 30326840 /mnt/home/jatten/workspace/common_ml20010405/build/lib/common_ml.jar 2aaaaac0f000-2aaaaac12000 r-xs 00020000 08:10 29786222 /mnt/home/jatten/pagescorer.jar 2aaaaacc5000-2aaaaacc6000 r-xp 0001a000 08:01 49198 /lib64/ld-2.7.so 2aaaaacc6000-2aaaaacc7000 rwxp 0001b000 08:01 49198 /lib64/ld-2.7.so 2aaaaacc7000-2aaaaacdd000 r-xp 00000000 08:01 49280 /lib64/libpthread-2.7.so 2aaaaacdd000-2aaaaaedc000 ---p 00016000 08:01 49280 /lib64/libpthread-2.7.so 2aaaaaedc000-2aaaaaedd000 r-xp 00015000 08:01 49280 /lib64/libpthread-2.7.so 2aaaaaedd000-2aaaaaede000 rwxp 00016000 08:01 49280 /lib64/libpthread-2.7.so 2aaaaaede000-2aaaaaee2000 rwxp 2aaaaaede000 00:00 0 2aaaaaee2000-2aaaaaee9000 r-xp 00000000 08:01 315360 /usr/java/jdk1.6.0_14/jre/lib/amd64/jli/libjli.so 2aaaaaee9000-2aaaaafea000 ---p 00007000 08:01 315360 /usr/java/jdk1.6.0_14/jre/lib/amd64/jli/libjli.so 2aaaaafea000-2aaaaafec000 rwxp 00008000 08:01 315360 /usr/java/jdk1.6.0_14/jre/lib/amd64/jli/libjli.so 2aaaaafec000-2aaaaafee000 r-xp 00000000 08:01 49240 /lib64/libdl-2.7.so 2aaaaafee000-2aaaab1ee000 ---p 00002000 08:01 49240 /lib64/libdl-2.7.so 2aaaab1ee000-2aaaab1ef000 r-xp 00002000 08:01 49240 /lib64/libdl-2.7.so 2aaaab1ef000-2aaaab1f0000 rwxp 00003000 08:01 49240 /lib64/libdl-2.7.so 2aaaab1f0000-2aaaab1f1000 rwxp 2aaaab1f0000 00:00 0 2aaaab1f1000-2aaaab33e000 r-xp 00000000 08:01 49219 /lib64/libc-2.7.so 2aaaab33e000-2aaaab53e000 ---p 0014d000 08:01 49219 /lib64/libc-2.7.so 2aaaab53e000-2aaaab542000 r-xp 0014d000 08:01 49219 /lib64/libc-2.7.so 2aaaab542000-2aaaab543000 rwxp 00151000 08:01 49219 /lib64/libc-2.7.so 2aaaab543000-2aaaab549000 rwxp 2aaaab543000 00:00 0 2aaaab549000-2aaaabca7000 r-xp 00000000 08:01 315371 /usr/java/jdk1.6.0_14/jre/lib/amd64/server/libjvm.so 2aaaabca7000-2aaaabda6000 ---p 0075e000 08:01 315371 /usr/java/jdk1.6.0_14/jre/lib/amd64/server/libjvm.so 2aaaabda6000-2aaaabf1e000 rwxp 0075d000 08:01 315371 /usr/java/jdk1.6.0_14/jre/lib/amd64/server/libjvm.so 2aaaabf1e000-2aaaabf5c000 rwxp 2aaaabf1e000 00:00 0 2aaaabf67000-2aaaabfe9000 r-xp 00000000 08:01 49263 /lib64/libm-2.7.so 2aaaabfe9000-2aaaac1e8000 ---p 00082000 08:01 49263 /lib64/libm-2.7.so 2aaaac1e8000-2aaaac1e9000 r-xp 00081000 08:01 49263 /lib64/libm-2.7.so 2aaaac1e9000-2aaaac1ea000 rwxp 00082000 08:01 49263 /lib64/libm-2.7.so 2aaaac1ea000-2aaaac1f2000 r-xp 00000000 08:01 49283 /lib64/librt-2.7.so 2aaaac1f2000-2aaaac3f1000 ---p 00008000 08:01 49283 /lib64/librt-2.7.so 2aaaac3f1000-2aaaac3f2000 r-xp 00007000 08:01 49283 /lib64/librt-2.7.so 2aaaac3f2000-2aaaac3f3000 rwxp 00008000 08:01 49283 /lib64/librt-2.7.so 2aaaac3f3000-2aaaac41c000 r-xp 00000000 08:01 315336 /usr/java/jdk1.6.0_14/jre/lib/amd64/libjava.so 2aaaac41c000-2aaaac51b000 ---p 00029000 08:01 315336 /usr/java/jdk1.6.0_14/jre/lib/amd64/libjava.so 2aaaac51b000-2aaaac522000 rwxp 00028000 08:01 315336 /usr/java/jdk1.6.0_14/jre/lib/amd64/libjava.so 2aaaac522000-2aaaac523000 ---p 2aaaac522000 
00:00 0 2aaaac523000-2aaaac524000 rwxp 2aaaac523000 00:00 0 2aaaac52d000-2aaaac542000 r-xp 00000000 08:01 49265 /lib64/libnsl-2.7.so 2aaaac542000-2aaaac741000 ---p 00015000 08:01 49265 /lib64/libnsl-2.7.so 2aaaac741000-2aaaac742000 r-xp 00014000 08:01 49265 /lib64/libnsl-2.7.so 2aaaac742000-2aaaac743000 rwxp 00015000 08:01 49265 /lib64/libnsl-2.7.so 2aaaac743000-2aaaac745000 rwxp 2aaaac743000 00:00 0 2aaaac745000-2aaaac74c000 r-xp 00000000 08:01 315362 /usr/java/jdk1.6.0_14/jre/lib/amd64/native_threads/libhpi.so 2aaaac74c000-2aaaac84d000 ---p 00007000 08:01 315362 /usr/java/jdk1.6.0_14/jre/lib/amd64/native_threads/libhpi.so 2aaaac84d000-2aaaac84f000 rwxp 00008000 08:01 315362 /usr/java/jdk1.6.0_14/jre/lib/amd64/native_threads/libhpi.so 2aaaac84f000-2aaaac850000 rwxp 2aaaac84f000 00:00 0 2aaaac850000-2aaaac858000 rwxs 00000000 08:01 229379 /tmp/hsperfdata_jatten/32116 2aaaac85b000-2aaaac865000 r-xp 00000000 08:01 49269 /lib64/libnss_files-2.7.so 2aaaac865000-2aaaaca64000 ---p 0000a000 08:01 49269 /lib64/libnss_files-2.7.so 2aaaaca64000-2aaaaca65000 r-xp 00009000 08:01 49269 /lib64/libnss_files-2.7.so 2aaaaca65000-2aaaaca66000 rwxp 0000a000 08:01 49269 /lib64/libnss_files-2.7.so 2aaaaca66000-2aaaaca74000 r-xp 00000000 08:01 315358 /usr/java/jdk1.6.0_14/jre/lib/amd64/libzip.so 2aaaaca74000-2aaaacb76000 ---p 0000e000 08:01 315358 /usr/java/jdk1.6.0_14/jre/lib/amd64/libzip.so 2aaaacb76000-2aaaacb79000 rwxp 00010000 08:01 315358 /usr/java/jdk1.6.0_14/jre/lib/amd64/libzip.so 2aaaacb79000-2aaaacdea000 rwxp 2aaaacb79000 00:00 0 2aaaacdea000-2aaaafb7a000 rwxp 2aaaacdea000 00:00 0 2aaaafb7a000-2aaaafb84000 rwxp 2aaaafb7a000 00:00 0 2aaaafb84000-2aaaafc3a000 rwxp 2aaaafb84000 00:00 0 2aaaafc40000-2aaab1100000 rwxp 2aaaafc40000 00:00 0 2aaab1100000-2aaab5040000 rwxp 2aaab1100000 00:00 0 2aaab5040000-2aabc3040000 rwxp 2aaab5040000 00:00 0 2aac025a0000-2aaca8340000 rwxp 2aac025a0000 00:00 0 2aaca8340000-2aaca9040000 rwxp 2aaca8340000 00:00 0 2aaca9040000-2aaca904b000 rwxp 2aaca9040000 00:00 0 2aaca904b000-2aaca906a000 rwxp 2aaca904b000 00:00 0 2aaca906a000-2aaca98da000 rwxp 2aaca906a000 00:00 0 2aaca98da000-2aaca9ad4000 rwxp 2aaca98da000 00:00 0 2aaca9ad4000-2aacaa004000 rwxp 2aaca9ad4000 00:00 0 2aacaa004000-2aacaa00a000 rwxp 2aacaa004000 00:00 0 2aacaa00a000-2aacaa87b000 rwxp 2aacaa00a000 00:00 0 2aacaa87b000-2aacaaa76000 rwxp 2aacaa87b000 00:00 0 2aacaaa76000-2aacaaa81000 rwxp 2aacaaa76000 00:00 0 2aacaaa81000-2aacaaaa0000 rwxp 2aacaaa81000 00:00 0 2aacaaaa0000-2aacaaba0000 rwxp 2aacaaaa0000 00:00 0 2aacaaba0000-2aacaad36000 r-xs 02fb1000 08:01 315318 /usr/java/jdk1.6.0_14/jre/lib/rt.jar 2aacaad36000-2aacaaf36000 rwxp 2aacaad36000 00:00 0 2aacaaf36000-2aacaaf49000 r-xp 00000000 08:01 315349 /usr/java/jdk1.6.0_14/jre/lib/amd64/libnet.so 2aacaaf49000-2aacab04a000 ---p 00013000 08:01 315349 /usr/java/jdk1.6.0_14/jre/lib/amd64/libnet.so 2aacab04a000-2aacab04d000 rwxp 00014000 08:01 315349 /usr/java/jdk1.6.0_14/jre/lib/amd64/libnet.so 2aacab058000-2aacab05c000 r-xp 00000000 08:01 49268 /lib64/libnss_dns-2.7.so 2aacab05c000-2aacab25b000 ---p 00004000 08:01 49268 /lib64/libnss_dns-2.7.so 2aacab25b000-2aacab25c000 r-xp 00003000 08:01 49268 /lib64/libnss_dns-2.7.so 2aacab25c000-2aacab25d000 rwxp 00004000 08:01 49268 /lib64/libnss_dns-2.7.so 2aacab25d000-2aacab26e000 r-xp 00000000 08:01 49282 /lib64/libresolv-2.7.so 2aacab26e000-2aacab46e000 ---p 00011000 08:01 49282 /lib64/libresolv-2.7.so 2aacab46e000-2aacab46f000 r-xp 00011000 08:01 49282 /lib64/libresolv-2.7.so 2aacab46f000-2aacab470000 rwxp 00012000 08:01 
49282 /lib64/libresolv-2.7.so 2aacab470000-2aacab572000 rwxp 2aacab470000 00:00 0 2aacab572000-2aacab57e000 r-xs 00081000 08:10 29851828 /mnt/home/jatten/workspace/common/lib/google-collect-1.0.jar 2aacab57e000-2aacab585000 r-xs 000aa000 08:10 29851946 /mnt/home/jatten/workspace/common/lib/mysql-connector-java-5.1.8-bin.jar 2aacab585000-2aacab58d000 r-xs 00028000 08:10 29851949 /mnt/home/jatten/workspace/common/lib/xml-apis.jar 2aacab58d000-2aacab591000 r-xs 0002f000 08:10 29851947 /mnt/home/jatten/workspace/common/lib/commons-beanutils-core-1.8.2.jar 2aacab591000-2aacab59e000 r-xs 0007f000 08:10 29851943 /mnt/home/jatten/workspace/common/lib/commons-collections-3.2.jar 2aacab59e000-2aacab5a3000 r-xs 00026000 08:10 29851942 /mnt/home/jatten/workspace/common/lib/httpcore-4.0.jar 2aacab5a3000-2aacab5a9000 r-xs 00030000 08:10 29851932 /mnt/home/jatten/workspace/common/lib/junit-dep-4.8.1.jar 2aacab5a9000-2aacab5ac000 r-xs 00011000 08:10 29851922 /mnt/home/jatten/workspace/common/lib/servlet.jar 2aacab5ac000-2aacab5ae000 r-xs 00009000 08:10 29851937 /mnt/home/jatten/workspace/common/lib/gsb.jar 2aacab5ae000-2aacab5b5000 r-xs 00059000 08:10 29851930 /mnt/home/jatten/workspace/common/lib/log4j-1.2.15.jar 2aacab5b5000-2aacab6b5000 rwxp 2aacab5b5000 00:00 0 2aacab6b5000-2aacab6b7000 r-xs 00009000 08:10 29851956 /mnt/home/jatten/workspace/common/lib/gsb-src.jar 2aacab6b7000-2aacab7b7000 rwxp 2aacab6b7000 00:00 0 2aacab7b7000-2aacab7cf000 r-xs 00115000 08:10 29851938 /mnt/home/jatten/workspace/common/lib/xercesImpl.jar 2aacab7cf000-2aacab7d1000 r-xs 00009000 08:10 29851957 /mnt/home/jatten/workspace/common/lib/velocity-tools-view-1.0.jar 2aacab7d1000-2aacab7d3000 r-xs 00009000 08:10 29851939 /mnt/home/jatten/workspace/common/lib/commons-cli-1.2.jar 2aacab7d3000-2aacab7d9000 r-xs 00034000 08:10 29851955 /mnt/home/jatten/workspace/common/lib/junit-4.8.1.jar 2aacab7d9000-2aacab7db000 r-xs 0000e000 08:10 29851917 /mnt/home/jatten/workspace/common/lib/jakarta-oro-2.0.8.jar 2aacab7db000-2aacab858000 r-xs 0031d000 08:10 29851916 /mnt/home/jatten/workspace/common/lib/poi-ooxml-schemas-3.6-20091214.jar 2aacab858000-2aacab85c000 r-xs 00028000 08:10 29851936 /mnt/home/jatten/workspace/common/lib/httpcore-nio-4.0.jar 2aacab85c000-2aacab85e000 r-xs 00005000 08:10 29851940 /mnt/home/jatten/workspace/common/lib/commons-beanutils-bean-collections-1.8.2.jar 2aacab85e000-2aacab864000 r-xs 00059000 08:10 29851919 /mnt/home/jatten/workspace/common/lib/mail-1.4.jar 2aacab864000-2aacab866000 r-xs 0000d000 08:10 29851950 /mnt/home/jatten/workspace/common/lib/commons-logging-1.1.1.jar 2aacab866000-2aacab86c000 r-xs 00045000 08:10 29851924 /mnt/home/jatten/workspace/common/lib/commons-httpclient-3.1.jar 2aacab86c000-2aacab877000 r-xs 00074000 08:10 29851931 /mnt/home/jatten/workspace/common/lib/velocity-dep-1.4.jar 2aacab877000-2aacab87f000 r-xs 00051000 08:10 29851954 /mnt/home/jatten/workspace/common/lib/velocity-1.4.jar 2aacab87f000-2aacab884000 r-xs 00034000 08:10 29851958 /mnt/home/jatten/workspace/common/lib/commons-beanutils-1.8.2.jar 2aacab884000-2aacab889000 r-xs 00048000 08:10 29851918 /mnt/home/jatten/workspace/common/lib/dom4j-1.6.1.jar 2aacab889000-2aacab8c6000 r-xs 0024f000 08:10 29851914 /mnt/home/jatten/workspace/common/lib/xmlbeans-2.3.0.jar 2aacab8c6000-2aacab8cb000 r-xs 00033000 08:10 29851929 /mnt/home/jatten/workspace/common/lib/xmemcached-1.2.3.jar 2aacab8cb000-2aacab8cd000 r-xs 00005000 08:10 29851928 /mnt/home/jatten/workspace/common/lib/org.hamcrest.core_1.1.0.v20090501071000.jar 
2aacab8cd000-2aacab8d0000 r-xs 0000a000 08:10 29851944 /mnt/home/jatten/workspace/common/lib/persistence-api-1.0.jar 2aacab8d0000-2aacab8d6000 r-xs 0005f000 08:10 29851926 /mnt/home/jatten/workspace/common/lib/poi-ooxml-3.6-20091214.jar 2aacab8d6000-2aacab8d7000 r-xs 0002b000 08:10 29851951 /mnt/home/jatten/workspace/common/lib/maxmind.jar 2aacab8d7000-2aacab8d8000 r-xs 00002000 08:10 29851935 /mnt/home/jatten/workspace/common/lib/jackson-jaxrs-1.2.0.jar 2aacab8d8000-2aacab8d9000 r-xs 00002000 08:10 29851913 /mnt/home/jatten/workspace/common/lib/slf4j-log4j12-1.5.6.jar 2aacab8d9000-2aacab8dd000 r-xs 00025000 08:10 29851945 /mnt/home/jatten/workspace/common/lib/yanf4j-1.1.1.jar 2aacab8dd000-2aacab8df000 r-xs 00003000 08:10 29851952 /mnt/home/jatten/workspace/common/lib/clickstream-1.0.2.jar 2aacab8df000-2aacab8e1000 r-xs 00004000 08:10 29851953 /mnt/home/jatten/workspace/common/lib/slf4j-api-1.5.6.jar 2aacab8e1000-2aacab8e9000 r-xs 0004d000 08:10 29851920 /mnt/home/jatten/workspace/common/lib/jackson-mapper-asl-1.2.0.jar 2aacab8e9000-2aacab8ed000 r-xs 0001f000 08:10 29851925 /mnt/home/jatten/workspace/common/lib/jackson-core-asl-1.2.0.jar 2aacab8ed000-2aacab8f1000 r-xs 0001b000 08:10 29851912 /mnt/home/jatten/workspace/common/lib/oscache-2.3.jar 2aacab8f1000-2aacab90c000 r-xs 0015d000 08:10 29851927 /mnt/home/jatten/workspace/common/lib/poi-3.6-20091214.jar 2aacab90c000-2aacab911000 r-xs 00040000 08:10 29851831 /mnt/home/jatten/workspace/common/lib/commons-lang-2.5.jar 2aacab911000-2aacab914000 r-xs 00012000 08:10 29851923 /mnt/home/jatten/workspace/common/lib/jgooglesafebrowser-0.1a.2.jar 2aacab914000-2aacab918000 r-xs 00023000 08:10 29851933 /mnt/home/jatten/workspace/common/lib/gson-1.3.jar 2aacab918000-2aacabb18000 rwxp 2aacab918000 00:00 0 2aacabb82000-2aacabd82000 rwxp 2aacabb82000 00:00 0 2aacabe05000-2aacaf204000 rwxp 2aacabe05000 00:00 0 7fffaa12a000-7fffaa141000 rwxp 7fffaa12a000 00:00 0 [stack] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vdso] VM Arguments: jvm_args: -Xmx8000M java_command: com.scorers.ModelImplementingPageScorer -t data/data/golds/adult.all.json -b 18 -s data/models/pagetext.binary. 
adult.april6.all.model -m com.models.MultiClassUpdateableModel -p 30 --goldsilver -v --cat adult --fakeinput -e /mnt/tmp/xyz.15647.pageo bjects.txt -o Launcher Type: SUN_STANDARD Environment Variables: JAVA_HOME=/usr/java/jdk1.6.0_14 PATH=/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/jatten/bin LD_LIBRARY_PATH=/usr/java/jdk1.6.0_14/jre/lib/amd64/server:/usr/java/jdk1.6.0_14/jre/lib/amd64:/usr/java/jdk1.6.0_14/jre/../lib/amd64 SHELL=/bin/bash Signal Handlers: SIGSEGV: [libjvm.so+0x6bd980], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGBUS: [libjvm.so+0x6bd980], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGFPE: [libjvm.so+0x594cc0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGPIPE: [libjvm.so+0x594cc0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGXFSZ: [libjvm.so+0x594cc0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGILL: [libjvm.so+0x594cc0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000 SIGUSR2: [libjvm.so+0x597480], sa_mask[0]=0x00000000, sa_flags=0x10000004 SIGHUP: [libjvm.so+0x5971d0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGINT: [libjvm.so+0x5971d0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGTERM: [libjvm.so+0x5971d0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGQUIT: [libjvm.so+0x5971d0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 --------------- S Y S T E M --------------- OS:Fedora release 8 (Werewolf) uname:Linux 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:34:28 EST 2008 x86_64 libc:glibc 2.7 NPTL 2.7 rlimit: STACK 10240k, CORE 0k, NPROC 61504, NOFILE 1024, AS infinity load average:2.83 2.73 2.78 CPU:total 2 (4 cores per cpu, 1 threads per core) family 6 model 23 stepping 10, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1 Memory: 4k page, physical 7872040k(14540k free), swap 0k(0k free) vm_info: Java HotSpot(TM) 64-Bit Server VM (14.0-b16) for linux-amd64 JRE (1.6.0_14-b08), built on May 21 2009 01:11:11 by "java_re" with gcc 3.2.2 (SuSE Lin ux) [error occurred during error reporting (printing date and time), id 0xb]

    Read the article

< Previous Page | 500 501 502 503 504 505 506 507 508 509 510 511  | Next Page >