Search Results

Search found 62199 results on 2488 pages for 'first order logic'.


  • Webcast: Oracle Transportation Management Installation

    - by ChristineS
    Webcast: Oracle Transportation Management Installation
    Date: November 19, 2013 at 9:30 pm India Time (Mumbai, GMT+05:30), 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT.
    This one-hour session is recommended for technical users, system administrators, and DBAs who will be installing Oracle Transportation Management. The webcast walks through the steps to install WebLogic, the OTM installer and the OHS installer. The session will cover the following topics:
    - Review the required steps before performing them
    - Ask questions of a live OTM expert while going through the steps
    - Reduce the number of errors while installing
    - Reduce the need to log an SR during the installation process
    Details & registration: Doc ID 1591674.1 (direct registration link). If you have a suggestion for a future Advisor Webcast, please post in our Community Forum thread "What Order Management Advisor Webcast topics do YOU want to see presented?". Remember that you can access a full listing of all future webcasts, as well as replays, from Doc ID 740966.1.

    Read the article

  • EF4 Code First Control Unicode and Decimal Precision, Scale with Attributes

    - by Dane Morgridge
    There are several attributes available when using code first with the Entity Framework 4 CTP5 Code First option. When working with strings you can use [MaxLength(length)] to control the length, and [Required] will work on all properties. But there are a few things missing. By default all strings will be created as Unicode, so you will get nvarchar instead of varchar. You can change this using the fluent API, or you can create an attribute to make the change. If you have a lot of properties, the attribute will be much easier and require less code. You will need to add two classes to your project to create the attribute itself:

        public class UnicodeAttribute : Attribute
        {
            bool _isUnicode;

            public UnicodeAttribute(bool isUnicode)
            {
                _isUnicode = isUnicode;
            }

            public bool IsUnicode { get { return _isUnicode; } }
        }

        public class UnicodeAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, UnicodeAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, UnicodeAttribute attribute)
            {
                configuration.IsUnicode = attribute.IsUnicode;
            }
        }

    The UnicodeAttribute class gives you a [Unicode] attribute that you can use on your properties, and the UnicodeAttributeConvention tells EF how to handle the attribute. You will need to add a line to the OnModelCreating method inside your context for EF to recognize the attribute:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
            base.OnModelCreating(modelBuilder);
        }

    Once you have this done, you can use the attribute in your classes to make sure that you get database types of varchar instead of nvarchar:

        [Unicode(false)]
        public string Name { get; set; }

    Another option that is missing is the ability to set the precision and scale on a decimal. By default decimals get created as (18,0). If you need decimals to be something like (9,2) then you can once again use the fluent API or create a custom attribute. As with the Unicode attribute, you will need to add two classes to your project:

        public class DecimalPrecisionAttribute : Attribute
        {
            int _precision;
            private int _scale;

            public DecimalPrecisionAttribute(int precision, int scale)
            {
                _precision = precision;
                _scale = scale;
            }

            public int Precision { get { return _precision; } }
            public int Scale { get { return _scale; } }
        }

        public class DecimalPrecisionAttributeConvention : AttributeConfigurationConvention<PropertyInfo, DecimalPropertyConfiguration, DecimalPrecisionAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, DecimalPropertyConfiguration configuration, DecimalPrecisionAttribute attribute)
            {
                configuration.Precision = Convert.ToByte(attribute.Precision);
                configuration.Scale = Convert.ToByte(attribute.Scale);
            }
        }

    Add your line to OnModelCreating:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
            modelBuilder.Conventions.Add(new DecimalPrecisionAttributeConvention());
            base.OnModelCreating(modelBuilder);
        }

    Now you can use the following on your properties:

        [DecimalPrecision(9,2)]
        public decimal Cost { get; set; }

    Both these options use the same concepts, so if there are other attributes that you want to use, you can create them quite simply. The key to it all is the PropertyConfiguration classes. If there is a class for the datatype, then you should be able to write an attribute to set almost everything you need. You could also create a single attribute to encapsulate all of the possible string settings instead of having multiple attributes on each property (a rough sketch of that idea follows below). All in all, I am loving code first, and having attributes to control database generation instead of using the fluent API is huge and saves me a great deal of time.
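    As a rough illustration of that last idea, here is a minimal sketch of a single attribute that combines the string settings, following the same CTP5 convention pattern shown above. The StringOptions name is made up for this example, and whether StringPropertyConfiguration exposes a settable MaxLength in CTP5 is an assumption; treat it as a starting point rather than tested code.

        public class StringOptionsAttribute : Attribute
        {
            public StringOptionsAttribute(int maxLength, bool isUnicode)
            {
                MaxLength = maxLength;
                IsUnicode = isUnicode;
            }

            public int MaxLength { get; private set; }
            public bool IsUnicode { get; private set; }
        }

        public class StringOptionsAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, StringOptionsAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, StringOptionsAttribute attribute)
            {
                // IsUnicode is shown in the post above; a settable MaxLength on the
                // configuration object is an assumption for this sketch.
                configuration.IsUnicode = attribute.IsUnicode;
                configuration.MaxLength = attribute.MaxLength;
            }
        }

    Registered with modelBuilder.Conventions.Add(new StringOptionsAttributeConvention()), a property could then be declared as [StringOptions(50, false)] to get a varchar(50) column from a single attribute.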

    Read the article

  • First Impressions of a MacBook (from a PC guy)

    - by dgreen
    Disclaimer: I've been a PC guy my entire working career. I'd probably characterize myself as a power user, never afraid to bust out the console line. But working with a Mac is totally foreign to me. So for those Mac guys who are curious, this is how your world appears from the outside to a computer-literate person :)
    My MacBook Air has arrived! And it's a thing of beauty. First, the specs: 13" MacBook Air, 2.0GHz Core i7 processor, upgraded to 8GB of RAM for an additional $100, SSD flash storage = 256GB. The plan is ultimately to use this baby for some iOS development but also some decent lifting in Windows with Visual Studio. I've done a lot of reading, and between VMware Fusion, Parallels and Boot Camp, I'm going to go with VMware Fusion for $49.99.
    And now my impressions (please re-read the disclaimer before proceeding!):
    - I open the box and am trying to understand exactly how the MagSafe connector works (and how to disconnect it). Why does it have two socket outlet plugs? Who knows. I feel like Hansel in Zoolander: the files are "in" the computer.
    - Stuck in my external hard drive (USB). So how do I get to the files? To the Googles! Argh... it can't read my external NTFS drive, and FAT32 can't support files over 4GB, which is problematic since some of my existing VMware image files are much larger than 4GB. Didn't see this coming.
    - Three year old loves iPhoto. Super easy to use. Don't even know what I'm doing but I've already (accidentally) discovered the image filtering options. Fun stuff.
    - First thing I ever downloaded: Chrome. I need something to ground me, something familiar. My token, if you will (sorry, gratuitous Inception joke).
    - Ok, I get it... Finder == Windows Explorer. But where is my hierarchical structure? I miss the tree :(
    - On that note, yeah... how do I see what "path" my files reside in? I'm afraid to know the answer. You know what scares me more though... this notion of a smart folder. Feel like the Godfather: just get the job done, I don't care how you handle it, I don't want to know... just get it done.
    - What the hell is AirDrop?
    - Mail... just worked. Still in shock that they have a free client for Yahoo mail (please, no Yahoo jokes).
    - Mail: deleting a message takes 5 seconds. Have they heard of async?
    - "Command" key instead of "Control"... ok, then what the $%&^! is the Control key for?
    - "Aliases" == shortcuts, I think.
    - I don't see the file system. And I'm scared. All these things I'm downloading... these .dmg files (bad name): where are they going? Can't seem to delete them when they're done.
    - Ugh... realized I need to buy a mini-to-VGA adaptor if I want to use my external monitor ($13 on eBay, $39 in the Apple store).
    - Window docking is the trickiest for me... this notion of detached windows with a menu bar at the top. I don't like this paradigm, it's confusing. But maybe that's because I've been using Windows for too long.
    - Evernote and Dropbox desktop clients seem almost identical... a few quirks here and there I need to get used to.
    - iTunes is still a bit gross. In a weird way it's actually worse on a Mac, if that's possible. This is not the MacBook's fault... this is a software design issue.
    Overall: the UI will take some getting used to. Can't decide if this represents the future and I'm stuck in the past... or this is the past and I've been spoiled by the future (which would be Windows... don't be hating, I happen to be very productive in Win7). So there you go - my 90 minute first impression of the MacBook universe.

    Read the article

  • Sharing logic across different platforms

    - by Pranz
    Hello all, we have business logic that works with the OS file system, and we want to implement it on both Linux and Windows. The languages we have selected are Python for Linux and C# for Windows. A GUI is not a priority for now. We are looking for ways to abstract the business logic so that we don't have to repeat it (of course, I understand that since it is related to the file system, some code will differ from platform to platform). Any ideas on how to implement this? Is C/C++ the only option? We don't want to use Java. Thanks, Pranz

    Read the article

  • strange segmentation fault during function return

    - by Kyle
    I am running a program on 2 different machines. On one it works fine without issue. On the other it results in a segmentation fault. Through debugging, I have figured out where the fault occurs, but I can't figure out a logical reason for it to happen. In one function I have the following code: pass_particles(particle_grid, particle_properties, input_data, coll_eros_track, collision_number_part, world, grid_rank_lookup, grid_locations); cout<<"done passing particles"<<endl; The function pass_particles looks like: void pass_particles(map<int,map<int,Particle> > & particle_grid, std::vector<Particle_props> & particle_properties, User_input& input_data, data_tracking & coll_eros_track, vector<int> & collision_number_part, mpi::communicator & world, std::map<int,int> & grid_rank_lookup, map<int,std::vector<double> > & grid_locations) { //cout<<"east-west"<<endl; //east-west exchange (x direction) map<int, vector<Particle> > particles_to_be_sent_east; map<int, vector<Particle> > particles_to_be_sent_west; vector<Particle> particles_received_east; vector<Particle> particles_received_west; int counter_x_sent=0; int counter_x_received=0; for(grid_iter=particle_grid.begin();grid_iter!=particle_grid.end();grid_iter++) { map<int,Particle>::iterator part_iter; for (part_iter=grid_iter->second.begin();part_iter!=grid_iter->second.end();) { if (particle_properties[part_iter->second.global_part_num()].particle_in_box()[grid_iter->first]) { //decide if a particle has left the box...need to consider whether particle was already outside the box if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()>(grid_locations[grid_iter->first-input_data.z_numboxes()][0])) || (input_data.periodic_walls_x() && (grid_iter->first-floor(grid_iter->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes()) && (part_iter->second.position().x()>(grid_locations[input_data.total_boxes()-1][0])))) { particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second); particle_properties[particle_grid[grid_iter->first][part_iter->first].global_part_num()].particle_in_box()[grid_iter->first]=false; counter_sent++; counter_x_sent++; } else if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()<(grid_locations[grid_iter->first+input_data.z_numboxes()][1])) || (input_data.periodic_walls_x() && (grid_iter->first-floor(grid_iter->first/(input_data.xz_numboxes()))*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1) && (part_iter->second.position().x()<(grid_locations[0][1]))) { particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second); particle_properties[particle_grid[grid_iter->first][part_iter->first].global_part_num()].particle_in_box()[grid_iter->first]=false; counter_sent++; counter_x_sent++; } //select particles in overlap areas to send to neighboring cells else if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()<(grid_locations[grid_iter->first][0]+input_data.diam_large()))) { particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second); counter_sent++; counter_x_sent++; } else if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()>(grid_locations[grid_iter->first][1]-input_data.diam_large()))) { particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second); counter_sent++; counter_x_sent++; } ++part_iter; 
} else if (particles_received_current[grid_iter->first].find(part_iter->first)!=particles_received_current[grid_iter->first].end()) { if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()<(grid_locations[grid_iter->first][0]+input_data.diam_large()))) { particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second); counter_sent++; counter_x_sent++; } else if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()>(grid_locations[grid_iter->first][1]-input_data.diam_large()))) { particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second); counter_sent++; counter_x_sent++; } part_iter++; } else { particle_grid[grid_iter->first].erase(part_iter++); counter_removed++; } } } world.barrier(); mpi::request reqs_x_send[particles_to_be_sent_west.size()+particles_to_be_sent_east.size()]; vector<multimap<int,int> > box_sent_x_info; box_sent_x_info.resize(world.size()); vector<multimap<int,int> > box_received_x_info; box_received_x_info.resize(world.size()); int counter_x_reqs=0; //send particles for(grid_iter_vec=particles_to_be_sent_west.begin();grid_iter_vec!=particles_to_be_sent_west.end();grid_iter_vec++) { if (grid_iter_vec->second.size()!=0) { //send a particle. 50 will be "west" tag if (input_data.periodic_walls_x() && (grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes())) { reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1)], grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1), particles_to_be_sent_west[grid_iter_vec->first]); box_sent_x_info[grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1)]].insert(pair<int,int>(world.rank(), grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1))); } else if (!(grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes())) { reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()], grid_iter_vec->first - input_data.z_numboxes(), particles_to_be_sent_west[grid_iter_vec->first]); box_sent_x_info[grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()]].insert(pair<int,int>(world.rank(),grid_iter_vec->first - input_data.z_numboxes())); } } } for(grid_iter_vec=particles_to_be_sent_east.begin();grid_iter_vec!=particles_to_be_sent_east.end();grid_iter_vec++) { if (grid_iter_vec->second.size()!=0) { //send a particle. 
60 will be "east" tag if (input_data.periodic_walls_x() && (grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes())*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1)) { reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)], 2000000000-(grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)), particles_to_be_sent_east[grid_iter_vec->first]); box_sent_x_info[grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)]].insert(pair<int,int>(world.rank(),2000000000-(grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)))); } else if (!(grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes())*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1)) { reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()], 2000000000-(grid_iter_vec->first + input_data.z_numboxes()), particles_to_be_sent_east[grid_iter_vec->first]); box_sent_x_info[grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()]].insert(pair<int,int>(world.rank(), 2000000000-(grid_iter_vec->first + input_data.z_numboxes()))); } } } counter=0; for (int i=0;i<world.size();i++) { //if (world.rank()!=i) //{ reqs[counter++]=world.isend(i,1000000000,box_sent_x_info[i]); reqs[counter++]=world.irecv(i,1000000000,box_received_x_info[i]); //} } mpi::wait_all(reqs, reqs + world.size()*2); //receive particles //receive west particles for (int j=0;j<world.size();j++) { multimap<int,int>::iterator received_info_iter; for (received_info_iter=box_received_x_info[j].begin();received_info_iter!=box_received_x_info[j].end();received_info_iter++) { //receive the message if (received_info_iter->second<1000000000) { //receive the message world.recv(received_info_iter->first,received_info_iter->second,particles_received_west); //loop through all the received particles and add them to the particle_grid for this processor for (unsigned int i=0;i<particles_received_west.size();i++) { particle_grid[received_info_iter->second].insert(pair<int,Particle>(particles_received_west[i].global_part_num(),particles_received_west[i])); if(particles_received_west[i].position().x()>grid_locations[received_info_iter->second][0] && particles_received_west[i].position().x()<grid_locations[received_info_iter->second][1]) { particle_properties[particles_received_west[i].global_part_num()].particle_in_box()[received_info_iter->second]=true; } counter_received++; counter_x_received++; } } else { //receive the message world.recv(received_info_iter->first,received_info_iter->second,particles_received_east); //loop through all the received particles and add them to the particle_grid for this processor for (unsigned int i=0;i<particles_received_east.size();i++) { particle_grid[2000000000-received_info_iter->second].insert(pair<int,Particle>(particles_received_east[i].global_part_num(),particles_received_east[i])); if(particles_received_east[i].position().x()>grid_locations[2000000000-received_info_iter->second][0] && particles_received_east[i].position().x()<grid_locations[2000000000-received_info_iter->second][1]) { particle_properties[particles_received_east[i].global_part_num()].particle_in_box()[2000000000-received_info_iter->second]=true; } counter_received++; counter_x_received++; } } } } mpi::wait_all(reqs_y_send, reqs_y_send + 
particles_to_be_sent_bottom.size()+particles_to_be_sent_top.size()); mpi::wait_all(reqs_z_send, reqs_z_send + particles_to_be_sent_south.size()+particles_to_be_sent_north.size()); mpi::wait_all(reqs_x_send, reqs_x_send + particles_to_be_sent_west.size()+particles_to_be_sent_east.size()); cout<<"x sent "<<counter_x_sent<<" and received "<<counter_x_received<<" from rank "<<world.rank()<<endl; cout<<"rank "<<world.rank()<<" sent "<<counter_sent<<" and received "<<counter_received<<" and removed "<<counter_removed<<endl; cout<<"done passing"<<endl; } I only posted some of the code (so ignore the fact that some variables may appear to be undefined, as they are in a portion of the code I didn't post) When I run the code (on the machine in which it fails), I get done passing but not done passing particles I am lost as to what could possibly cause a segmentation fault between the end of the called function and the next line in the calling function and why it would happen on one machine and not another.

    Read the article

  • Using Entity Framework 4.0 with Code-First and POCO: How to Get Parent Object with All its Children

    - by SirEel
    I'm new to EF 4.0, so maybe this is an easy question. I've got VS2010 RC and the latest EF CTP. I'm trying to implement the "Foreign Keys" code-first example on the EF Team's Design Blog, http://blogs.msdn.com/efdesign/archive/2009/10/12/code-only-further-enhancements.aspx.

        public class Customer
        {
            public int Id { get; set; }
            public string CustomerDescription { get; set; }
            public IList<PurchaseOrder> PurchaseOrders { get; set; }
        }

        public class PurchaseOrder
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }
            public Customer Customer { get; set; }
            public DateTime DateReceived { get; set; }
        }

        public class MyContext : ObjectContext
        {
            public MyContext(EntityConnection connection) : base(connection) { }

            public IObjectSet<Customer> Customers
            {
                get { return base.CreateObjectSet<Customer>(); }
            }
        }

    I use a ContextBuilder to configure MyContext:

        var builder = new ContextBuilder<MyContext>();

        var customerConfig = builder.Entity<Customer>();
        customerConfig.Property(c => c.Id).IsIdentity();

        var poConfig = builder.Entity<PurchaseOrder>();
        poConfig.Property(po => po.Id).IsIdentity();
        poConfig.Relationship(po => po.Customer)
            .FromProperty(c => c.PurchaseOrders)
            .HasConstraint((po, c) => po.CustomerId == c.Id);
        ...

    This works correctly when I'm adding new Customers, but not when I try to retrieve existing Customers. This code successfully saves a new Customer and all its child PurchaseOrders:

        using (var context = builder.Create(connection))
        {
            context.Customers.AddObject(customer);
            context.SaveChanges();
        }

    But this code only retrieves Customer objects; their PurchaseOrders lists are always empty:

        using (var context = _builder.Create(_conn))
        {
            var customers = context.Customers.ToList();
        }

    What else do I need to do to the ContextBuilder to make MyContext always retrieve all the PurchaseOrders with each Customer?

    Read the article

  • First Stable Version of Opera 15 has been Released

    - by Akemi Iwaya
    Opera has just released the first stable version of their revamped browser and will be proceeding at a rapid pace going forward. There is also news concerning the three development streams they will maintain along with news of an update for the older 12.x series for those who are not ready to update to 15.x just yet. The day is full of good news for Opera users whether they have already switched to the new Blink/Webkit Engine version or are still using the older Presto Engine version. First, news of the new development streams… Opera has released details outlining their three new release streams: Opera (Stable) – Released every couple of weeks, this is the most solid version, ready for mission-critical daily use. Opera Next – Updated more frequently than Stable, this is the feature-complete candidate for the Stable version. While it should be ready for daily use, you can expect some bugs there. Opera Developer – A bleeding edge version, you can expect a lot of fancy stuff there; however, some nasty bugs might also appear from time to time. From the Opera Desktop Team blog post: When you install Opera from a particular stream, your installation will stick to it, so Opera Stable will be always updated to Opera Stable, Opera Next to Opera Next and so on. You can choose for yourself which stream is the best for you. You can even follow a couple of them at the same time! Of particular interest is the announcement of continued development for the 12.x series. A new version (12.16) is due to be released soon to help keep the older series up to date and secure while the transition process from 12.x to 15.x continues.    

    Read the article

  • New Trusted Status awarded to first Mobile Java Developer

    - by Jacob Lehrbaum
    Java Verified has just announced that Gameloft is the first developer to receive its new Trusted Status! Java Verified is an industry-recognized Java testing and signing program backed and funded by companies such as AT&T, LG, Motorola, Nokia, Oracle, Orange, Samsung and Vodafone, and chartered with making it easier for mobile developers to certify and deploy applications for use across the billions of mobile handsets that run Java ME. Because of its breadth and diversity, Java ME provides an unmatched opportunity to reach more than 3 billion consumers, but at the same time developers are faced with the challenge of working with multiple distribution channels and a range of handsets. To this end, the Java Verified program provides a suite of tests that help to validate identity, functionality, integrity, and quality. Since its rebirth in 2010 as an independent organization, the Java Verified program has been actively working to make it even easier to create and distribute Java ME apps. Example initiatives include updates to the Unified Testing Criteria to make it easier to test "Simple Apps," community outreach to better understand and address developer pain points, and a new "Trusted Status." In the words of the Java Verified program, Trusted Status is:
    "a privileged status to be granted to developers who will have proven that the quality of their Java ME apps is of a consistently high standard. These are developers who will have earned the trust of Java Verified by demonstrating unfailingly that testing to the UTC standard is a crucial part of their product development activity."
    The first developer to be awarded this status is Gameloft. By achieving Trusted Status, Gameloft can now test their applications to the Java Verified standard without needing to provide Java Verified with the evidence. The apps then automatically get signed with the Java Verified signature, enabling Gameloft to benefit from reduced costs and time-to-market for their new Java ME applications from here on out. Learn more about the exciting news or apply now for Trusted Status!

    Read the article

  • Oracle OpenWorld 2013: First glimpses of the new SOA Suite 12c by Lucas Jellema

    - by JuergenKress
    During this week's Oracle OpenWorld Conference, we were given some sneak peeks into the short-term future of the Oracle SOA Suite. During various roadmap sessions, on the demo grounds, as well as in the keynote session by Thomas Kurian (the replay of which you can see here), new features were described and demonstrated, allowing us to get a fairly good overview of what is going to come for SOA Suite - later in 2013 and sometime in 2014 (probably the first half of that year). The SOA Suite plays an important part in the three themes Thomas Kurian set down for the Fusion Middleware suite of products: support for mobility, cloud and business user empowerment. Some of the highlighted new aspects of Oracle SOA Suite are:
    - Adapters to connect from on-premise to in-the-cloud - specifically targeting SalesForce and RightNow, and also providing an SDK to create custom integrations into the cloud (the first cloud adapters will be released on 11g, before the end of the year)
    - Mobile enablement by exposing RESTful services that communicate using JSON, as well as adding the capability to call out to such services (12c functionality)
    - Enhanced functionality on Exalogic (of course it runs faster on Exalogic, up to 20 times)
    - Modular runtime with a lighter footprint
    A brief demonstration of the Cloud Adapter was given by Demed L'Her during said keynote. One screenshot in the article shows the Adapter wizard for the Cloud Adapter, which allows the developer to pick a specific operation for a specific business object exposed by RightNow or SalesForce (the adapter knows about the APIs exposed by RightNow and SalesForce). Another screenshot shows the adapter that is used in SOA Suite 12c to expose a RESTful service on top of an SCA Composite or a Service Bus service. Read the full article here.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • First JSRs Proposed for Java EE 7

    - by Jacob Lehrbaum
    With the approval of the Java SE 7 and Java SE 8 JSRs last month, attention is now shifting towards the Java EE platform. While functionality pegged for Java EE 7 was previewed at least as early as Devoxx, the filing of these JSRs brings the first officially proposed specifications for the next generation of the popular application server standard. Let's take a quick look at the proposed new functionality.
    Java Persistence API 2.1
    The first of the new proposed specifications is JSR 338: Java Persistence API (JPA) 2.1. JPA is designed for use with both Java EE and Java SE and: "deals with the way relational data is mapped to Java objects ("persistent entities"), the way that these objects are stored in a relational database so that they can be accessed at a later time, and the continued existence of an entity's state even after the application that uses it ends. In addition to simplifying the entity persistence model, the Java Persistence API standardizes object-relational mapping." (more about JPA)
    JAX-RS 2.0
    The second of the new Java specifications that have been proposed is JSR 339, otherwise known as JAX-RS 2.0. JAX-RS provides an API that enables the easy creation of web services using the Representational State Transfer (REST) architecture. Key features proposed in the new JSR include a Client API, improved support for URIs, a Model-View-Controller architecture and much more!
    More information:
    - Official proposal for Java Persistence 2.1 (jcp.org)
    - Official proposal for JAX-RS 2.0 (jcp.org)
    - Kicking off Java EE 7 with 2 JSRs: JAX-RS 2.0 / JPA 2.1 (the Aquarium)

    Read the article

  • Almost at our first year anniversary!

    - by Vizioz Limited
    It has been a hectic first year at Vizioz and things are still going from strength to strength. 11 months ago I started Vizioz with zero capital investment in the middle of a recession, which to some may seem a daunting prospect but to others including myself it was the challenge I needed to make me want to get up in the morning :) I wanted to prove that even in the curent financial climate it is still possible to start a new business.We are still experiencing the normal growing pains of a small business but this is something we just need to work our way through, it is amazing how much paperwork and administration there is running a small business, office admin, insurance, vat and for the last few months PAYE.For the last 9 months we have shared an office with another small business called Little Big Ideas. They are a design agency working across a broad spectrum of design from branding, print and digital. Last month we decided to move offices to a larger office and now have room for 8 of us, so now we need a couple more clients to help produce enough work to fill the space and grow to the next level.As well as moving office 2 months ago I blogged about my first employee Colin starting work for me, he has picked up Umbraco very well and has mastered the art of good CSS design, as the majority of our clients are large multi-nationals they still require support for IE6 which as all web developers know is the nightmare of all web browsers.This month has seen the next step in the growth of Vizioz as I have taken on another PhD graduate called Pricilla, welcome to the team!This month we plan to launch our own website to enable us to showcase some of the sites we have built over the past 11 months and to allow potential clients to see what we can offer. We might still be relatively small but we have some great case studies to show and with two PhD graduates on the team we have great talent capable of producing complex and innovative solutions for our clients. As soon as we have launched out new website I will blog again about what the future holds for Vizioz and what we can offer our prospective clients as well as e obvious Umbraco CMS solutions.

    Read the article

  • EF4 CPT5 Code First Remove Cascading Deletes

    - by Dane Morgridge
    I have been using EF4 CTP5 with code first and I really like the new code. One issue I was having, however, is that cascading deletes are on by default. This may come as a surprise, as when using Entity Framework with anything other than code first, this is not the case. I ran into an exception with some one-to-many relationships I had:

        Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors.

    To get around this, you can use the fluent API and put some code in OnModelCreating:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<UserProfile>()
                .HasMany(u => u.ProjectAuthorizations)
                .WithRequired(a => a.UserProfile)
                .WillCascadeOnDelete(false);
        }

    This will work to remove the cascading delete, but I have to use the fluent API and it has to be done for every one-to-many relationship that causes the problem. I am personally not a fan of cascading deletes in general (for several reasons) and I'm not a huge fan of fluent APIs. However, there is a way to do this without using the fluent API: in OnModelCreating, you can remove the convention that creates the cascading deletes altogether.

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        }

    Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether. There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do this in the near future.

    Read the article

  • First Minecraft mod not working: make a new sword

    - by yamikoWebs
    I am making my first mod and cannot see what is wrong with it. I am using MCP and ModLoader. For my first mod I was going to make swords. I started by adding new entries to EnumToolMaterial:

        WOOD(0, 59, 2.0F, 0, 15), STONE(1, 131, 4.0F, 1, 5), IRON(2, 250, 6.0F, 2, 14),
        LAPIS(3, 750, 7.0F, 2, 14), OBSIDIAN(3, 1000, 7.5F, 3, 12),
        EMERALD(3, 1561, 8.0F, 3, 10), // diamond
        GREEN(3, 2000, 9.0F, 4, 10),   // emerald
        GOLD(0, 200, 12.0F, 0, 22);

    Then here is the mod class:

        public class _Mod_Yamiko extends BaseMod {
            /* mod items */
            public static final Item swordLapis = (new ItemSword(600, EnumToolMaterial.LAPIS)).setItemName("swordLapis");
            public static final Item swordObsidian = (new ItemSword(601, EnumToolMaterial.OBSIDIAN)).setItemName("swordObsidian");
            public static final Item swordGreen = (new ItemSword(602, EnumToolMaterial.GREEN)).setItemName("swordGreen");

            public void load() {
                // set images
                swordLapis.iconIndex = ModLoader.addOverride("/gui/items.png", "/gui/swordLapis.png");
                ModLoader.addName(swordLapis, "Lapis Sword");
                // craft
                ModLoader.addRecipe(new ItemStack(_Mod_Yamiko.swordLapis, 1), new Object[]{
                    " X ", " X ", " Y ", 'X', Block.dirt, 'Y', Item.stick
                });
            }

            public String getVersion() {
                return "0.1";
            }
        }

    Then I made a 16×16 .png image. I am not sure where to save it, so I recompiled and reobfuscated, took the mod files and put them in my local Minecraft install, and added the image where it should be. There are no problems when playing, but I cannot make the new sword.

    Read the article

  • Woman Is the World's First Computer Programmer? [closed]

    - by Sveta Bondarenko
    This week, on 10th December, we celebrate the 197th anniversary of the birth of Ada Lovelace, often considered the world's first computer programmer. Ada became famous not only as the daughter of the romantic poet Lord Byron but also as an outstanding 19th century mathematician. Her work on the Analytical Engine is recognized as containing the first algorithm intended to be processed by a machine. Women have always played a crucial role in the evolution of computer science, but unfortunately they are often considered to be not as good at programming and engineering as men. Even though the fair sex makes up a growing portion of computer and Internet users, there is still a large gender gap in the field of computer science. But all is not lost! According to one study, women's enrollment in computer science rose from 7 percent in 1995 to 42 percent in 2000, and it is still increasing. Soon women will take a well-deserved position among the world's top computer programmers. After all, a number of notable female computer pioneers such as Ada Lovelace, Grace Hopper, and Anita Borg have proven that women make great computer scientists. But will women make great contributions to the modern technology industry? Or is a successful and famous female computer programmer just a pipe dream?

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set, and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high level language to help me out.
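    To make the "text analysis" point concrete, here is a tiny sketch in C# (chosen arbitrarily; the book leaves the language up to you) that translates a Hack A-instruction such as @5 into its 16-bit machine word. It handles only this one case and ignores symbols, C-instructions and error handling, so it is an illustration of the idea rather than a working assembler:

        using System;

        class HackAssemblerSketch
        {
            // A Hack A-instruction "@value" becomes a 16-bit word: a leading 0
            // followed by the value in 15 bits, e.g. "@5" -> "0000000000000101".
            static string TranslateAInstruction(string line)
            {
                int value = int.Parse(line.TrimStart('@'));
                return Convert.ToString(value, 2).PadLeft(16, '0');
            }

            static void Main()
            {
                Console.WriteLine(TranslateAInstruction("@5"));   // 0000000000000101
                Console.WriteLine(TranslateAInstruction("@100")); // 0000000001100100
            }
        }

    Nothing in this translation depends on running on the Hack platform itself, which is why an assembler for a brand new architecture can comfortably be written in any existing high level language.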

    Read the article

  • First ATMs programming language

    - by revo
    The first ATMs performed tasks like cash dispensing; they were offline machines which worked with punch cards impregnated with carbon and a 6-digit PIN code. The maximum withdrawal with a card was 10 pounds, and each card was single-use - the ATM swallowed cards! The first ATM was installed in London in the year 1967, and as I looked at the timeline of programming languages, there were many programming languages created before that decade. I don't know about the hardware either, but which programming language was it written in? (I didn't find a detailed biography of John Shepherd-Barron, the ATM inventor, from the 70s.)
    Update: I found this picture, which is taken from a newspaper from the year 1972 in Iran. Translated PS: It shows Mr. Rad-lon (if spelled correctly), the manager of the Barros (if spelled correctly) International Educational Institute in the United Kingdom, at the right, and Mr. Jim Sutherland - expert on computer kiosks. In the rest of the text I found on this paper, these kinds of ATMs, which were called "Automated Computer Kiosks", were advertised with this: Mr. Rad-lon (if spelled correctly) puts his card into one specific location of the Automated Computer Kiosk and after 10 seconds he withdraws his cash. Two more questions are: 1- How were those ATMs so fast (withdrawal in 10 seconds in that year)? 2- I didn't find any text on the Internet that mentions "Automated Computer Kiosk". Is the term valid, or were they simply called "computers" at that time?

    Read the article

  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5GB), and after 20 years of (mostly) happy times using VS they have removed macros, one of the most handy features.
    The first thing I wanted to do when I upgraded my VS2010 project was to add a #elseif block to each file. This would usually be a simple case of Find in Files for the previous #elseif and then Ctrl+Shift+R to record a macro, which would be: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V to paste the new code. Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I use macros every day, but it was a very useful feature.
    To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now that I have used it myself, I find it difficult to tell one grey area of screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters.
    On the plus side, they have now added the ability to see WPF properties in the debugger... a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code.
    Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart device OS?), it does not float my boat. Rant over.

    Read the article

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about states and positions of objects in a 3D world. The player can control his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything, but one thing seems to be missing in every article I read. Let's say we have an interpolation delay of 100ms, server tickrate = 50ms, latency = 200ms. How do I know when 100ms has passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, therefore assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quicker, for instance at t=150? I would then be starting the client at t=150, and at t=250 it will think 100ms has passed since it connected to the server when in fact only 50ms has passed. Hopefully the above paragraph is understandable. The summarized question would be: how do I know at what tick to start simulating the client?
    EDIT: This is how I ended up doing it: the client keeps a clock (approximately) in sync with the server. The client then simulates the world at simulationTime = syncedTime - avg(RTT)/2 - interpolationTime. The round-trip time can fluctuate, so I average it out over time. By only keeping the most recent values when calculating the average, I hope to adapt to more permanent changes in latency (a rough sketch of this calculation is shown below). It's still too early to draw any conclusions. I'm currently simulating bad network connections, but it's looking good so far. Does anyone see any possible problems?
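    A minimal sketch of that bookkeeping, written in C# purely for illustration (the class and member names are made up; the game loop, the transport layer and the way syncedTime is obtained are all assumed):

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical helper: keeps a rolling average of recent RTT samples and
        // derives the client's simulation time from the synced server clock.
        public class SimulationClock
        {
            private readonly Queue<double> _rttSamples = new Queue<double>();
            private const int MaxSamples = 20;            // only the most recent samples count
            private const double InterpolationTime = 0.1; // 100 ms interpolation delay

            public void AddRttSample(double rttSeconds)
            {
                _rttSamples.Enqueue(rttSeconds);
                while (_rttSamples.Count > MaxSamples)
                    _rttSamples.Dequeue();                // drop old samples so latency changes are picked up
            }

            public double AverageRtt
            {
                get { return _rttSamples.Count == 0 ? 0.0 : _rttSamples.Average(); }
            }

            // syncedTime is the client's estimate of the server clock (obtained elsewhere).
            public double GetSimulationTime(double syncedTime)
            {
                return syncedTime - AverageRtt / 2.0 - InterpolationTime;
            }
        }

    Each time a snapshot arrives, the client would call AddRttSample with the measured round trip and simulate/render at GetSimulationTime(syncedTime), which matches the formula in the EDIT above.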

    Read the article

  • SQL Developer Quick Tip: Reordering Columns

    - by thatjeffsmith
    Do you find yourself always scrolling and scrolling and scrolling to get to the column you want to see when looking at a table or view's data? Don't do that! Instead, just right-click on the column headers, select 'Columns', and reorder as desired: access the Manage Columns dialog, then move up the columns you want to see first and put them in the order you want - it won't affect the database. Now I see the data I want to see, when I want to see it - no scrolling. This will only change how the data is displayed for you, and SQL Developer will remember this ordering until you 'Delete Persisted Settings…'.
    What IS Remembered Via These 'Persisted Settings?'
    - Column widths
    - Column sorts
    - Column positions
    - Find/highlights
    This means if you manipulate one of these settings, SQL Developer will remember them the next time you open the tool and go to that table or view. Don't know what I mean by 'Find/Highlight'? Find and highlight values in a grid with Ctrl+F.

    Read the article

  • ASP.NET MVC CRUD Validation

    - by Ricardo Peres
    One thing I didn't cover in my previous post on ASP.NET MVC CRUD with AJAX was how to return model validation information to the client. We want to send any model validation errors to the client in the JSON object that contains the ProductId, RowVersion and Success properties; specifically, if there are any errors, we will add an extra Errors collection property. Here's how:

        [HttpPost]
        [AjaxOnly]
        [Authorize]
        public JsonResult Edit(Product product)
        {
            if (this.ModelState.IsValid == true)
            {
                using (ProductContext ctx = new ProductContext())
                {
                    Boolean success = false;

                    ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

                    try
                    {
                        success = (ctx.SaveChanges() == 1);
                    }
                    catch (DbUpdateConcurrencyException)
                    {
                        ctx.Entry(product).Reload();
                    }

                    return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
                }
            }
            else
            {
                Dictionary<String, String> errors = new Dictionary<String, String>();

                foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState)
                {
                    String key = keyValue.Key;
                    ModelState modelState = keyValue.Value;

                    foreach (ModelError error in modelState.Errors)
                    {
                        errors[key] = error.ErrorMessage;
                    }
                }

                return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty, Errors = errors }));
            }
        }

    As for the view, we need to change the onSuccess JavaScript handler on the Single view slightly:

        function onSuccess(ctx)
        {
            if (typeof (ctx.Success) != 'undefined')
            {
                $('input#ProductId').val(ctx.ProductId);
                $('input#RowVersion').val(ctx.RowVersion);

                if (ctx.Success == false)
                {
                    var errors = '';

                    if (typeof (ctx.Errors) != 'undefined')
                    {
                        for (var key in ctx.Errors)
                        {
                            errors += key + ': ' + ctx.Errors[key] + '\n';
                        }

                        window.alert('An error occurred while updating the entity: the model contained the following errors.\n\n' + errors);
                    }
                    else
                    {
                        window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                    }
                }
                else
                {
                    window.alert('Saved successfully');
                }
            }
            else
            {
                if (window.confirm('Not logged in. Login now?') == true)
                {
                    document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
                }
            }
        }

    The logic is as follows:
    - If the Edit action method is called for a new entity (the ProductId is 0) and it is valid, the entity is saved, and the JSON result contains a Success flag set to true, a ProductId property with the database-generated primary key and a RowVersion with the server-generated ROWVERSION;
    - If the model is not valid, the JSON result will contain the Success flag set to false and the Errors collection populated with all the model validation errors;
    - If the entity already exists in the database (ProductId not 0) and the model is valid, but the stored ROWVERSION is different from the one on the view, the result will set the Success property to false and will return the current (as loaded from the database) value of the ROWVERSION in the RowVersion property.
    In a future post I will talk about the possibilities that exist for performing model validation, so stay tuned!

    Read the article

  • NDepend 4 – First Steps

    - by Ricardo Peres
    Introduction Thanks to Patrick Smacchia I had the chance to test NDepend 4. I can only say: awesome! This will be the first of a series of posts on NDepend, where I will talk about my discoveries. Keep in mind that I am just starting to use it, so more experienced users may find these too basic, I just hope I don’t say anything foolish! I must say that I am in no way affiliated with NDepend and I never actually met Patrick. Installation No installation program – a curious decision, I’m not against it -, just unzip the files to a folder and run the executable. It will optionally register itself with Visual Studio 2008, 2010 and 11 as well as RedGate’s Reflector; also, it automatically looks for updates. NDepend can either be used as a stand-alone program (with or without a GUI) or from within Visual Studio or Reflector. Getting Started One thing that really pleases me is the Getting Started section of the stand-alone, with links to pages on NDepend’s web site, featuring detailed explanations, which usually include screenshots and small videos (<5 minutes). There’s also an How do I with hierarchical navigation that guides us to through the major features so that we can easily find what we want. Usage There are two basic ways to use NDepend: Analyze .NET solutions, projects or assemblies; Compare two versions of the same assembly. I have so far not used NDepend to compare assemblies, so I will first talk about the first option. After selecting a solution and some of its projects, it generates a single HTML page with an highly detailed report of the analysis it produced. This includes some metrics such as number of lines of code, IL instructions, comments, types, methods and properties, the calculation of the cyclomatic complexity, coupling and lots of others indicators, typically grouped by type, namespace and assembly. The HTML also includes some nice diagrams depicting assembly dependencies, type and method relative proportions (according to the number of IL instructions, I guess) and assembly analysis relating to abstractness and stability. Useful, I would say. Then there’s the rules; NDepend tests the target assemblies against a set of more than 120 rules, grouped in categories Code Quality, Object Oriented Design, Design, Architecture and Layering, Dead Code, Visibility, Naming Conventions, Source Files Organization and .NET Framework Usage. The full list can be configured on the application, and an explanation of each rule can be found on the web site. Rules can be validated, violated and violated in a critical manner, and the HTML will contain the violated rules, their queries – more on this later - and results. The HTML uses some nice JavaScript effects, which allow paging and sorting of tables, so its nice to use. Similar to the rules, there are some queries that display results for a number (about 200) questions grouped as Object Oriented Design, API Breaking Changes (for assembly version comparison), Code Diff Summary (also for version comparison) and Dead Code. The difference between queries and rules is that queries are not classified as passes, violated or critically violated, just present results. The queries and rules are expressed through CQLinq, which is a very powerful LINQ derivative specific to code analysis. All of the included rules and queries can be enabled or disabled and new ones can be added, with intellisense to help. 
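    To give a feel for what adding a rule looks like, here is a small CQLinq example of the kind described above - a sketch only, written from the documented query model; the exact set of available properties should be checked against the CQLinq references listed at the end of this post:

        // Flag methods that are too complex - expressed as a CQLinq rule.
        warnif count > 0
        from m in JustMyCode.Methods
        where m.CyclomaticComplexity > 20
        orderby m.CyclomaticComplexity descending
        select new { m, m.CyclomaticComplexity, m.NbLinesOfCode }

    A query like this shows up in the report with the methods it matches, and because it is a plain LINQ-style expression it is easy to tweak with the built-in intellisense.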
Besides the HTML report file, the NDepend application can be used to explore all analysis results, compare different versions of analysis reports and to run custom queries. Comparison to Other Analysis Tools Unlike StyleCop, NDepend only works with assemblies, not source code, so you can’t expect it to be able to enforce brackets placement, for example. It is more similar to FxCop, but you don’t have the option to analyze at the IL level, that is, other that the number of IL instructions and the complexity. What’s Next In the next days I’ll continue my exploration with a real-life test case. References The NDepend web site is http://www.ndepend.com/. Patrick keeps an updated blog on http://codebetter.com/patricksmacchia/ and he regularly monitors StackOverflow for questions tagged NDepend, which you can find on http://stackoverflow.com/questions/tagged/ndepend. The default list of CQLinq rules, queries and statistics can be found at http://www.ndepend.com/DefaultRules/webframe.html. The syntax itself is described at http://www.ndepend.com/Doc_CQLinq_Syntax.aspx and its features at http://www.ndepend.com/Doc_CQLinq_Features.aspx.

    Read the article

  • SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

    - by Brian
    Oracle's SPARC T4-4 servers running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record 18,000 concurrent users while executing a PeopleSoft Payroll batch job of 500,000 employees in 43.32 minutes and maintaining online user response times at < 2 seconds. This world record is the first to run online and batch workloads concurrently. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 35% (online and batch) on the SPARC T4-4 server in the database tier, leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices. This is the first three-tier mixed workload (online and batch) PeopleSoft benchmark also processing a PeopleSoft payroll batch workload.
    Performance Landscape - PeopleSoft HR Self-Service and Payroll Benchmark
    Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-2 (db)
    Users: 18,000
    Ave Response Search (sec): 0.944
    Ave Response Save (sec): 0.503
    Batch Time (min): 43.32
    Streams: 64
    Configuration Summary
    Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 512 GB memory; 5 x 300 GB SAS internal disks; 1 x 100 GB and 2 x 300 GB internal SSDs; 2 x 10 GbE HBA; Oracle Solaris 11 11/11; PeopleTools 8.52; PeopleSoft HCM 9.1; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Java Platform, Standard Edition Development Kit 6 Update 32
    Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 256 GB memory; 3 x 300 GB SAS internal disks; Oracle Solaris 11 11/11; Oracle Database 11g Release 2
    Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 2 x 300 GB SAS internal disks; 1 x 100 GB internal SSD; Oracle Solaris 11 11/11; PeopleTools 8.52; Oracle WebLogic Server 10.3.4; Java Platform, Standard Edition Development Kit 6 Update 32
    Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory); 1 x Sun Storage F5100 Flash Array (80 flash modules); 1 x Sun Storage F5100 Flash Array (40 flash modules); 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller)
    Benchmark Description
    This benchmark combines PeopleSoft HCM 9.1 HR Self Service online and PeopleSoft Payroll batch workloads to run on a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It's an OLTP benchmark where the database SQLs are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers and HR administrators.
The benchmark consist of 14 scenarios which emulate users performing typical HCM transactions such as viewing paycheck, promoting and hiring employees, updating employee profile and other typical HCM application transactions. All these transactions are well-defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. This benchmark metric is the weighted average response search/save time for all the transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of a ERP environment during a mass update. The benchmark measures five application business process run times for a database representing large organization. They are Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state. Key Points and Best Practices Two Oracle PeopleSoft Domain sets with 200 application servers each on a SPARC T4-4 server were hosted in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (total 120 threads). The default set (1 core from first and third processor socket, total 16 threads) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency by using the physical memory closest to the processors and offload I/O interrupt handling to default set threads, freeing up cpu resources for Application Servers threads and balancing application workload across 240 threads. See Also Oracle PeopleSoft Benchmark White Papers oracle.com SPARC T4-2 Server oracle.com OTN SPARC T4-4 Server oracle.com OTN PeopleSoft Enterprise Human Capital Management oracle.com OTN PeopleSoft Enterprise Human Capital Management (Payroll) oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.

    Read the article

  • Embedded Prolog Interpreter/Compiler for Java

    - by Sami
    I'm working on an application in Java that needs to do some complex logic rule deductions as part of its functionality. I'd like to code my logic deductions in Prolog or some other logic/constraint programming language instead of Java, as I believe the resulting code will be significantly simpler and more maintainable. I Googled for embedded Prolog implementations for Java and found a number of them, each with very little documentation. My (modest) selection criteria are: it should be embeddable in Java (i.e. it can be bundled with my Java package rather than requiring any native installation of external programs); it should have a simple interface for use from Java (for initiating deductions, inspecting results, and adding rules); and it should come with at least a few examples of how to use it. It doesn't necessarily have to be Prolog; other logic/constraint programming languages that meet the above criteria would suit my needs too. What choices do I have, and what are their advantages and disadvantages?
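
    For orientation, here is a rough sketch of what embedding one of these engines can look like from the Java side. It assumes a tuProlog-style API (the alice.tuprolog package), one of the libraries such a search typically turns up; the class and method names shown are assumptions to be checked against whichever library you actually bundle.

    // Hedged sketch: tuProlog-style embedding; verify class and method names against the library docs.
    import alice.tuprolog.Prolog;
    import alice.tuprolog.SolveInfo;
    import alice.tuprolog.Theory;

    public class DeductionSketch {
        public static void main(String[] args) throws Exception {
            Prolog engine = new Prolog();

            // Rules and facts are plain Prolog text, loaded as a theory.
            engine.setTheory(new Theory(
                "parent(tom, bob).\n" +
                "parent(bob, ann).\n" +
                "grandparent(X, Z) :- parent(X, Y), parent(Y, Z).\n"));

            // Initiate a deduction from Java and inspect the result.
            SolveInfo result = engine.solve("grandparent(tom, Who).");
            if (result.isSuccess()) {
                System.out.println("Who = " + result.getVarValue("Who")); // expected: ann
            }
        }
    }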

    Read the article

  • Functional Methods on Collections

    - by GlenPeterson
    I'm learning Scala and am a little bewildered by all the methods (higher-order functions) available on the collections. Which ones produce more results than the original collection, which ones produce fewer, and which are most appropriate for a given problem? Though I'm studying Scala, I think this applies to most modern functional languages (Clojure, Haskell) and also to Java 8, which introduces these methods on Java collections. Specifically, right now I'm wondering about map with filter vs. fold/reduce. I was delighted to find that foldRight() can yield the same result as map(...).filter(...) with only one traversal of the underlying collection. But a friend pointed out that foldRight() may force sequential processing, while map() is friendlier to being processed by multiple processors in parallel. Maybe this is why mapReduce() is so popular? More generally, I'm still sometimes surprised when I chain several of these methods together and get back a List(List()), or pass a List(List()) and get back just a List(). For instance, when would I use collection.map(a => a.map(b => ...)) vs. collection.map(a => ...).map(b => ...)? The for/yield construct does nothing to help this confusion. Am I asking about the difference between a "fold" and an "unfold" operation? Am I trying to jam too many questions into one? I think there may be an underlying concept that, if I understood it, might answer all these questions, or at least tie the answers together.
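
    To make the map/filter vs. fold trade-off concrete, here is a minimal sketch using the Java 8 stream API that the question mentions (the same shapes exist in Scala). It is an illustration only; the sum-of-doubled-evens task is invented for the example.

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class MapFilterVsReduce {
        public static void main(String[] args) {
            List<Integer> xs = Arrays.asList(1, 2, 3, 4, 5);

            // map then filter: two composable stages, each easy to reason about
            // (and easy to run in parallel via xs.parallelStream()).
            List<Integer> doubledEvens = xs.stream()
                    .map(x -> x * 2)          // same number of elements out as in
                    .filter(x -> x % 4 == 0)  // possibly fewer elements out
                    .collect(Collectors.toList());

            // reduce/fold: a single accumulating pass that collapses the
            // collection to one value, but commits to an evaluation order.
            int sum = xs.stream()
                    .reduce(0, (acc, x) -> (x * 2) % 4 == 0 ? acc + x * 2 : acc);

            System.out.println(doubledEvens); // [4, 8]
            System.out.println(sum);          // 12
        }
    }

    The map/filter version keeps its intermediate results and stays trivially parallel; the fold collapses everything in one accumulating pass but fixes an order of accumulation, which is essentially the trade-off the friend pointed out.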

    Read the article

  • MySQL: order by and limit gives wrong result

    - by Larry K
    MySQL ver 5.1.26. I'm getting the wrong result with a select that has where, order by and limit clauses. It's only a problem when the order by uses the id column. I saw the MySQL manual for LIMIT Optimization. My guess from reading the manual is that there is some problem with the index on the primary key, id. But I don't know where I should go from here... Question: what should I do to best solve the problem?

    Works correctly:

    mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC;
    +------+---------------------+
    | id   | created_at          |
    +------+---------------------+
    | 1336 | 2010-05-14 08:05:25 |
    | 1334 | 2010-05-06 08:05:25 |
    | 1331 | 2010-05-05 23:18:11 |
    +------+---------------------+
    3 rows in set (0.00 sec)

    WRONG result when limit added! Should be the first row, id 1336:

    mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1;
    +------+---------------------+
    | id   | created_at          |
    +------+---------------------+
    | 1331 | 2010-05-05 23:18:11 |
    +------+---------------------+
    1 row in set (0.00 sec)

    Works correctly:

    mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC;
    +------+---------------------+
    | id   | created_at          |
    +------+---------------------+
    | 1336 | 2010-05-14 08:05:25 |
    | 1334 | 2010-05-06 08:05:25 |
    | 1331 | 2010-05-05 23:18:11 |
    +------+---------------------+
    3 rows in set (0.01 sec)

    Works correctly with limit:

    mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC limit 1;
    +------+---------------------+
    | id   | created_at          |
    +------+---------------------+
    | 1336 | 2010-05-14 08:05:25 |
    +------+---------------------+
    1 row in set (0.01 sec)

    Additional info (EXPLAIN output for the failing query, shown vertically):

    mysql> explain SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1;
               id: 1
      select_type: SIMPLE
            table: billing_invoices
             type: range
    possible_keys: index_billing_invoices_on_account_id
              key: index_billing_invoices_on_account_id
          key_len: 4
              ref: NULL
             rows: 3
            Extra: Using where

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >