Search Results

Search found 44940 results on 1798 pages for 'first rock'.


  • Get :first-letter of :hover element with CSS

    - by Rudie
    Is it possible to style the first letter of an element while that element is in 'hover mode'? This is how I think it should look, but it's not working in Chrome 10: a:hover:first-letter or a:first-letter:hover. Technically (imho) they're not the same: the first takes the first letter of the hovered element, while the second takes the entire element if the first letter is hovered. I need the first. As you can see on http://css4.hotblocks.nl (if you have a 1900px screen and a DOM inspector), if you uncomment the CSS, neither works. I want only the first letter of the element to turn red when the entire element is in :hover mode. Is it possible without additional HTML tags? Thanks. -- edit I've changed my online example for the better. The CSS is now divided into separate <style> blocks, which makes it easier to turn try-outs on and off. The conclusion - so far! - is this: in Firefox 3.6/4, a:first-letter:hover does nothing (good) and a:hover:first-letter works perfectly (good!). In Chrome 10, a:first-letter:hover does nothing (good) but a:hover:first-letter breaks the previous CSS 'statement'. (In my example it breaks nothing, because it's in a separate <style> block.) Which brings us to: once again Google Chrome lags behind Firefox =( --edit

    Read the article

  • Strategy to prevent players from seeing through walls in an online FPS?

    - by geneotech
    Why do we still moan about wallhackers in multiplayer first-person shooters? Isn't it possible to perform occlusion culling for all players server-side? For example, send player xyz's information to a client only when that player is inside the client's frustum and not occluded by any object. Even if the collision geometry used for the test is very simplified, most of the time a cheater won't receive the tactical information. Why not do this? (A rough sketch of the idea follows below.)
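
    A minimal C# sketch of that server-side filter (all type names, fields and thresholds here are hypothetical, not taken from any particular engine): before building a snapshot for a client, the server keeps only the players that pass a cheap view-cone test and a line-of-sight test against simplified collision geometry.

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    public sealed class Player
    {
        public Vector3 Position;
        public Vector3 Forward;          // normalized view direction
        public float CosHalfFov = 0.5f;  // roughly a 120-degree view cone
    }

    public sealed class VisibilityFilter
    {
        // Cheap raycast against simplified world geometry; supplied by the engine.
        private readonly Func<Vector3, Vector3, bool> _isOccluded;

        public VisibilityFilter(Func<Vector3, Vector3, bool> isOccluded)
        {
            _isOccluded = isOccluded;
        }

        // Yields only the players this viewer could plausibly see; everything
        // else is simply never serialized into the viewer's update packet.
        public IEnumerable<Player> VisibleTo(Player viewer, IEnumerable<Player> others)
        {
            foreach (var other in others)
            {
                if (ReferenceEquals(other, viewer)) continue;
                var toOther = Vector3.Normalize(other.Position - viewer.Position);
                if (Vector3.Dot(toOther, viewer.Forward) < viewer.CosHalfFov)
                    continue;                                   // outside the view cone
                if (_isOccluded(viewer.Position, other.Position))
                    continue;                                   // wall in the way
                yield return other;
            }
        }
    }

    One consequence worth noting: because the occlusion test runs per viewer, per player, per tick, the geometry it uses has to be much coarser than the render geometry, which is exactly the trade-off the question anticipates.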

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 5

    - by nikolaosk
    This is the fifth post in a series of posts on how to design and implement an ASP.Net 4.5 Web Forms store that sells posters online. There are 4 earlier posts in this series. Please make sure you read them first. You can find the first post here. You can find the second post here. You can find the third post here. You can find the fourth here.

    In this new post we will build on the previous posts and demonstrate how to display the details of a poster when the user clicks on an individual poster photo/link. We will add a FormView control on a web form and bind data from the database. FormView is a great web server control for displaying the details of a single record.

    1) Launch Visual Studio and open the solution where your project lives.

    2) Add a new web form item to the project. Make sure you include the Master Page. Name it PosterDetails.aspx.

    3) Open the PosterDetails.aspx page. We will add some markup to this page. Have a look at the code below:

    <asp:Content ID="Content2" ContentPlaceHolderID="FeaturedContent" runat="server">
        <asp:FormView ID="posterDetails" runat="server" ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosterDetails">
            <ItemTemplate>
                <div>
                    <h1><%#:Item.PosterName %></h1>
                </div>
                <br />
                <table>
                    <tr>
                        <td>
                            <img src="<%#:Item.PosterImgpath %>" border="1" alt="<%#:Item.PosterName %>" height="300" />
                        </td>
                        <td style="vertical-align: top">
                            <b>Description:</b><br /><%#:Item.PosterDescription %>
                            <br />
                            <span><b>Price:</b>&nbsp;<%#: String.Format("{0:c}", Item.PosterPrice) %></span>
                            <br />
                            <span><b>Poster Number:</b>&nbsp;<%#:Item.PosterID %></span>
                            <br />
                        </td>
                    </tr>
                </table>
            </ItemTemplate>
        </asp:FormView>
    </asp:Content>

    I set the ItemType property to the Poster entity class and the SelectMethod to the GetPosterDetails method. The Item binding expression is available and we can retrieve properties of the Poster object. I retrieve the name, the image, the description and the price of each poster.

    4) Now we need to write the GetPosterDetails method. In the code-behind of the PosterDetails.aspx page we type:

    public IQueryable<Poster> GetPosterDetails([QueryString("PosterID")] int? posterid)
    {
        PosterContext ctx = new PosterContext();
        IQueryable<Poster> query = ctx.Posters;
        if (posterid.HasValue && posterid > 0)
        {
            query = query.Where(p => p.PosterID == posterid);
        }
        else
        {
            query = null;
        }
        return query;
    }

    I bind the value from the query string to the posterid parameter at run time. This is all possible due to the QueryStringAttribute class that lives inside System.Web.ModelBinding and gets the value of the query string variable PosterID. If there is a matching poster, it is fetched from the database. If not, there is no data at all coming back from the database.

    5) Run the application and click on the "Midfielders" link. Then click on the first poster that appears from the left (Kenny Dalglish) to see the details. Have a look at the picture below to see the results.

    You can see that now I have all the details of the poster in a new page. Make sure you place breakpoints in the code so you can see what is really going on. Hope it helps!!!

    Read the article

  • Testing Workflows &ndash; Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx

    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one will focus on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.

    Ping Pong

    Ping Pong is a methodology commonly used in pair programming. One developer will write a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer.

    The reasoning behind this testing methodology is to facilitate pair programming. That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).

    Test Blazer

    Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task.

    The next day, or later in the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass they fetch the latest version of the test project from source control to get the newer tests.

    This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a separate project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution.

    This methodology is also a good test for the tests. Can another developer figure out what the system should do just by reading the tests? This question will be answered as the production coder works their way through the test blazer's tests.

    Test Driven Development (TDD)

    TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing test or production code. You start by writing a failing (red) test, then write the simplest production code possible to get the code working (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle; a minimal example of one turn of the cycle appears at the end of this post.

    The goal of TDD isn't the creation of a suite of tests, however that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud.

    TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. refactoring) the design of the system emerges.

    For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses its pros and cons and when it is best used.

    Acceptance Test Driven Development (ATDD)

    Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, Acceptance Tests focus on the larger integrated environment. Acceptance Tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer.

    ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD.

    ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.

    Behaviour Driven Development (BDD)

    BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests.

    Typically different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow for test scenarios to be described in an English format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour driven test suites.

    Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests. Usually they're called "examples", "behaviours", "scenarios", or "specifications".

    This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say we need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say we need to define the specification or behaviour of the system before we can start, you'll get more cooperation.

    Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.
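
    As promised, here is a minimal illustration of one red-green turn of the cycle in C# (NUnit syntax; the ShoppingCart class is invented for the example and is not from any real project):

    using NUnit.Framework;

    // Red: a failing test that pins down the behaviour we want.
    [TestFixture]
    public class ShoppingCartTests
    {
        [Test]
        public void NewCart_HasZeroTotal()
        {
            var cart = new ShoppingCart();
            Assert.AreEqual(0m, cart.Total);
        }
    }

    // Green: the simplest production code that makes the test pass.
    // The refactor step only kicks in once further tests force duplication
    // or a code smell into the implementation.
    public class ShoppingCart
    {
        public decimal Total
        {
            get { return 0m; }
        }
    }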

    Read the article

  • EF4 Code First Control Unicode and Decimal Precision, Scale with Attributes

    - by Dane Morgridge
    There are several attributes available when using code first with the Entity Framework 4 CTP5 Code First option. When working with strings you can use [MaxLength(length)] to control the length, and [Required] will work on all properties. But there are a few things missing.

    By default all strings will be created using Unicode, so you will get nvarchar instead of varchar. You can change this using the fluent API or you can create an attribute to make the change. If you have a lot of properties, the attribute will be much easier and require less code.

    You will need to add two classes to your project to create the attribute itself:

    public class UnicodeAttribute : Attribute
    {
        private bool _isUnicode;

        public UnicodeAttribute(bool isUnicode)
        {
            _isUnicode = isUnicode;
        }

        public bool IsUnicode { get { return _isUnicode; } }
    }

    public class UnicodeAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, UnicodeAttribute>
    {
        public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, UnicodeAttribute attribute)
        {
            configuration.IsUnicode = attribute.IsUnicode;
        }
    }

    The UnicodeAttribute class gives you a [Unicode] attribute that you can use on your properties, and the UnicodeAttributeConvention tells EF how to handle the attribute.

    You will need to add a line to the OnModelCreating method inside your context for EF to recognize the attribute:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
        base.OnModelCreating(modelBuilder);
    }

    Once you have this done, you can use the attribute in your classes to make sure that you get database types of varchar instead of nvarchar:

    [Unicode(false)]
    public string Name { get; set; }

    Another option that is missing is the ability to set the precision and scale on a decimal. By default decimals get created as (18,0). If you need decimals to be something like (9,2) then you can once again use the fluent API or create a custom attribute.

    As with the Unicode attribute, you will need to add two classes to your project:

    public class DecimalPrecisionAttribute : Attribute
    {
        private int _precision;
        private int _scale;

        public DecimalPrecisionAttribute(int precision, int scale)
        {
            _precision = precision;
            _scale = scale;
        }

        public int Precision { get { return _precision; } }
        public int Scale { get { return _scale; } }
    }

    public class DecimalPrecisionAttributeConvention : AttributeConfigurationConvention<PropertyInfo, DecimalPropertyConfiguration, DecimalPrecisionAttribute>
    {
        public override void Apply(PropertyInfo memberInfo, DecimalPropertyConfiguration configuration, DecimalPrecisionAttribute attribute)
        {
            configuration.Precision = Convert.ToByte(attribute.Precision);
            configuration.Scale = Convert.ToByte(attribute.Scale);
        }
    }

    Add your line to OnModelCreating:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
        modelBuilder.Conventions.Add(new DecimalPrecisionAttributeConvention());
        base.OnModelCreating(modelBuilder);
    }

    Now you can use the following on your properties:

    [DecimalPrecision(9,2)]
    public decimal Cost { get; set; }

    Both these options use the same concepts, so if there are other attributes that you want to use, you can create them quite simply. The key to it all is the PropertyConfiguration classes. If there is a class for the data type, then you should be able to write an attribute to set almost everything you need. You could also create a single attribute to encapsulate all of the possible string settings instead of having multiple attributes on each property; a sketch of that idea follows below.

    All in all, I am loving code first, and having attributes to control database generation instead of using the fluent API is huge and saves me a great deal of time.
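
    A possible shape for that combined string attribute, following the same convention pattern (this is a sketch, not code from the original post; it assumes StringPropertyConfiguration also exposes a settable MaxLength alongside IsUnicode in this CTP, which is worth verifying against the bits you have):

    public class StringOptionsAttribute : Attribute
    {
        private readonly bool _isUnicode;
        private readonly int _maxLength;

        public StringOptionsAttribute(bool isUnicode, int maxLength)
        {
            _isUnicode = isUnicode;
            _maxLength = maxLength;
        }

        public bool IsUnicode { get { return _isUnicode; } }
        public int MaxLength { get { return _maxLength; } }
    }

    public class StringOptionsAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, StringOptionsAttribute>
    {
        public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, StringOptionsAttribute attribute)
        {
            configuration.IsUnicode = attribute.IsUnicode;
            if (attribute.MaxLength > 0)
            {
                // Assumed settable here, the same way IsUnicode is in the post above.
                configuration.MaxLength = attribute.MaxLength;
            }
        }
    }

    Usage would then collapse to a single attribute per property, e.g. [StringOptions(false, 50)] on a string property.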

    Read the article

  • First Impressions of a MacBook (from a PC guy)

    - by dgreen
    Disclaimer: I've been a PC guy my entire working career. I'd probably characterize myself as a power user. Never afraid to bust out the console line. But working with a Mac is totally foreign to me. So for those Mac guys who are curious, this is how your world appears from the outside to a computer literate person :)

    My MacBook Air has arrived! And it's a thing of beauty. First, the specs: 13" MacBook Air, 2.0GHz Core i7 processor, upgraded to 8GB of RAM for an additional $100, SSD flash storage = 256GB. The plan is ultimately to use this baby for some iOS development but also some decent lifting in Windows with Visual Studio. Done a lot of reading, and between VMware Fusion, Parallels and Boot Camp... I'm going to go with VMware Fusion for $49.99.

    And now my impressions (please re-read the disclaimer before proceeding!):

    I open the box and am trying to understand exactly how the MagSafe connector works (and how to disconnect it). Why does it have two socket outlet plugs? Who knows. I feel like Hansel in Zoolander. The files are "in" the computer.

    Stuck in my external hard drive (USB). So how do I get to the files? To the Googles! Argh... it can't read my external NTFS drive. FAT32 can't support files over 4GB... problematic since some of my existing VMware image files are much larger than 4GB. Didn't see this coming.

    Three year old loves iPhoto. Super easy to use. Don't even know what I'm doing but I've already (accidentally) discovered the image filtering options. Fun stuff.

    First thing I downloaded ever => Chrome. I need something to ground me, something familiar. My token, if you will (sorry, gratuitous Inception joke).

    Ok, I get it... Finder == Windows Explorer. But where is my hierarchical structure? I miss the tree :( On that note, yeah... how do I see what "path" my files reside in? I'm afraid to know the answer. You know what scares me more though... this notion of a smart folder. Feel like the Godfather - just get the job done, I don't care how you handle it, I don't want to know... just get it done. What the hell is AirDrop?

    Mail... just worked. Still in shock that they have a free client for Yahoo mail (please no Yahoo jokes). Mail -> deleting a message takes 5 seconds. Have they heard of async?

    "Command" key instead of "Control"... ok, then what the $%&^! is the Control key for then?

    "Aliases" == shortcuts, I think.

    I don't see the file system. And I'm scared. All these things I'm downloading... these .dmg files (bad name)... where are they going? Can't seem to delete them when they're done.

    Ugh... realized I need to buy a mini-to-VGA adaptor if I want to use my external monitor ($13 on eBay, $39 in the Apple store).

    Window docking is trickiest for me... this notion of detached windows with a menu bar at the top. I don't like this paradigm, it's confusing. But maybe that's because I've been using Windows for too long.

    Evernote and Dropbox desktop clients seem almost identical... a few quirks here and there I need to get used to.

    iTunes is still a bit gross. In a weird way it's actually worse on a Mac, if that's possible. This is not the MacBook's fault... this is a software design issue.

    Overall: the UI will take some getting used to. Can't decide if this represents the future and I'm stuck in the past... or this is the past and I've been spoiled by the future (which would be Windows... don't be hating, I happen to be very productive in Win7). So there you go - my 90 minute first impression of the MacBook universe.

    Read the article

  • strange segmentation fault during function return

    - by Kyle
    I am running a program on 2 different machines. On one it works fine without issue. On the other it results in a segmentation fault. Through debugging, I have figured out where the fault occurs, but I can't figure out a logical reason for it to happen. In one function I have the following code:

    pass_particles(particle_grid, particle_properties, input_data, coll_eros_track,
                   collision_number_part, world, grid_rank_lookup, grid_locations);
    cout << "done passing particles" << endl;

    The function pass_particles looks like:

    void pass_particles(map<int, map<int, Particle> >& particle_grid,
                        std::vector<Particle_props>& particle_properties,
                        User_input& input_data,
                        data_tracking& coll_eros_track,
                        vector<int>& collision_number_part,
                        mpi::communicator& world,
                        std::map<int, int>& grid_rank_lookup,
                        map<int, std::vector<double> >& grid_locations)
    {
      //cout<<"east-west"<<endl;
      //east-west exchange (x direction)
      map<int, vector<Particle> > particles_to_be_sent_east;
      map<int, vector<Particle> > particles_to_be_sent_west;
      vector<Particle> particles_received_east;
      vector<Particle> particles_received_west;
      int counter_x_sent = 0;
      int counter_x_received = 0;

      for (grid_iter = particle_grid.begin(); grid_iter != particle_grid.end(); grid_iter++)
      {
        map<int, Particle>::iterator part_iter;
        for (part_iter = grid_iter->second.begin(); part_iter != grid_iter->second.end();)
        {
          if (particle_properties[part_iter->second.global_part_num()].particle_in_box()[grid_iter->first])
          {
            //decide if a particle has left the box...need to consider whether particle was already outside the box
            if ((part_iter->second.position().x() < (grid_locations[grid_iter->first][0])
                   && part_iter->second.position().x() > (grid_locations[grid_iter->first - input_data.z_numboxes()][0]))
                || (input_data.periodic_walls_x()
                   && (grid_iter->first - floor(grid_iter->first / (input_data.xz_numboxes())) * input_data.xz_numboxes() < input_data.z_numboxes())
                   && (part_iter->second.position().x() > (grid_locations[input_data.total_boxes() - 1][0]))))
            {
              particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second);
              particle_properties[particle_grid[grid_iter->first][part_iter->first].global_part_num()].particle_in_box()[grid_iter->first] = false;
              counter_sent++;
              counter_x_sent++;
            }
            else if ((part_iter->second.position().x() > (grid_locations[grid_iter->first][1])
                   && part_iter->second.position().x() < (grid_locations[grid_iter->first + input_data.z_numboxes()][1]))
                || (input_data.periodic_walls_x()
                   && (grid_iter->first - floor(grid_iter->first / (input_data.xz_numboxes())) * input_data.xz_numboxes()) > input_data.xz_numboxes() - input_data.z_numboxes() - 1
                   && (part_iter->second.position().x() < (grid_locations[0][1]))))
            {
              particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second);
              particle_properties[particle_grid[grid_iter->first][part_iter->first].global_part_num()].particle_in_box()[grid_iter->first] = false;
              counter_sent++;
              counter_x_sent++;
            }
            //select particles in overlap areas to send to neighboring cells
            else if ((part_iter->second.position().x() > (grid_locations[grid_iter->first][0])
                   && part_iter->second.position().x() < (grid_locations[grid_iter->first][0] + input_data.diam_large())))
            {
              particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second);
              counter_sent++;
              counter_x_sent++;
            }
            else if ((part_iter->second.position().x() < (grid_locations[grid_iter->first][1])
                   && part_iter->second.position().x() > (grid_locations[grid_iter->first][1] - input_data.diam_large())))
            {
              particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second);
              counter_sent++;
              counter_x_sent++;
            }
            ++part_iter;
          }
          else if (particles_received_current[grid_iter->first].find(part_iter->first) != particles_received_current[grid_iter->first].end())
          {
            if ((part_iter->second.position().x() > (grid_locations[grid_iter->first][0])
                   && part_iter->second.position().x() < (grid_locations[grid_iter->first][0] + input_data.diam_large())))
            {
              particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second);
              counter_sent++;
              counter_x_sent++;
            }
            else if ((part_iter->second.position().x() < (grid_locations[grid_iter->first][1])
                   && part_iter->second.position().x() > (grid_locations[grid_iter->first][1] - input_data.diam_large())))
            {
              particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second);
              counter_sent++;
              counter_x_sent++;
            }
            part_iter++;
          }
          else
          {
            particle_grid[grid_iter->first].erase(part_iter++);
            counter_removed++;
          }
        }
      }

      world.barrier();

      mpi::request reqs_x_send[particles_to_be_sent_west.size() + particles_to_be_sent_east.size()];
      vector<multimap<int, int> > box_sent_x_info;
      box_sent_x_info.resize(world.size());
      vector<multimap<int, int> > box_received_x_info;
      box_received_x_info.resize(world.size());
      int counter_x_reqs = 0;

      //send particles
      for (grid_iter_vec = particles_to_be_sent_west.begin(); grid_iter_vec != particles_to_be_sent_west.end(); grid_iter_vec++)
      {
        if (grid_iter_vec->second.size() != 0)
        {
          //send a particle. 50 will be "west" tag
          if (input_data.periodic_walls_x()
              && (grid_iter_vec->first - floor(grid_iter_vec->first / (input_data.xz_numboxes())) * input_data.xz_numboxes() < input_data.z_numboxes()))
          {
            reqs_x_send[counter_x_reqs++] = world.isend(
                grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes() * (input_data.x_numboxes() - 1)],
                grid_iter_vec->first + input_data.z_numboxes() * (input_data.x_numboxes() - 1),
                particles_to_be_sent_west[grid_iter_vec->first]);
            box_sent_x_info[grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes() * (input_data.x_numboxes() - 1)]].insert(
                pair<int, int>(world.rank(), grid_iter_vec->first + input_data.z_numboxes() * (input_data.x_numboxes() - 1)));
          }
          else if (!(grid_iter_vec->first - floor(grid_iter_vec->first / (input_data.xz_numboxes())) * input_data.xz_numboxes() < input_data.z_numboxes()))
          {
            reqs_x_send[counter_x_reqs++] = world.isend(
                grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()],
                grid_iter_vec->first - input_data.z_numboxes(),
                particles_to_be_sent_west[grid_iter_vec->first]);
            box_sent_x_info[grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()]].insert(
                pair<int, int>(world.rank(), grid_iter_vec->first - input_data.z_numboxes()));
          }
        }
      }

      for (grid_iter_vec = particles_to_be_sent_east.begin(); grid_iter_vec != particles_to_be_sent_east.end(); grid_iter_vec++)
      {
        if (grid_iter_vec->second.size() != 0)
        {
          //send a particle. 60 will be "east" tag
          if (input_data.periodic_walls_x()
              && (grid_iter_vec->first - floor(grid_iter_vec->first / (input_data.xz_numboxes()) * input_data.xz_numboxes()) > input_data.xz_numboxes() - input_data.z_numboxes() - 1))
          {
            reqs_x_send[counter_x_reqs++] = world.isend(
                grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes() * (input_data.x_numboxes() - 1)],
                2000000000 - (grid_iter_vec->first - input_data.z_numboxes() * (input_data.x_numboxes() - 1)),
                particles_to_be_sent_east[grid_iter_vec->first]);
            box_sent_x_info[grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes() * (input_data.x_numboxes() - 1)]].insert(
                pair<int, int>(world.rank(), 2000000000 - (grid_iter_vec->first - input_data.z_numboxes() * (input_data.x_numboxes() - 1))));
          }
          else if (!(grid_iter_vec->first - floor(grid_iter_vec->first / (input_data.xz_numboxes()) * input_data.xz_numboxes()) > input_data.xz_numboxes() - input_data.z_numboxes() - 1))
          {
            reqs_x_send[counter_x_reqs++] = world.isend(
                grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()],
                2000000000 - (grid_iter_vec->first + input_data.z_numboxes()),
                particles_to_be_sent_east[grid_iter_vec->first]);
            box_sent_x_info[grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()]].insert(
                pair<int, int>(world.rank(), 2000000000 - (grid_iter_vec->first + input_data.z_numboxes())));
          }
        }
      }

      counter = 0;
      for (int i = 0; i < world.size(); i++)
      {
        //if (world.rank()!=i)
        //{
        reqs[counter++] = world.isend(i, 1000000000, box_sent_x_info[i]);
        reqs[counter++] = world.irecv(i, 1000000000, box_received_x_info[i]);
        //}
      }
      mpi::wait_all(reqs, reqs + world.size() * 2);

      //receive particles
      //receive west particles
      for (int j = 0; j < world.size(); j++)
      {
        multimap<int, int>::iterator received_info_iter;
        for (received_info_iter = box_received_x_info[j].begin(); received_info_iter != box_received_x_info[j].end(); received_info_iter++)
        {
          if (received_info_iter->second < 1000000000)
          {
            //receive the message
            world.recv(received_info_iter->first, received_info_iter->second, particles_received_west);
            //loop through all the received particles and add them to the particle_grid for this processor
            for (unsigned int i = 0; i < particles_received_west.size(); i++)
            {
              particle_grid[received_info_iter->second].insert(
                  pair<int, Particle>(particles_received_west[i].global_part_num(), particles_received_west[i]));
              if (particles_received_west[i].position().x() > grid_locations[received_info_iter->second][0]
                  && particles_received_west[i].position().x() < grid_locations[received_info_iter->second][1])
              {
                particle_properties[particles_received_west[i].global_part_num()].particle_in_box()[received_info_iter->second] = true;
              }
              counter_received++;
              counter_x_received++;
            }
          }
          else
          {
            //receive the message
            world.recv(received_info_iter->first, received_info_iter->second, particles_received_east);
            //loop through all the received particles and add them to the particle_grid for this processor
            for (unsigned int i = 0; i < particles_received_east.size(); i++)
            {
              particle_grid[2000000000 - received_info_iter->second].insert(
                  pair<int, Particle>(particles_received_east[i].global_part_num(), particles_received_east[i]));
              if (particles_received_east[i].position().x() > grid_locations[2000000000 - received_info_iter->second][0]
                  && particles_received_east[i].position().x() < grid_locations[2000000000 - received_info_iter->second][1])
              {
                particle_properties[particles_received_east[i].global_part_num()].particle_in_box()[2000000000 - received_info_iter->second] = true;
              }
              counter_received++;
              counter_x_received++;
            }
          }
        }
      }

      mpi::wait_all(reqs_y_send, reqs_y_send + particles_to_be_sent_bottom.size() + particles_to_be_sent_top.size());
      mpi::wait_all(reqs_z_send, reqs_z_send + particles_to_be_sent_south.size() + particles_to_be_sent_north.size());
      mpi::wait_all(reqs_x_send, reqs_x_send + particles_to_be_sent_west.size() + particles_to_be_sent_east.size());

      cout << "x sent " << counter_x_sent << " and received " << counter_x_received << " from rank " << world.rank() << endl;
      cout << "rank " << world.rank() << " sent " << counter_sent << " and received " << counter_received << " and removed " << counter_removed << endl;
      cout << "done passing" << endl;
    }

    I only posted some of the code (so ignore the fact that some variables may appear to be undefined; they are defined in a portion of the code I didn't post). When I run the code (on the machine on which it fails), I get "done passing" but not "done passing particles". I am lost as to what could possibly cause a segmentation fault between the end of the called function and the next line in the calling function, and why it would happen on one machine and not another.

    Read the article

  • Using Entity Framework 4.0 with Code-First and POCO: How to Get Parent Object with All its Children

    - by SirEel
    I'm new to EF 4.0, so maybe this is an easy question. I've got VS2010 RC and the latest EF CTP. I'm trying to implement the "Foreign Keys" code-first example on the EF Team's Design Blog, http://blogs.msdn.com/efdesign/archive/2009/10/12/code-only-further-enhancements.aspx.

    public class Customer
    {
        public int Id { get; set; }
        public string CustomerDescription { get; set; }
        public IList<PurchaseOrder> PurchaseOrders { get; set; }
    }

    public class PurchaseOrder
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public Customer Customer { get; set; }
        public DateTime DateReceived { get; set; }
    }

    public class MyContext : ObjectContext
    {
        public MyContext(EntityConnection connection) : base(connection) {}

        public IObjectSet<Customer> Customers
        {
            get { return base.CreateObjectSet<Customer>(); }
        }
    }

    I use a ContextBuilder to configure MyContext:

    {
        var builder = new ContextBuilder<MyContext>();
        var customerConfig = builder.Entity<Customer>();
        customerConfig.Property(c => c.Id).IsIdentity();
        var poConfig = builder.Entity<PurchaseOrder>();
        poConfig.Property(po => po.Id).IsIdentity();
        poConfig.Relationship(po => po.Customer)
            .FromProperty(c => c.PurchaseOrders)
            .HasConstraint((po, c) => po.CustomerId == c.Id);
        ...
    }

    This works correctly when I'm adding new Customers, but not when I try to retrieve existing Customers. This code successfully saves a new Customer and all its child PurchaseOrders:

    using (var context = builder.Create(connection))
    {
        context.Customers.AddObject(customer);
        context.SaveChanges();
    }

    But this code only retrieves Customer objects; their PurchaseOrders lists are always empty:

    using (var context = builder.Create(connection))
    {
        var customers = context.Customers.ToList();
    }

    What else do I need to do to the ContextBuilder to make MyContext always retrieve all the PurchaseOrders with each Customer?

    Read the article

  • First Stable Version of Opera 15 has been Released

    - by Akemi Iwaya
    Opera has just released the first stable version of their revamped browser and will be proceeding at a rapid pace going forward. There is also news concerning the three development streams they will maintain, along with news of an update for the older 12.x series for those who are not ready to update to 15.x just yet. The day is full of good news for Opera users, whether they have already switched to the new Blink/WebKit engine version or are still using the older Presto engine version. First, news of the new development streams… Opera has released details outlining their three new release streams:

    - Opera (Stable) – Released every couple of weeks, this is the most solid version, ready for mission-critical daily use.
    - Opera Next – Updated more frequently than Stable, this is the feature-complete candidate for the Stable version. While it should be ready for daily use, you can expect some bugs there.
    - Opera Developer – A bleeding edge version, you can expect a lot of fancy stuff there; however, some nasty bugs might also appear from time to time.

    From the Opera Desktop Team blog post: "When you install Opera from a particular stream, your installation will stick to it, so Opera Stable will be always updated to Opera Stable, Opera Next to Opera Next and so on. You can choose for yourself which stream is the best for you. You can even follow a couple of them at the same time!"

    Of particular interest is the announcement of continued development for the 12.x series. A new version (12.16) is due to be released soon to help keep the older series up to date and secure while the transition process from 12.x to 15.x continues.

    Read the article

  • New Trusted Status awarded to first Mobile Java Developer

    - by Jacob Lehrbaum
    Java Verified has just announced that Gameloft is the first developer to receive its new Trusted Status! Java Verified is an industry-recognized Java testing and signing program backed and funded by companies such as AT&T, LG, Motorola, Nokia, Oracle, Orange, Samsung and Vodafone, and chartered with making it easier for mobile developers to certify and deploy applications for use across the billions of mobile handsets that run Java ME. Because of its breadth and diversity, Java ME provides an unmatched opportunity to reach more than 3 billion consumers, but at the same time developers are faced with the challenge of working with multiple distribution channels and a range of handsets. To this end, the Java Verified program provides a suite of tests that help to validate identity, functionality, integrity, and quality. Since its rebirth in 2010 as an independent organization, the Java Verified program has been actively working to make it even easier to create and distribute Java ME apps. Example initiatives include updates to the Unified Testing Criteria to make it easier to test "Simple Apps," community outreach to better understand and address developer pain points, and a new "Trusted Status." In the words of the Java Verified Program, Trusted Status is "a privileged status to be granted to developers who will have proven that the quality of their Java ME apps is of a consistently high standard. These are developers who will have earned the trust of Java Verified by demonstrating unfailingly that testing to the UTC standard is a crucial part of their product development activity." The first developer to be awarded this status is Gameloft. By achieving Trusted Status, Gameloft can now test their applications to the Java Verified standard without needing to provide Java Verified with the evidence. The apps then automatically get signed with the Java Verified signature, enabling Gameloft to benefit from reduced costs and time-to-market for their new Java ME applications from here on out. Learn more about the exciting news or apply now for Trusted Status!

    Read the article

  • Oracle OpenWorld 2013: First glimpses of the new SOA Suite 12c by Lucas Jellema

    - by JuergenKress
    During this week's Oracle OpenWorld Conference, we were given some sneak peeks into the short-term future of the Oracle SOA Suite. During various roadmap sessions, on the demo grounds, as well as in the keynote session by Thomas Kurian (the replay of which you can see here), new features were described and demonstrated, allowing us to get a fairly good overview of what is going to come for SOA Suite - later in 2013 and sometime in 2014 (probably the first half of that year). The SOA Suite plays an important part in the three themes Thomas Kurian set down for the Fusion Middleware suite of products: support for mobility, cloud and business user empowerment. Some of the highlighted new aspects of Oracle SOA Suite are:

    - Adapters to connect from on-premise to in-the-cloud – specifically targeting Salesforce and RightNow, and also providing an SDK to create custom integrations into the cloud (the first cloud adapters will be released on 11g, before the end of the year)
    - Mobile enablement by exposing RESTful services that communicate using JSON, as well as adding the capability to call out to such services (12c functionality)
    - Enhanced functionality on Exalogic (of course it runs faster on Exalogic, up to 20 times)
    - Modular runtime with a lighter footprint.

    A brief demonstration of the Cloud Adapter was given by Demed L'Her during said keynote. The next screenshot shows the Adapter wizard for the Cloud Adapter. It allows the developer to pick a specific operation for a specific business object exposed by RightNow (or Salesforce); the adapter knows about the APIs exposed by RightNow and Salesforce. This next screenshot shows the adapter that is used in SOA Suite 12c to expose a RESTful service on top of an SCA Composite or a Service Bus service. Read the full article here.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • First JSRs Proposed for Java EE 7

    - by Jacob Lehrbaum
    With the approval of the Java SE 7 and Java SE 8 JSRs last month, attention is now shifting towards the Java EE platform. While functionality pegged for Java EE 7 was previewed at least as early as Devoxx, the filing of these JSRs marks the first officially proposed specifications for the next generation of the popular application server standard. Let's take a quick look at the proposed new functionality.

    Java Persistence API 2.1

    The first of the new proposed specifications is JSR 338: Java Persistence API (JPA) 2.1. JPA is designed for use with both Java EE and Java SE and: "deals with the way relational data is mapped to Java objects ("persistent entities"), the way that these objects are stored in a relational database so that they can be accessed at a later time, and the continued existence of an entity's state even after the application that uses it ends. In addition to simplifying the entity persistence model, the Java Persistence API standardizes object-relational mapping." (more about JPA)

    JAX-RS 2.0

    The second of the new Java specifications that have been proposed is JSR 339, otherwise known as JAX-RS 2.0. JAX-RS provides an API that enables the easy creation of web services using the Representational State Transfer (REST) architecture. Key features proposed in the new JSR include a Client API, improved support for URIs, a Model-View-Controller architecture and much more!

    More information:
    - Official proposal for Java Persistence 2.1 (jcp.org)
    - Official proposal for JAX-RS 2.0 (jcp.org)
    - Kicking off Java EE 7 with 2 JSRs: JAX-RS 2.0 / JPA 2.1 (the Aquarium)

    Read the article

  • Almost at our first year anniversary!

    - by Vizioz Limited
    It has been a hectic first year at Vizioz and things are still going from strength to strength. 11 months ago I started Vizioz with zero capital investment in the middle of a recession, which to some may seem a daunting prospect, but to others, including myself, it was the challenge I needed to make me want to get up in the morning :) I wanted to prove that even in the current financial climate it is still possible to start a new business. We are still experiencing the normal growing pains of a small business, but this is something we just need to work our way through; it is amazing how much paperwork and administration there is in running a small business: office admin, insurance, VAT and, for the last few months, PAYE. For the last 9 months we have shared an office with another small business called Little Big Ideas. They are a design agency working across a broad spectrum of design, from branding to print and digital. Last month we decided to move to a larger office and now have room for 8 of us, so now we need a couple more clients to help produce enough work to fill the space and grow to the next level. As well as moving office, 2 months ago I blogged about my first employee Colin starting work for me. He has picked up Umbraco very well and has mastered the art of good CSS design; as the majority of our clients are large multi-nationals, they still require support for IE6, which as all web developers know is the nightmare of all web browsers. This month has seen the next step in the growth of Vizioz as I have taken on another PhD graduate called Pricilla. Welcome to the team! This month we plan to launch our own website to enable us to showcase some of the sites we have built over the past 11 months and to allow potential clients to see what we can offer. We might still be relatively small, but we have some great case studies to show, and with two PhD graduates on the team we have great talent capable of producing complex and innovative solutions for our clients. As soon as we have launched our new website I will blog again about what the future holds for Vizioz and what we can offer our prospective clients, as well as the obvious Umbraco CMS solutions.

    Read the article

  • EF4 CPT5 Code First Remove Cascading Deletes

    - by Dane Morgridge
    I have been using EF4 CTP5 with code first and I really like the new code. One issue I was having, however, was that cascading deletes are on by default. This may come as a surprise, as when using Entity Framework with anything but code first, this is not the case. I ran into an exception with some one-to-many relationships I had:

    Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors.

    To get around this, you can use the fluent API and put some code in the OnModelCreating:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<UserProfile>()
            .HasMany(u => u.ProjectAuthorizations)
            .WithRequired(a => a.UserProfile)
            .WillCascadeOnDelete(false);
    }

    This will work to remove the cascading delete, but I have to use the fluent API and it has to be done for every one-to-many relationship that causes the problem.

    I am personally not a fan of cascading deletes in general (for several reasons) and I'm not a huge fan of fluent APIs. However, there is a way to do this without using the fluent API. In the OnModelCreating, you can remove the convention that creates the cascading deletes altogether:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
    }

    Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether. There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do this in the near future.

    Read the article

  • First Minecraft mod not working: make a new sword

    - by yamikoWebs
    I am making my first mod and cannot see what is wrong with it. I am using MCP and ModLoader. For my first mod I was going to make swords. I started with making a new EnumToolMaterials:

    WOOD(0, 59, 2.0F, 0, 15),
    STONE(1, 131, 4.0F, 1, 5),
    IRON(2, 250, 6.0F, 2, 14),
    LAPIS(3, 750, 7.0F, 2, 14),
    OBSIDIAN(3, 1000, 7.5F, 3, 12),
    EMERALD(3, 1561, 8.0F, 3, 10), // diamond
    GREEN(3, 2000, 9.0F, 4, 10),   // emerald
    GOLD(0, 200, 12.0F, 0, 22);

    Then here is the mod class:

    public class _Mod_Yamiko extends BaseMod {

        /* mod items */
        public static final Item swordLapis = (new ItemSword(600, EnumToolMaterial.LAPIS)).setItemName("swordLapis");
        public static final Item swordObsidian = (new ItemSword(601, EnumToolMaterial.OBSIDIAN)).setItemName("swordObsidian");
        public static final Item swordGreen = (new ItemSword(602, EnumToolMaterial.GREEN)).setItemName("swordGreen");

        public void load() {
            // set images
            swordLapis.iconIndex = ModLoader.addOverride("/gui/items.png", "/gui/swordLapis.png");
            ModLoader.addName(swordLapis, "Lapis Sword");

            // craft
            ModLoader.addRecipe(new ItemStack(_Mod_Yamiko.swordLapis, 1), new Object[]{
                " X ", " X ", " Y ", 'X', Block.dirt, 'Y', Item.stick
            });
        }

        public String getVersion() {
            return "0.1";
        }
    }

    Then I made a 16×16 .png image. I was not sure where to save it, so I recompiled and reobfuscated, took the mod files and put them in my local Minecraft install, and added the image where it should be. There are no problems when playing, but I cannot craft the new sword.

    Read the article

  • Woman Is the World's First Computer Programmer? [closed]

    - by Sveta Bondarenko
    This week, on 10th December, we celebrate the 197th birth anniversary of Ada Lovelace, often considered the world's first computer programmer. Ada became famous not only as the daughter of the romantic poet Lord Byron but also as an outstanding 19th-century mathematician. Her work on the Analytical Engine is recognized as the first algorithm intended to be processed by a machine. Women have always played a crucial role in the evolution of computer science, but unfortunately they are often considered to be not as good at programming and engineering as men. Even though the fair sex makes up a growing portion of computer and Internet users, there is still a large gender gap in the field of computer science. But all is not lost! According to one study, women's enrollment in computer science rose from 7 percent in 1995 to 42 percent in 2000. And it is still increasing. Soon women will take a well-deserved position among the world's top computer programmers. After all, a number of notable female computer pioneers such as Ada Lovelace, Grace Hopper, and Anita Borg have proven that women make great computer scientists. But will women make great contributions to the modern technology industry? Or is a successful and famous female computer programmer just a pipe dream?

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high-level applications (in that order). The current project I'm working on is writing an assembler using a high-level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high-level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high-level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task (see the sketch below) and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high-level language to help me out.
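
    To make the "text analysis" point concrete, here is a small C# sketch of a cross-assembler for an invented two-instruction machine (the mnemonics and opcodes are made up for illustration; this is not the Hack instruction set). It runs on an existing platform yet emits bytes for hardware it has never run on:

    using System;
    using System.Collections.Generic;

    // Toy cross-assembler: translates mnemonics of an invented ISA into
    // 8-bit opcodes. Assumes every instruction takes one numeric operand.
    public static class ToyAssembler
    {
        private static readonly Dictionary<string, byte> Opcodes = new Dictionary<string, byte>
        {
            ["LOAD"] = 0x01,   // LOAD n  ->  0x01 n
            ["ADD"]  = 0x02,   // ADD n   ->  0x02 n
        };

        public static byte[] Assemble(IEnumerable<string> lines)
        {
            var output = new List<byte>();
            foreach (var line in lines)
            {
                var parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                if (parts.Length == 0) continue;          // skip blank lines
                output.Add(Opcodes[parts[0].ToUpper()]);  // mnemonic -> opcode
                output.Add(byte.Parse(parts[1]));         // operand as raw byte
            }
            return output.ToArray();
        }
    }

    Assemble(new[] { "LOAD 5", "ADD 3" }) would produce the bytes 01 05 02 03 - pure string processing, with no dependency on the target machine at all.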

    Read the article

  • First ATMs programming language

    - by revo
    The first ATMs performed tasks like cash dispensing; they were offline machines which worked with punch cards impregnated with carbon and a 6-digit PIN code. The maximum withdrawal with a card was 10 pounds, and each one was a one-time-use card - the ATM swallowed the cards! The first ATM was installed in London in the year 1967. As I looked at the timeline of programming languages, there were many programming languages made before that decade. I don't know about the hardware either, but in which programming language was it written? *I didn't find a detailed biography of John Shepherd-Barron (the ATM's inventor). Update: I found this picture, which is taken from a newspaper back in the year 1972 in Iran. Translated PS: It shows Mr. Rad-lon (if spelled correctly), the manager of the Barros (if spelled correctly) International Educational Institute in the United Kingdom, at the right, and Mr. Jim Sutherland - an expert on computer kiosks. In the rest of the text I found on this paper, these kinds of ATMs, which were called "Automated Computer Kiosks", were advertised with this: Mr. Rad-lon puts his card into one specific location of the Automated Computer Kiosk and after 10 seconds he withdraws his cash. Two more questions are: 1- How were those ATMs so fast? (withdrawal in 10 seconds in that year) 2- I didn't find any text on the Internet which mentions "Automated Computer Kiosk" - is that a valid term, or were they simply being called computers at that time?

    Read the article

  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5GB) and, after 20 years of (mostly) happy times using VS, they have removed macros, one of the most handy features. The first thing I wanted to do when I upgraded my VS2010 project was to add a #elseif block to each file. This would usually be a simple case of a find-in-files for the previous #elseif, then Ctrl+Shift+R to record a macro, which would be: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V (to paste the new code). Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I use macros every day, but it was a very useful feature. To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now I have used it myself, I find it difficult to tell one grey area of screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters. On the plus side, they have now added the ability to see WPF properties in the debugger... a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code. Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart-device OS?), it does not float my boat. Rant over.

    Read the article

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about states and positions for objects in a 3D world. The player can control his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything, but one thing seems to be missing in every article I read. Let's say we have an interpolation delay of 100ms, server tickrate=50ms, latency=200ms. How do I know when 100ms has passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, therefore assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quickly, for instance at t=150? I would then be starting the client at t=150, and at t=250 it will think 100ms has passed since it connected to the server, when in fact only 50ms has passed. Hopefully the above paragraph is understandable. The summarized question would be: how do I know at what tick to start simulating the client? EDIT: This is how I ended up doing it: the client keeps a clock (approximately) in sync with the server. The client then simulates the world at simulationTime = syncedTime - avg(RTT)/2 - interpolationTime. The round-trip time can fluctuate, so I average it out over time. By only keeping the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions. I'm currently simulating bad network connections, but it's looking good so far. A sketch of this clock logic appears below. Anyone see any possible problems?
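
    A rough C# sketch of that clock logic (class and field names are invented for illustration; a real implementation would likely want outlier rejection on top of the plain average):

    using System.Collections.Generic;
    using System.Linq;

    // Tracks recent round-trip-time samples and derives the client's simulation
    // time, following: simulationTime = syncedTime - avg(RTT)/2 - interpolationDelay.
    public class SimulationClock
    {
        private readonly Queue<double> _rttSamples = new Queue<double>();
        private const int MaxSamples = 20;             // keep only recent values
        private const double InterpolationDelay = 0.1; // 100 ms

        public void RecordRtt(double rttSeconds)
        {
            _rttSamples.Enqueue(rttSeconds);
            if (_rttSamples.Count > MaxSamples)
                _rttSamples.Dequeue();                 // adapt to lasting latency changes
        }

        public double SimulationTime(double syncedServerTime)
        {
            double avgRtt = _rttSamples.Count > 0 ? _rttSamples.Average() : 0.0;
            return syncedServerTime - avgRtt / 2.0 - InterpolationDelay;
        }
    }

    The MaxSamples window is what implements "only keeping the most recent values": a burst of lag drops out of the average after twenty samples, so the simulation time drifts toward the new steady-state latency instead of being anchored to the connection's first measurement.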

    Read the article

  • NDepend 4 – First Steps

    - by Ricardo Peres
    Introduction

    Thanks to Patrick Smacchia I had the chance to test NDepend 4. I can only say: awesome! This will be the first of a series of posts on NDepend, where I will talk about my discoveries. Keep in mind that I am just starting to use it, so more experienced users may find these too basic; I just hope I don't say anything foolish! I must say that I am in no way affiliated with NDepend and I never actually met Patrick.

    Installation

    No installation program – a curious decision, and I'm not against it – just unzip the files to a folder and run the executable. It will optionally register itself with Visual Studio 2008, 2010 and 11 as well as RedGate's Reflector; also, it automatically looks for updates. NDepend can either be used as a stand-alone program (with or without a GUI) or from within Visual Studio or Reflector.

    Getting Started

    One thing that really pleases me is the Getting Started section of the stand-alone program, with links to pages on NDepend's web site, featuring detailed explanations, which usually include screenshots and small videos (<5 minutes). There's also a How do I section with hierarchical navigation that guides us through the major features so that we can easily find what we want.

    Usage

    There are two basic ways to use NDepend:

    - Analyze .NET solutions, projects or assemblies;
    - Compare two versions of the same assembly.

    I have so far not used NDepend to compare assemblies, so I will first talk about the first option. After selecting a solution and some of its projects, it generates a single HTML page with a highly detailed report of the analysis it produced. This includes some metrics such as number of lines of code, IL instructions, comments, types, methods and properties, the calculation of the cyclomatic complexity, coupling and lots of other indicators, typically grouped by type, namespace and assembly. The HTML also includes some nice diagrams depicting assembly dependencies, type and method relative proportions (according to the number of IL instructions, I guess) and assembly analysis relating to abstractness and stability. Useful, I would say. The HTML uses some nice JavaScript effects, which allow paging and sorting of tables, so it's nice to use.

    Then there's the rules: NDepend tests the target assemblies against a set of more than 120 rules, grouped in the categories Code Quality, Object Oriented Design, Design, Architecture and Layering, Dead Code, Visibility, Naming Conventions, Source Files Organization and .NET Framework Usage. The full list can be configured in the application, and an explanation of each rule can be found on the web site. Rules can pass, be violated or be violated in a critical manner, and the HTML will contain the violated rules, their queries – more on this later – and results.

    Similar to the rules, there are some queries that display results for a number (about 200) of questions grouped as Object Oriented Design, API Breaking Changes (for assembly version comparison), Code Diff Summary (also for version comparison) and Dead Code. The difference between queries and rules is that queries are not classified as passed, violated or critically violated; they just present results.

    The queries and rules are expressed through CQLinq, which is a very powerful LINQ derivative specific to code analysis. All of the included rules and queries can be enabled or disabled and new ones can be added, with IntelliSense to help; an example follows below.
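
    For flavour, a CQLinq rule typically looks something like this (modelled on the kind of rules in the default set; treat the exact thresholds as illustrative):

    // Warn when a method grows too long or too complex.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 30 || m.CyclomaticComplexity > 20
    select m
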
Besides the HTML report file, the NDepend application can be used to explore all analysis results, compare different versions of analysis reports and run custom queries. Comparison to Other Analysis Tools Unlike StyleCop, NDepend only works with assemblies, not source code, so you can't expect it to enforce, for example, bracket placement. It is more similar to FxCop, but you don't have the option to analyze at the IL level, other than the number of IL instructions and the complexity. What's Next In the next few days I'll continue my exploration with a real-life test case. References The NDepend web site is http://www.ndepend.com/. Patrick keeps an updated blog at http://codebetter.com/patricksmacchia/ and he regularly monitors Stack Overflow for questions tagged NDepend, which you can find at http://stackoverflow.com/questions/tagged/ndepend. The default list of CQLinq rules, queries and statistics can be found at http://www.ndepend.com/DefaultRules/webframe.html. The syntax itself is described at http://www.ndepend.com/Doc_CQLinq_Syntax.aspx and its features at http://www.ndepend.com/Doc_CQLinq_Features.aspx.
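To give a feel for CQLinq, here is a sketch of a rule written in the style of the default rules (the actual defaults live at the URL above; this one is illustrative, and the 30-line threshold is an arbitrary choice):

    // <Name>Methods too big (illustrative)</Name>
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 30
    orderby m.NbLinesOfCode descending
    select new { m, m.NbLinesOfCode }

Any method over the threshold shows up in the report as a violation, together with its line count, and the same query can be edited and re-run interactively inside the NDepend application.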

    Read the article

  • SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

    - by Brian
    Oracle's SPARC T4-4 servers running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record 18,000 concurrent users while executing a PeopleSoft Payroll batch job of 500,000 employees in 43.32 minutes and maintaining online user response times under 2 seconds. This world record is the first result to run online and batch workloads concurrently. The result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. Average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 35% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across all three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices. This is the first three-tier, mixed-workload (online and batch) PeopleSoft benchmark that also processes the PeopleSoft payroll batch workload.

    Performance Landscape

    PeopleSoft HR Self-Service and Payroll Benchmark
    Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
    Users: 18,000
    Avg Response, Search (sec): 0.944
    Avg Response, Save (sec): 0.503
    Batch Time (min): 43.32
    Streams: 64

    Configuration Summary

    Application Configuration:
    1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
    512 GB memory
    5 x 300 GB SAS internal disks
    1 x 100 GB and 2 x 300 GB internal SSDs
    2 x 10 GbE HBAs
    Oracle Solaris 11 11/11
    PeopleTools 8.52
    PeopleSoft HCM 9.1
    Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
    Java Platform, Standard Edition Development Kit 6 Update 32

    Database Configuration:
    1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
    256 GB memory
    3 x 300 GB SAS internal disks
    Oracle Solaris 11 11/11
    Oracle Database 11g Release 2

    Web Tier Configuration:
    1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz
    256 GB memory
    2 x 300 GB SAS internal disks
    1 x 100 GB internal SSD
    Oracle Solaris 11 11/11
    PeopleTools 8.52
    Oracle WebLogic Server 10.3.4
    Java Platform, Standard Edition Development Kit 6 Update 32

    Storage Configuration:
    1 x Sun Server X2-4 as a COMSTAR head for data
    4 x Intel Xeon X7550, 2.0 GHz
    128 GB memory
    1 x Sun Storage F5100 Flash Array (80 flash modules)
    1 x Sun Storage F5100 Flash Array (40 flash modules)
    1 x Sun Fire X4275 as a COMSTAR head for redo logs
    12 x 2 TB SAS disks with a Niwot RAID controller

    Benchmark Description

    This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload running against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HR Self-Service benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark in which the database SQL is moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR Self-Service defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers and HR administrators.
The benchmark consists of 14 scenarios that emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, and updating an employee profile. All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted-average search/save response time across all transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures the run times of five application business processes for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business processes. The metrics for each benchmark are taken while both workloads run simultaneously against the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

Key Points and Best Practices

Two Oracle PeopleSoft domain sets with 200 application servers each were hosted in two separate Oracle Solaris Zones on a single SPARC T4-4 server to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (one core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This improves performance by reducing memory-access latency (using the physical memory closest to the processors), offloading I/O interrupt handling to the default set's threads to free up CPU resources for application server threads, and balancing the application workload across 240 threads.

See Also

Oracle PeopleSoft Benchmark White Papers (oracle.com)
SPARC T4-2 Server (oracle.com, OTN)
SPARC T4-4 Server (oracle.com, OTN)
PeopleSoft Enterprise Human Capital Management (oracle.com, OTN)
PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN)
Oracle Solaris (oracle.com, OTN)
Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

Disclosure Statement

Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.
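The report does not publish the exact commands used, but on Solaris a zone can be given its own dedicated processor set with the zonecfg dedicated-cpu resource; here is a sketch under that assumption (the zone name is hypothetical, and 120 corresponds to 15 SPARC T4 cores x 8 hardware threads):

    # Illustrative only; not the benchmark's published configuration.
    # dedicated-cpu gives the zone its own processor set of 120 threads.
    zonecfg -z appzone1
    zonecfg:appzone1> add dedicated-cpu
    zonecfg:appzone1:dedicated-cpu> set ncpus=120
    zonecfg:appzone1:dedicated-cpu> end
    zonecfg:appzone1> exit

When a zone configured this way boots, Solaris creates a temporary processor set of that size and binds the zone to it, which matches the layout described above: two such zones for the application servers, with the remaining default set handling interrupts.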

    Read the article

  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification recently released its Early Draft 3. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week marked another milestone: the first Java EE 7 specification implementation was added to the GlassFish 4 builds. Jakub blogged about the Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with the new functionality.

    Create a Java EE 6-style Maven project:

    mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

    Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE, as it makes it much easier to edit/add the files. Add the following <repositories>:

    <repositories>
        <repository>
            <id>snapshot-repository.java.net</id>
            <name>Java.net Snapshot Repository for Maven</name>
            <url>https://maven.java.net/content/repositories/snapshots/</url>
            <layout>default</layout>
        </repository>
    </repositories>

    Add the following <dependency>s:

    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.10</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>javax.ws.rs</groupId>
        <artifactId>javax.ws.rs-api</artifactId>
        <version>2.0-m09</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.core</groupId>
        <artifactId>jersey-client</artifactId>
        <version>2.0-m05</version>
        <scope>test</scope>
    </dependency>

    The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class:

    @Path("movies")
    public class MoviesResource {

        @GET
        @Path("list")
        public List<Movie> getMovies() {
            List<Movie> movies = new ArrayList<Movie>();
            movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
            movies.add(new Movie("Toy Story", "Buzz Light Year"));
            movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
            return movies;
        }
    }

    This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project uses the standard JAX-RS APIs. Of course, you need the trivial Movie and Application classes as well; they are available in the downloadable project (a sketch of what they might look like follows below).

    Build the project:

    mvn package

    And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start with "bin/asadmin start-domain"):

    asadmin deploy --force=true target/jersey2-helloworld.war

    Add a simple test case by right-clicking the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the method "testGetMovies" with:

    @Test
    public void testGetMovies() {
        System.out.println("getMovies");
        Client client = ClientFactory.newClient();
        List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
                .request()
                .get(new GenericType<List<Movie>>() {});
        assertEquals(3, movieList.size());
    }

    This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource.
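    Since the Movie and Application classes are deferred to the downloadable project, here is a minimal sketch of what they might look like. This is an assumption, not the project's actual code: the field names, the JAXB annotation, and the application class name are guesses, chosen only so that the resource above and the "webresources" path in the test URL line up.

    import javax.ws.rs.ApplicationPath;
    import javax.ws.rs.core.Application;
    import javax.xml.bind.annotation.XmlRootElement;

    // Hypothetical sketch; each class would live in its own source file.
    @XmlRootElement
    public class Movie {
        private String name;
        private String actor;

        public Movie() { } // no-arg constructor required by JAXB

        public Movie(String name, String actor) {
            this.name = name;
            this.actor = actor;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getActor() { return actor; }
        public void setActor(String actor) { this.actor = actor; }
    }

    // Maps the JAX-RS servlet to /webresources, matching the test URL; with no
    // overrides, all @Path-annotated resources are discovered automatically.
    @ApplicationPath("webresources")
    public class MyApplication extends Application {
    }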
    Run the test with the command "mvn test" and see output like:

    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running example.MoviesResourceTest
    getMovies
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

    Results :

    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using the standard APIs anyway; this workaround is only required if Jersey 1.x-specific functionality needs to be accessed. The complete source code explained in this project can be downloaded from here. Here are some pointers to follow:

    JAX-RS 2 Specification Early Draft 3
    Latest status on the specification (jax-rs-spec.java.net)
    Latest JAX-RS 2.0 Javadocs
    Latest status on Jersey, the Reference Implementation of JAX-RS 2 (jersey.java.net)
    Latest Jersey API Javadocs
    Latest GlassFish 4.0 Promoted Build
    Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].

    Read the article
