Search Results

Search found 90770 results on 3631 pages for 'first time'.

  • Why does my MacBook Pro have long ping times over wifi?

    - by randynov
    I have been having problems with my wifi connection. It is weird: the ping times to the router (<30 feet away) seem to surge, often getting over 10 s before slowly coming back down. You can see the trend below. I'm on a MacBook Pro and have done the normal stuff (reset the PRAM and SMC, changed wireless channels, etc.). It happens across different routers, so I think it must be my laptop, but I don't know what it could be. The RSSI value hovers around -57, but I've seen the transmit rate flip between 0, 48 and 54. The signal strength is ~60% with 9% noise. Currently there are 17 other wireless networks in range, but only one on the same channel. 1 - How can I figure out what's going on? 2 - How can I correct the situation? TIA! Randall

        PING 192.168.1.1 (192.168.1.1): 56 data bytes
        64 bytes from 192.168.1.1: icmp_seq=0 ttl=254 time=781.107 ms
        64 bytes from 192.168.1.1: icmp_seq=1 ttl=254 time=681.551 ms
        64 bytes from 192.168.1.1: icmp_seq=2 ttl=254 time=610.001 ms
        64 bytes from 192.168.1.1: icmp_seq=3 ttl=254 time=544.915 ms
        64 bytes from 192.168.1.1: icmp_seq=4 ttl=254 time=547.622 ms
        64 bytes from 192.168.1.1: icmp_seq=5 ttl=254 time=468.914 ms
        64 bytes from 192.168.1.1: icmp_seq=6 ttl=254 time=237.368 ms
        64 bytes from 192.168.1.1: icmp_seq=7 ttl=254 time=229.902 ms
        64 bytes from 192.168.1.1: icmp_seq=8 ttl=254 time=11754.151 ms
        64 bytes from 192.168.1.1: icmp_seq=9 ttl=254 time=10753.943 ms
        64 bytes from 192.168.1.1: icmp_seq=10 ttl=254 time=9754.428 ms
        64 bytes from 192.168.1.1: icmp_seq=11 ttl=254 time=8754.199 ms
        64 bytes from 192.168.1.1: icmp_seq=12 ttl=254 time=7754.138 ms
        64 bytes from 192.168.1.1: icmp_seq=13 ttl=254 time=6754.159 ms
        64 bytes from 192.168.1.1: icmp_seq=14 ttl=254 time=5753.991 ms
        64 bytes from 192.168.1.1: icmp_seq=15 ttl=254 time=4754.068 ms
        64 bytes from 192.168.1.1: icmp_seq=16 ttl=254 time=3753.930 ms
        64 bytes from 192.168.1.1: icmp_seq=17 ttl=254 time=2753.768 ms
        64 bytes from 192.168.1.1: icmp_seq=18 ttl=254 time=1753.866 ms
        64 bytes from 192.168.1.1: icmp_seq=19 ttl=254 time=753.592 ms
        64 bytes from 192.168.1.1: icmp_seq=20 ttl=254 time=517.315 ms
        64 bytes from 192.168.1.1: icmp_seq=37 ttl=254 time=1.315 ms
        64 bytes from 192.168.1.1: icmp_seq=38 ttl=254 time=1.035 ms
        64 bytes from 192.168.1.1: icmp_seq=39 ttl=254 time=4.597 ms
        64 bytes from 192.168.1.1: icmp_seq=21 ttl=254 time=18010.681 ms
        64 bytes from 192.168.1.1: icmp_seq=22 ttl=254 time=17010.449 ms
        64 bytes from 192.168.1.1: icmp_seq=23 ttl=254 time=16010.430 ms
        64 bytes from 192.168.1.1: icmp_seq=24 ttl=254 time=15010.540 ms
        64 bytes from 192.168.1.1: icmp_seq=25 ttl=254 time=14010.450 ms
        64 bytes from 192.168.1.1: icmp_seq=26 ttl=254 time=13010.175 ms
        64 bytes from 192.168.1.1: icmp_seq=27 ttl=254 time=12010.282 ms
        64 bytes from 192.168.1.1: icmp_seq=28 ttl=254 time=11010.265 ms
        64 bytes from 192.168.1.1: icmp_seq=29 ttl=254 time=10010.285 ms
        64 bytes from 192.168.1.1: icmp_seq=30 ttl=254 time=9010.235 ms
        64 bytes from 192.168.1.1: icmp_seq=31 ttl=254 time=8010.399 ms
        64 bytes from 192.168.1.1: icmp_seq=32 ttl=254 time=7010.144 ms
        64 bytes from 192.168.1.1: icmp_seq=33 ttl=254 time=6010.113 ms
        64 bytes from 192.168.1.1: icmp_seq=34 ttl=254 time=5010.025 ms
        64 bytes from 192.168.1.1: icmp_seq=35 ttl=254 time=4009.966 ms
        64 bytes from 192.168.1.1: icmp_seq=36 ttl=254 time=3009.825 ms
        64 bytes from 192.168.1.1: icmp_seq=40 ttl=254 time=16000.676 ms
        64 bytes from 192.168.1.1: icmp_seq=41 ttl=254 time=15000.477 ms
        64 bytes from 192.168.1.1: icmp_seq=42 ttl=254 time=14000.388 ms
        64 bytes from 192.168.1.1: icmp_seq=43 ttl=254 time=13000.549 ms
        64 bytes from 192.168.1.1: icmp_seq=44 ttl=254 time=12000.469 ms
        64 bytes from 192.168.1.1: icmp_seq=45 ttl=254 time=11000.332 ms
        64 bytes from 192.168.1.1: icmp_seq=46 ttl=254 time=10000.339 ms
        64 bytes from 192.168.1.1: icmp_seq=47 ttl=254 time=9000.338 ms
        64 bytes from 192.168.1.1: icmp_seq=48 ttl=254 time=8000.198 ms
        64 bytes from 192.168.1.1: icmp_seq=49 ttl=254 time=7000.388 ms
        64 bytes from 192.168.1.1: icmp_seq=50 ttl=254 time=6000.217 ms
        64 bytes from 192.168.1.1: icmp_seq=51 ttl=254 time=5000.084 ms
        64 bytes from 192.168.1.1: icmp_seq=52 ttl=254 time=3999.920 ms
        64 bytes from 192.168.1.1: icmp_seq=53 ttl=254 time=3000.010 ms
        64 bytes from 192.168.1.1: icmp_seq=54 ttl=254 time=1999.832 ms
        64 bytes from 192.168.1.1: icmp_seq=55 ttl=254 time=1000.072 ms
        64 bytes from 192.168.1.1: icmp_seq=58 ttl=254 time=1.125 ms
        64 bytes from 192.168.1.1: icmp_seq=59 ttl=254 time=1.070 ms
        64 bytes from 192.168.1.1: icmp_seq=60 ttl=254 time=2.515 ms
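
    To capture the trend over time (question 1), one option is to log timestamped round-trip measurements and correlate the spikes with roaming or channel-scan events. A rough Java sketch of such a logger follows; note that InetAddress.isReachable is only an approximation of ICMP ping (without raw-socket privileges it may fall back to a TCP echo probe), and the router address is taken from the output above:

        import java.io.IOException;
        import java.net.InetAddress;
        import java.time.LocalTime;

        public class PingLogger {
            public static void main(String[] args) throws IOException, InterruptedException {
                InetAddress router = InetAddress.getByName("192.168.1.1");
                while (true) { // stop with Ctrl-C
                    long start = System.nanoTime();
                    boolean ok = router.isReachable(5000); // 5 s timeout
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    System.out.println(LocalTime.now() + "  reachable=" + ok + "  rtt~" + ms + " ms");
                    Thread.sleep(1000);
                }
            }
        }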

    Read the article

  • Development costs of an indie multiplayer arcade shooter

    - by VorteX
    A friend and I have been wanting to remake one of our favorite games of all time (007: Nightfire) for a long time now. However, remaking a game like this is really complicated because of the rights to the Bond franchise. We created a site for the project months ago, and through it we found some great people (modelers, level designers, etc.) who want to help, but our plans have changed a little. Our current plan is to create a multiplayer-only remake of the original game, removing all the Bond references so that the rights shouldn't be a problem anymore. We still want to create the game using the UDK and SteamWorks, for both PC and Mac. Currently there are three things I need to find out: 1) the costs of creating an arcade shooter like this (we want to use crowdfunding to fund the project); 2) the best way to manage a project like this over the internet - our current team consists of people all over the world, and we need a central place to discuss, collaborate and store our files; 3) the best place to find suitable people for this project - we already have some modelers and level designers, but we also need animators, artists, programmers, etc. I believe creating an arcade game like this with a small team is feasible. The game in a nutshell: ±10 maps, ±20 weapons, ±12 game modes, weapon/armor pickups, a grapple-hook gadget, no ADS, SteamWorks, online matchmaking, custom games, AI bots, appearance selection, level progression using XP (no unlocks), achievements. Does anyone know where to start? Any help is appreciated.

    Read the article

  • EF4 Code First Control Unicode and Decimal Precision, Scale with Attributes

    - by Dane Morgridge
    There are several attributes available when using code first with the Entity Framework 4 CTP5 Code First option. When working with strings you can use [MaxLength(length)] to control the length, and [Required] works on all properties. But there are a few things missing. By default all strings will be created as Unicode, so you will get nvarchar instead of varchar. You can change this using the fluent API, or you can create an attribute to make the change. If you have a lot of properties, the attribute will be much easier and require less code. You will need to add two classes to your project to create the attribute itself:

        public class UnicodeAttribute : Attribute
        {
            bool _isUnicode;

            public UnicodeAttribute(bool isUnicode)
            {
                _isUnicode = isUnicode;
            }

            public bool IsUnicode { get { return _isUnicode; } }
        }

        public class UnicodeAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, UnicodeAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, UnicodeAttribute attribute)
            {
                configuration.IsUnicode = attribute.IsUnicode;
            }
        }

    The UnicodeAttribute class gives you a [Unicode] attribute that you can use on your properties, and the UnicodeAttributeConvention tells EF how to handle the attribute. You will need to add a line to the OnModelCreating method inside your context for EF to recognize the attribute:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
            base.OnModelCreating(modelBuilder);
        }

    Once you have this done, you can use the attribute in your classes to make sure that you get database types of varchar instead of nvarchar:

        [Unicode(false)]
        public string Name { get; set; }

    Another option that is missing is the ability to set the precision and scale on a decimal. By default decimals get created as (18,0). If you need decimals to be something like (9,2), you can once again use the fluent API or create a custom attribute. As with the Unicode attribute, you will need to add two classes to your project:

        public class DecimalPrecisionAttribute : Attribute
        {
            int _precision;
            private int _scale;

            public DecimalPrecisionAttribute(int precision, int scale)
            {
                _precision = precision;
                _scale = scale;
            }

            public int Precision { get { return _precision; } }
            public int Scale { get { return _scale; } }
        }

        public class DecimalPrecisionAttributeConvention : AttributeConfigurationConvention<PropertyInfo, DecimalPropertyConfiguration, DecimalPrecisionAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, DecimalPropertyConfiguration configuration, DecimalPrecisionAttribute attribute)
            {
                configuration.Precision = Convert.ToByte(attribute.Precision);
                configuration.Scale = Convert.ToByte(attribute.Scale);
            }
        }

    Add your line to OnModelCreating:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
            modelBuilder.Conventions.Add(new DecimalPrecisionAttributeConvention());
            base.OnModelCreating(modelBuilder);
        }

    Now you can use the following on your properties:

        [DecimalPrecision(9,2)]
        public decimal Cost { get; set; }

    Both of these options use the same concepts, so if there are other attributes that you want to use, you can create them quite simply. The key to it all is the PropertyConfiguration classes. If there is a class for the data type, then you should be able to write an attribute to set almost everything you need. You could also create a single attribute to encapsulate all of the possible string options instead of having multiple attributes on each property. All in all, I am loving code first, and having attributes to control database generation instead of using the fluent API is huge and saves me a great deal of time.

    Read the article

  • Improving the running time of Breadth First Search and Adjacency List creation

    - by user45957
    We are given an array of integers where all elements are between 0 and 9. We have to start from the 1st position and reach the end in the minimum number of moves, where from an index i we can move one position back or forward (i.e. to i-1 or i+1) or jump to any index having the same value as index i. Time limit: 1 second. Max input size: 100000. I have tried to solve this with a single-source shortest path approach using Breadth First Search, and though BFS itself is O(V+E) and runs in time, the adjacency list creation takes O(n^2) time, so the overall complexity becomes O(n^2). Is there any way I can decrease the time complexity of the adjacency list creation? Or is there a better, more efficient way of solving the problem?

        int main(){
            vector<int> v;
            string str;
            vector<int> sets[10];
            cin>>str;
            int in;
            for(int i=0;i<str.length();i++){
                in=str[i]-'0';
                v.push_back(in);
                sets[in].push_back(i);
            }
            int n=v.size();
            if(n==1){ cout<<"0\n"; return 0; }
            if(v[0]==v[n-1]){ cout<<"1\n"; return 0; }
            // adjacency list creation: O(n^2) in the worst case (e.g. all digits equal)
            vector<int> adj[100001];
            for(int i=0;i<10;i++){
                for(int j=0;j<sets[i].size();j++){
                    if(sets[i][j]>0) adj[sets[i][j]].push_back(sets[i][j]-1);
                    if(sets[i][j]<n-1) adj[sets[i][j]].push_back(sets[i][j]+1);
                    for(int k=j+1;k<sets[i].size();k++){
                        if(abs(sets[i][j]-sets[i][k])!=1){
                            adj[sets[i][j]].push_back(sets[i][k]);
                            adj[sets[i][k]].push_back(sets[i][j]);
                        }
                    }
                }
            }
            queue<int> q;
            q.push(0);
            int dist[100001];
            bool visited[100001]={false};
            dist[0]=0;
            visited[0]=true;
            int c=0;
            while(!q.empty()){
                int dq=q.front();
                q.pop();
                c++;
                for(int i=0;i<adj[dq].size();i++){
                    if(visited[adj[dq][i]]==false){
                        dist[adj[dq][i]]=dist[dq]+1;
                        visited[adj[dq][i]]=true;
                        q.push(adj[dq][i]);
                    }
                }
            }
            cout<<dist[n-1]<<"\n";
            return 0;
        }
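
    For this time limit the explicit adjacency list is unnecessary: the digit buckets themselves can serve as implicit edges. The first time BFS reaches any index with digit d, enqueue everything left in that digit's bucket and then clear the bucket, so each bucket is expanded at most once and the whole search is O(n). A sketch of that idea (written in Java here rather than C++; the translation back to the code above is direct - drop adj and clear sets[i] after its first use):

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.Scanner;

        public class MinJumps {
            public static void main(String[] args) {
                String s = new Scanner(System.in).next();
                int n = s.length();
                // bucket[d] holds every index whose digit is d
                List<List<Integer>> bucket = new ArrayList<>();
                for (int d = 0; d < 10; d++) bucket.add(new ArrayList<>());
                for (int i = 0; i < n; i++) bucket.get(s.charAt(i) - '0').add(i);

                int[] dist = new int[n];
                Arrays.fill(dist, -1);
                ArrayDeque<Integer> q = new ArrayDeque<>();
                dist[0] = 0;
                q.add(0);
                while (!q.isEmpty()) {
                    int i = q.poll();
                    if (i == n - 1) break;
                    // same-digit jumps: expand the bucket once, then empty it
                    List<Integer> sameDigit = bucket.get(s.charAt(i) - '0');
                    for (int j : sameDigit) {
                        if (dist[j] == -1) { dist[j] = dist[i] + 1; q.add(j); }
                    }
                    sameDigit.clear();
                    // step right and left
                    if (i + 1 < n && dist[i + 1] == -1) { dist[i + 1] = dist[i] + 1; q.add(i + 1); }
                    if (i - 1 >= 0 && dist[i - 1] == -1) { dist[i - 1] = dist[i] + 1; q.add(i - 1); }
                }
                System.out.println(dist[n - 1]);
            }
        }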

    Read the article

  • First Impressions of a MacBook (from a PC guy)

    - by dgreen
    Disclaimer: I've been a PC guy my entire working career. I'd probably characterize myself as a power user, never afraid to bust out the console. But working with a Mac is totally foreign to me. So for those Mac guys who are curious, this is how your world appears from the outside to a computer-literate person :)

    My MacBook Air has arrived! And it's a thing of beauty. First, the specs: 13" MacBook Air, 2.0GHz Core i7 processor, upgraded to 8GB of RAM for an additional $100, SSD flash storage = 256GB. The plan is ultimately to use this baby for some iOS development but also some decent lifting in Windows with Visual Studio. I've done a lot of reading, and between VMware Fusion, Parallels and Boot Camp... I'm going to go with VMware Fusion for $49.99.

    And now my impressions (please re-read the disclaimer before proceeding!):
    - I open the box and am trying to understand exactly how the MagSafe connector works (and how to disconnect it). Why does it have two socket outlet plugs? Who knows. I feel like Hansel in Zoolander: the files are "in" the computer.
    - Stuck in my external hard drive (USB). So how do I get to the files? To the Googles!
    - Argh... it can't read my external NTFS drive, and FAT32 can't support files over 4GB - problematic since some of my existing VMware image files are much larger than 4GB. Didn't see this coming.
    - Three-year-old loves iPhoto. Super easy to use. I don't even know what I'm doing, but I've already (accidentally) discovered the image filtering options. Fun stuff.
    - First thing I ever downloaded: Chrome. I need something to ground me, something familiar. My totem, if you will (sorry, gratuitous Inception joke).
    - OK, I get it... Finder == Windows Explorer. But where is my hierarchical structure? I miss the tree :(
    - On that note, yeah... how do I see what "path" my files reside in? I'm afraid to know the answer. You know what scares me more, though... this notion of a smart folder. I feel like the Godfather: just get the job done, I don't care how you handle it, I don't want to know... just get it done.
    - What the hell is AirDrop?
    - Mail... just worked. Still in shock that they have a free client for Yahoo Mail (please, no Yahoo jokes).
    - Mail: deleting a message takes 5 seconds. Have they heard of async?
    - "Command" key instead of "Control"... OK, then what the $%&^! is the Control key for, then?
    - "Aliases" == shortcuts, I think.
    - I don't see the file system. And I'm scared. All these things I'm downloading... these .dmg files (bad name) - where are they going? Can't seem to delete them when they're done.
    - Ugh... realized I need to buy a mini-to-VGA adaptor if I want to use my external monitor ($13 on eBay, $39 in the Apple store).
    - Window docking is trickiest for me... this notion of detached windows with a menu bar at the top. I don't like this paradigm; it's confusing. But maybe that's because I've been using Windows for too long.
    - The Evernote and Dropbox desktop clients seem almost identical... a few quirks here and there I need to get used to.
    - iTunes is still a bit gross. In a weird way it's actually worse on a Mac, if that's possible. This is not the MacBook's fault... this is a software design issue.

    Overall: the UI will take some getting used to. I can't decide if this represents the future and I'm stuck in the past... or if this is the past and I've been spoiled by the future (which would be Windows... don't be hating, I happen to be very productive in Win7). So there you go - my 90-minute first impression of the MacBook universe.

    Read the article

  • strange segmentation fault during function return

    - by Kyle
    I am running a program on two different machines. On one it works fine; on the other it results in a segmentation fault. Through debugging, I have figured out where the fault occurs, but I can't figure out a logical reason for it to happen. In one function I have the following code:

        pass_particles(particle_grid, particle_properties, input_data, coll_eros_track, collision_number_part, world, grid_rank_lookup, grid_locations);
        cout<<"done passing particles"<<endl;

    The function pass_particles looks like:

        void pass_particles(map<int,map<int,Particle> > & particle_grid, std::vector<Particle_props> & particle_properties, User_input& input_data, data_tracking & coll_eros_track, vector<int> & collision_number_part, mpi::communicator & world, std::map<int,int> & grid_rank_lookup, map<int,std::vector<double> > & grid_locations)
        {
            //cout<<"east-west"<<endl;
            //east-west exchange (x direction)
            map<int, vector<Particle> > particles_to_be_sent_east;
            map<int, vector<Particle> > particles_to_be_sent_west;
            vector<Particle> particles_received_east;
            vector<Particle> particles_received_west;
            int counter_x_sent=0;
            int counter_x_received=0;
            for(grid_iter=particle_grid.begin();grid_iter!=particle_grid.end();grid_iter++)
            {
                map<int,Particle>::iterator part_iter;
                for (part_iter=grid_iter->second.begin();part_iter!=grid_iter->second.end();)
                {
                    if (particle_properties[part_iter->second.global_part_num()].particle_in_box()[grid_iter->first])
                    {
                        //decide if a particle has left the box...need to consider whether particle was already outside the box
                        if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()>(grid_locations[grid_iter->first-input_data.z_numboxes()][0])) || (input_data.periodic_walls_x() && (grid_iter->first-floor(grid_iter->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes()) && (part_iter->second.position().x()>(grid_locations[input_data.total_boxes()-1][0]))))
                        {
                            particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second);
                            particle_properties[particle_grid[grid_iter->first][part_iter->first].global_part_num()].particle_in_box()[grid_iter->first]=false;
                            counter_sent++;
                            counter_x_sent++;
                        }
                        else if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()<(grid_locations[grid_iter->first+input_data.z_numboxes()][1])) || (input_data.periodic_walls_x() && (grid_iter->first-floor(grid_iter->first/(input_data.xz_numboxes()))*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1) && (part_iter->second.position().x()<(grid_locations[0][1])))
                        {
                            particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second);
                            particle_properties[particle_grid[grid_iter->first][part_iter->first].global_part_num()].particle_in_box()[grid_iter->first]=false;
                            counter_sent++;
                            counter_x_sent++;
                        }
                        //select particles in overlap areas to send to neighboring cells
                        else if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()<(grid_locations[grid_iter->first][0]+input_data.diam_large())))
                        {
                            particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second);
                            counter_sent++;
                            counter_x_sent++;
                        }
                        else if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()>(grid_locations[grid_iter->first][1]-input_data.diam_large())))
                        {
                            particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second);
                            counter_sent++;
                            counter_x_sent++;
                        }
                        ++part_iter;
                    }
                    else if (particles_received_current[grid_iter->first].find(part_iter->first)!=particles_received_current[grid_iter->first].end())
                    {
                        if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()<(grid_locations[grid_iter->first][0]+input_data.diam_large())))
                        {
                            particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second);
                            counter_sent++;
                            counter_x_sent++;
                        }
                        else if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()>(grid_locations[grid_iter->first][1]-input_data.diam_large())))
                        {
                            particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second);
                            counter_sent++;
                            counter_x_sent++;
                        }
                        part_iter++;
                    }
                    else
                    {
                        particle_grid[grid_iter->first].erase(part_iter++);
                        counter_removed++;
                    }
                }
            }
            world.barrier();
            mpi::request reqs_x_send[particles_to_be_sent_west.size()+particles_to_be_sent_east.size()];
            vector<multimap<int,int> > box_sent_x_info;
            box_sent_x_info.resize(world.size());
            vector<multimap<int,int> > box_received_x_info;
            box_received_x_info.resize(world.size());
            int counter_x_reqs=0;
            //send particles
            for(grid_iter_vec=particles_to_be_sent_west.begin();grid_iter_vec!=particles_to_be_sent_west.end();grid_iter_vec++)
            {
                if (grid_iter_vec->second.size()!=0)
                {
                    //send a particle. 50 will be "west" tag
                    if (input_data.periodic_walls_x() && (grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes()))
                    {
                        reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1)], grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1), particles_to_be_sent_west[grid_iter_vec->first]);
                        box_sent_x_info[grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1)]].insert(pair<int,int>(world.rank(), grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1)));
                    }
                    else if (!(grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes()))
                    {
                        reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()], grid_iter_vec->first - input_data.z_numboxes(), particles_to_be_sent_west[grid_iter_vec->first]);
                        box_sent_x_info[grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()]].insert(pair<int,int>(world.rank(),grid_iter_vec->first - input_data.z_numboxes()));
                    }
                }
            }
            for(grid_iter_vec=particles_to_be_sent_east.begin();grid_iter_vec!=particles_to_be_sent_east.end();grid_iter_vec++)
            {
                if (grid_iter_vec->second.size()!=0)
                {
                    //send a particle. 60 will be "east" tag
                    if (input_data.periodic_walls_x() && (grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes())*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1))
                    {
                        reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)], 2000000000-(grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)), particles_to_be_sent_east[grid_iter_vec->first]);
                        box_sent_x_info[grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)]].insert(pair<int,int>(world.rank(),2000000000-(grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1))));
                    }
                    else if (!(grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes())*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1))
                    {
                        reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()], 2000000000-(grid_iter_vec->first + input_data.z_numboxes()), particles_to_be_sent_east[grid_iter_vec->first]);
                        box_sent_x_info[grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()]].insert(pair<int,int>(world.rank(), 2000000000-(grid_iter_vec->first + input_data.z_numboxes())));
                    }
                }
            }
            counter=0;
            for (int i=0;i<world.size();i++)
            {
                //if (world.rank()!=i)
                //{
                reqs[counter++]=world.isend(i,1000000000,box_sent_x_info[i]);
                reqs[counter++]=world.irecv(i,1000000000,box_received_x_info[i]);
                //}
            }
            mpi::wait_all(reqs, reqs + world.size()*2);
            //receive particles (west first, then east)
            for (int j=0;j<world.size();j++)
            {
                multimap<int,int>::iterator received_info_iter;
                for (received_info_iter=box_received_x_info[j].begin();received_info_iter!=box_received_x_info[j].end();received_info_iter++)
                {
                    if (received_info_iter->second<1000000000)
                    {
                        //receive the message
                        world.recv(received_info_iter->first,received_info_iter->second,particles_received_west);
                        //loop through all the received particles and add them to the particle_grid for this processor
                        for (unsigned int i=0;i<particles_received_west.size();i++)
                        {
                            particle_grid[received_info_iter->second].insert(pair<int,Particle>(particles_received_west[i].global_part_num(),particles_received_west[i]));
                            if(particles_received_west[i].position().x()>grid_locations[received_info_iter->second][0] && particles_received_west[i].position().x()<grid_locations[received_info_iter->second][1])
                            {
                                particle_properties[particles_received_west[i].global_part_num()].particle_in_box()[received_info_iter->second]=true;
                            }
                            counter_received++;
                            counter_x_received++;
                        }
                    }
                    else
                    {
                        //receive the message
                        world.recv(received_info_iter->first,received_info_iter->second,particles_received_east);
                        //loop through all the received particles and add them to the particle_grid for this processor
                        for (unsigned int i=0;i<particles_received_east.size();i++)
                        {
                            particle_grid[2000000000-received_info_iter->second].insert(pair<int,Particle>(particles_received_east[i].global_part_num(),particles_received_east[i]));
                            if(particles_received_east[i].position().x()>grid_locations[2000000000-received_info_iter->second][0] && particles_received_east[i].position().x()<grid_locations[2000000000-received_info_iter->second][1])
                            {
                                particle_properties[particles_received_east[i].global_part_num()].particle_in_box()[2000000000-received_info_iter->second]=true;
                            }
                            counter_received++;
                            counter_x_received++;
                        }
                    }
                }
            }
            mpi::wait_all(reqs_y_send, reqs_y_send + particles_to_be_sent_bottom.size()+particles_to_be_sent_top.size());
            mpi::wait_all(reqs_z_send, reqs_z_send + particles_to_be_sent_south.size()+particles_to_be_sent_north.size());
            mpi::wait_all(reqs_x_send, reqs_x_send + particles_to_be_sent_west.size()+particles_to_be_sent_east.size());
            cout<<"x sent "<<counter_x_sent<<" and received "<<counter_x_received<<" from rank "<<world.rank()<<endl;
            cout<<"rank "<<world.rank()<<" sent "<<counter_sent<<" and received "<<counter_received<<" and removed "<<counter_removed<<endl;
            cout<<"done passing"<<endl;
        }

    I only posted some of the code, so ignore the fact that some variables may appear to be undefined; they are defined in a portion of the code I didn't post. When I run the code on the machine where it fails, I get "done passing" but not "done passing particles". I am lost as to what could possibly cause a segmentation fault between the end of the called function and the next line in the calling function, and why it would happen on one machine and not another.

    Read the article

  • SQL Sentry First Impressions

    - by AjarnMark
    After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate. A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server. Unfortunately, we don’t have a production DBA whose entire job is to monitor and maintain our SQL Servers. The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession. I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on performance. Like many DBAs, I wrote several of my own tools and used the “built-in tools” like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?). These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to “prove” that SQL Server performance was acceptable and, more importantly, had not degraded recently (i.e. historical comparisons). I really didn’t have anything from a historical comparison perspective, but I was able to show that current performance was acceptable and deflect attention back onto other components (which in fact turned out to be the real culprit). That experience dramatically illustrated the need for better monitoring tools. Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server. Among other tools, I had been using Idera’s SQL Job Manager, which is a free tool and did a nice job of showing me job schedules and histories in a nice calendar view. This worked fairly well, and for the money (did I mention it was free?) it couldn’t be beat. But it is based on the stored job history in MSDB, and there were other performance problems that we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view. Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter, and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry’s Event Manager. At the time, I just thought he worked there, but later found out that he founded the company. When I took a quick look at the features and benefits, the one that really jumped out at me was Chaining and Queueing, which sounded like it would really help with our “interdependent jobs on different servers” issue. I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor. And I must say that I really like what I see so far. Here are a few highlights:
    - Great Support. I had two issues getting the trial set up and monitoring a handful of our servers. One was entirely my fault (I missed a security setting in SQL 2008) and the other was mostly my fault (a late change to some config settings that were apparently cached and did not get refreshed properly). In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out the cause and fix for each. This left me with a great impression of the company. Kudos to them!
    - Chaining and Queueing. While I have not yet activated this feature, I am very excited about the possibilities. We have jobs on three different instances of SQL Server that have to be run in a certain order, and each has to finish before the next can successfully begin, and I believe this feature will ensure just that. It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures or, worse, successful completion of completely invalid processing.
    - Calendar View. I really, really like the Event Manager calendar view, where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity. Very well done, and based on Event Manager’s own database of accumulated historical information rather than querying the source instances every time.
    - Performance Advisor Dashboard History View. This view lets me quickly select a date and time range, and it displays graphs of key SQL Server and Windows metrics. This is exactly the thing I needed to answer the “has performance changed recently” question at the beginning of this post.
    - Reporting Services Subscription Jobs with Report Name. This was a big and VERY pleasant surprise. If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a Subscription, you will notice that they all have some sort of GUID as the name of the job. This is really ugly, and really annoying, because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you really do not have any indication in the name or the properties of the job itself as to which report it was for. But with SQL Sentry Event Manager you do. The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the report that the subscription job is for. And when you open it to see more details, it shows you the full Reporting Services path to that report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the subscription information. I did not expect this at all, but I sure do like it. HOORAY!
    Those are just my first impressions from using the tools for a few days. And I haven’t even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations. I’ll share that lesson in another blog entry. But I have to say it again: the combination of Event Manager and Performance Advisor working together has really made me a fan.

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series.

    Right-Time Revolution

    Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, a diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? You wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn’t appropriate for all integrations. Think about what a completely real-time retailer would look like: a consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost-prohibitive, it’s also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That’s what “right-time retail” means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day, as the dial-up modem could transfer the data. That was soon replaced with trickle-polling, where sale transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data. Nordstrom has 150 million SKU/store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos executes 1.78 billion calculations each day for replenishment planning (AIP). These areas are now being alleviated by the second technology: storage. The typical laptop disk drive runs at 5,400 rpm, with PCs stepping up to 7,200 rpm and servers hitting 15,000 rpm. But the platters can only spin so fast, so to squeeze out more performance we’ve had to rely on things like disk striping. Then solid state drives (SSDs) were introduced, and prices continue to drop. (Augmenting your hard drive with an SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you’d go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston’s Fenway would hold 370,000 fans. The Exa family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won’t be able to take advantage of the faster networks and storage. Enter what Harvard calls “The Sexiest Job of the 21st Century”: the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I’ll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • TimeZone Issue during DayLight Saving

    - by Viren
    I've just been bugged by the daylight saving change. As I understand it, EST starts at 01:00:00 on 3rd November 2013. Every time I set my clock to 3rd November 2013 00:58:xx (some seconds) and run date, it gives me a valid time zone, i.e. EDT; but even after the time passes 01:00:00 and I query the date library again, I still see the time zone reported as EDT and not EST (see the screenshot, where the time zone clearly still says EDT). Does anyone have a clue why? Update: one other finding - if I restart my machine, the time zone is reported as expected (screenshots before and after the restart).
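
    For reference, the 2013 fall-back in US Eastern time happens at 02:00 EDT, so the 01:00-01:59 wall-clock hour occurs twice: the first pass is still EDT, and only the repeat is EST. A minimal java.time sketch illustrating the zone rules (java.time is Java 8+; the question itself concerns the system date command, so this is only to show what the correct behavior looks like):

        import java.time.ZoneId;
        import java.time.ZonedDateTime;

        public class FallBack2013 {
            public static void main(String[] args) {
                ZoneId eastern = ZoneId.of("America/New_York");
                // 01:30 on 2013-11-03 is ambiguous: it happens once in EDT and once in EST
                ZonedDateTime ambiguous = ZonedDateTime.of(2013, 11, 3, 1, 30, 0, 0, eastern);
                System.out.println(ambiguous.withEarlierOffsetAtOverlap()); // 2013-11-03T01:30-04:00 (EDT)
                System.out.println(ambiguous.withLaterOffsetAtOverlap());   // 2013-11-03T01:30-05:00 (EST)
                // one hour after the first 01:30, the clock still reads 01:30 - but now in EST
                System.out.println(ambiguous.withEarlierOffsetAtOverlap().plusHours(1));
            }
        }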

    Read the article

  • How can I find files added to the system within X minutes of a specific time?

    - by Jack W-H
    I have done a fresh install of Mac OS X Mountain Lion today on a new MacBook. Because this was a new install, when I finally got round to configuring some of my own developer things, I was surprised to find some app had installed a binary into /usr/local/bin - a single binary called galileod. Interestingly, I can't find anything online about galileod. I had only installed the bare minimum of software at this point. Looking in the file columns I can see Date Modified was 9th November 2012, but Date Added to the system was today at 17:01. It's now 10:20PM and I can't remember which software I was installing at that point. So how do I find out which other files were installed to the system within, say, 5 minutes either side of 17:01? EDIT: I found out what galileod was by running galileod --help - it is a binary used with Fitbit to communicate with the USB dongle. So that's the mystery solved - but it would still be interesting to know how to find files added within X minutes of a timeframe for future reference.
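
    For future reference, the Finder's "Date Added" is a Spotlight attribute (kMDItemDateAdded) that can be queried with mdfind. As a rough cross-platform alternative, the sketch below walks a directory tree and filters by file creation time as a stand-in for "Date Added" (the root directory and timestamp window here are hypothetical; adjust them to your case):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.attribute.BasicFileAttributes;
        import java.time.Duration;
        import java.time.Instant;
        import java.util.stream.Stream;

        public class AddedAround {
            public static void main(String[] args) throws IOException {
                Instant center = Instant.parse("2012-11-09T17:01:00Z"); // assumed: the suspicious timestamp
                Duration window = Duration.ofMinutes(5);
                try (Stream<Path> paths = Files.walk(Paths.get("/usr/local"))) {
                    paths.forEach(p -> {
                        try {
                            BasicFileAttributes a = Files.readAttributes(p, BasicFileAttributes.class);
                            Instant created = a.creationTime().toInstant();
                            if (Duration.between(center, created).abs().compareTo(window) <= 0) {
                                System.out.println(created + "  " + p);
                            }
                        } catch (IOException ignored) {
                            // unreadable entries are skipped
                        }
                    });
                }
            }
        }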

    Read the article

  • Use Mac Pro as Time Machine, server and editing station?

    - by Dan
    Background: My fiancée needs a Mac Pro for movie editing and rendering. I need a web server and a backup solution for my MacBook Pro.
    Idea: We thought we could split the costs of the Mac Pro and set it up to act as both a web server and a backup device.
    Question: Is this a good idea? Specifically:
    - Is it easy to set it up to incrementally back up one or several laptops over wifi? And what software would you recommend?
    - Is it silent and stable enough to run a web server continuously?
    - Will it manage all this, including simultaneous editing?
    Thanks.

    Read the article

  • How to have a soft-real-time process in presence of heavily swapping IO-intensive background load?

    - by Vi
    schedtool reports: PID 32301: PRIO 4, POLICY R: SCHED_RR, NICE -20, AFFINITY 0xf. ionice reports: realtime: prio 4. But the music is stuttering anyway. The background load is low priority (SCHED_IDLEPRIO, idle ionice), but it uses a lot of memory (more than is physically available) and does a lot of IO and calculation. Latencytop shows about 1500 ms for: following symlink; writing buffer to disk (sync); page fault; writing a page to disk - both for the background load and for unrelated processes. Load average is 10 and counting. Why can't the system allocate, for example, 200 MHz of one of the cores, 32 MB of memory, and a guaranteed IO opportunity at least once per second to mplayer, to keep it happy while continuing the calculations in the background? Or: why can't it leave the background task and swap to thrash each other while keeping the rest of the system as responsive as if there were no background load? How can I have RT processes AND a heavy background load simultaneously (without virtual machines)?

    Read the article

  • Snapshots are disappearing from Time Machine. Why?

    - by AntonAL
    I have Time Machine enabled for my external HDD, and I trigger backups manually twice per week. I have noticed that some snapshots I made are gone. I first noticed this behaviour several weeks ago, but I had doubts. To make sure, I recorded a screen video of the snapshot listing in Time Machine. After watching my screen recording today, I am sure that for August I had 9 snapshots; now only 3 of them remain. What's happening? I have had no disk crash reports, errors, etc.

    Read the article

  • Using Entity Framework 4.0 with Code-First and POCO: How to Get Parent Object with All its Children

    - by SirEel
    I'm new to EF 4.0, so maybe this is an easy question. I've got VS2010 RC and the latest EF CTP. I'm trying to implement the "Foreign Keys" code-first example on the EF Team's Design Blog, http://blogs.msdn.com/efdesign/archive/2009/10/12/code-only-further-enhancements.aspx:

        public class Customer
        {
            public int Id { get; set; }
            public string CustomerDescription { get; set; }
            public IList<PurchaseOrder> PurchaseOrders { get; set; }
        }

        public class PurchaseOrder
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }
            public Customer Customer { get; set; }
            public DateTime DateReceived { get; set; }
        }

        public class MyContext : ObjectContext
        {
            public MyContext(EntityConnection connection) : base(connection){}

            public IObjectSet<Customer> Customers
            {
                get { return base.CreateObjectSet<Customer>(); }
            }
        }

    I use a ContextBuilder to configure MyContext:

        var builder = new ContextBuilder<MyContext>();
        var customerConfig = builder.Entity<Customer>();
        customerConfig.Property(c => c.Id).IsIdentity();
        var poConfig = builder.Entity<PurchaseOrder>();
        poConfig.Property(po => po.Id).IsIdentity();
        poConfig.Relationship(po => po.Customer)
            .FromProperty(c => c.PurchaseOrders)
            .HasConstraint((po, c) => po.CustomerId == c.Id);

    This works correctly when I'm adding new Customers, but not when I try to retrieve existing Customers. This code successfully saves a new Customer and all its child PurchaseOrders:

        using (var context = builder.Create(connection))
        {
            context.Customers.AddObject(customer);
            context.SaveChanges();
        }

    But this code only retrieves Customer objects; their PurchaseOrders lists are always empty:

        using (var context = builder.Create(connection))
        {
            var customers = context.Customers.ToList();
        }

    What else do I need to do to the ContextBuilder to make MyContext always retrieve all the PurchaseOrders with each Customer?

    Read the article

  • New Trusted Status awarded to first Mobile Java Developer

    - by Jacob Lehrbaum
    Java Verified has just announced that Gameloft is the first developer to receive its new Trusted Status! Java Verified is an industry-recognized Java testing and signing program backed and funded by companies such as AT&T, LG, Motorola, Nokia, Oracle, Orange, Samsung and Vodafone, and chartered with making it easier for mobile developers to certify and deploy applications for use across the billions of mobile handsets that run Java ME. Because of its breadth and diversity, Java ME provides an unmatched opportunity to reach more than 3 billion consumers, but at the same time developers are faced with the challenge of working with multiple distribution channels and a range of handsets. To this end, the Java Verified program provides a suite of tests that help to validate identity, functionality, integrity, and quality. Since its rebirth in 2010 as an independent organization, the Java Verified program has been actively working to make it even easier to create and distribute Java ME apps. Example initiatives include updates to the Unified Testing Criteria to make it easier to test "Simple Apps," community outreach to better understand and address developer pain points, and a new "Trusted Status." In the words of the Java Verified program, Trusted Status is: "a privileged status to be granted to developers who will have proven that the quality of their Java ME apps is of a consistently high standard. These are developers who will have earned the trust of Java Verified by demonstrating unfailingly that testing to the UTC standard is a crucial part of their product development activity." The first developer to be awarded this status is Gameloft. By achieving Trusted Status, Gameloft can now test its applications to the Java Verified standard without needing to provide Java Verified with the evidence. The apps then automatically get signed with the Java Verified signature, enabling Gameloft to benefit from reduced costs and time-to-market for its new Java ME applications from here on out. Learn more about the exciting news or apply now for Trusted Status!

    Read the article

  • ASP.NET MVC: MVC Time Planner is available at CodePlex

    - by DigiMortal
    Almost every week I get e-mails in which my dear readers ask for the source code of my ASP.NET MVC and FullCalendar example. I have some great news, guys! I ported my sample application to Visual Studio 2010 and made it available at CodePlex. Feel free to visit the page of MVC Time Planner. NB! The current release of MVC Time Planner is the initial one, and it is basically a conversion of the VS2008 example solution to VS2010. The current source code is not study material, but it gives you an idea of how to make calendars work together. Future releases will introduce many design and architectural improvements, and I have also planned some new features. How does MVC Time Planner look? The image on the right shows you how the time planner looks. It uses the default design elements of ASP.NET MVC applications and jQuery UI. If I find some artist skills in myself I will improve the design too, of course. :) Currently only the day view of the calendar is available; other views are coming in the near future (I hope within a week or two). And here are some important links you may find useful:
    - MVC Time Planner page @ CodePlex
    - Documentation
    - Release plan
    - Help and support - questions, ideas, other communication
    - Bugs and feature requests
    If you have any questions or you are interested in new features, please feel free to contact me through the MVC Time Planner discussion forums.

    Read the article

  • Time Travel 101

    - by Jim Duffy
    I’m thinking maybe I should have used Time Crunching 101 as the title instead… or maybe ‘Duh Duffy, where have you been? Everyone knows that!” Ok, so maybe you won’t actually learn how to travel through time from this post but you will learn how to cram more learning into one day. We all know you can’t make it to every conference, every presentation, or every training session. The good news is that many of those events make their content available to either watch online or to download for off-line viewing. The problem is who has time to sit and watch all those presentations in real time? Not me. One trick I use is to view the content at an increased play rate. Why listen to a boring speaker like me drone on for the entire length of the session when you can listen to them drone on in almost half the time. :-) I view nearly all off-line content with Windows Media Player though I’m sure you can implement this idea with any media playback software. The idea is changing the playback speed you view the content at. With Windows Media Player you can change the play speed from the menu system. Once you have the Play Speed Setting panel open you can specify the playback speed. Depending on the content and the presenter I can typically listen between 1.6 and 2.0 times normal speed. My Florida edumacation taught me that playing the video back at twice the speed means I’ll listen to it twice as fast and that means I can view it in almost 1/2 the time.  Too bad it won’t make me twice as smart. :-) I hope this helps you speed your way through more training content. Have a day. :-|

    Read the article

  • Passing elapsed time to the update function from the game loop

    - by Sri Harsha Chilakapati
    I want to pass the elapsed time to the update() method, as this would make it easy to implement animations and other time-related concepts. Here's my game loop:

        public void gameLoop(){
            boolean running = true;
            long gameTime = getCurrentTime();
            long elapsedTime = 0;
            long lastUpdateTime = 0;
            int loops;
            while (running){
                loops = 0;
                while(getCurrentTime()>gameTime && loops<Global.MAX_FRAMESKIP){
                    elapsedTime = getCurrentTime() - lastUpdateTime;
                    lastUpdateTime = getCurrentTime();
                    update(elapsedTime);
                    gameTime += SKIP_STEPS;
                    loops++;
                }
                displayGame();
            }
        }

    The getCurrentTime() method:

        public long getCurrentTime(){
            return (System.nanoTime()/1000000);
        }

    The update() method:

        long time = 0;

        public void update(long elapsedTime){
            time += elapsedTime;
            if (time>=1000){
                System.out.println("A second elapsed");
                time -= 1000;
            }
        }

    But this prints the message only every 3 seconds. Thanks.
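
    For comparison, a sketch of the usual fixed-timestep variant, not a guaranteed fix: two things stand out above - lastUpdateTime starts at 0, so the very first elapsedTime is a huge bogus value, and re-reading the clock inside the catch-up loop makes the tick lengths uneven. In a fixed-timestep loop, update() is normally handed the constant step instead (this assumes SKIP_STEPS is the tick length in milliseconds):

        public void gameLoop(){
            boolean running = true;
            long gameTime = getCurrentTime();
            int loops;
            while (running){
                loops = 0;
                while (getCurrentTime() > gameTime && loops < Global.MAX_FRAMESKIP){
                    // each catch-up tick advances the simulation by exactly one fixed step
                    update(SKIP_STEPS);
                    gameTime += SKIP_STEPS;
                    loops++;
                }
                displayGame();
            }
        }

    With that change, the accumulator in update() grows by exactly SKIP_STEPS per tick, so "A second elapsed" fires once per simulated second regardless of how the ticks fall against wall-clock time.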

    Read the article

  • Oracle OpenWorld 2013: First glimpses of the new SOA Suite 12c by Lucas Jellema

    - by JuergenKress
    During this week’s Oracle OpenWorld conference, we were given some sneak peeks into the short-term future of the Oracle SOA Suite. During various roadmap sessions, on the demo grounds, and in the keynote session by Thomas Kurian (the replay of which you can see here), new features were described and demonstrated, allowing us to get a fairly good overview of what is coming for SOA Suite - later in 2013 and sometime in 2014 (probably the first half of that year). The SOA Suite plays an important part in the three themes Thomas Kurian set down for the Fusion Middleware suite of products: support for mobility, cloud and business user empowerment. Some of the highlighted new aspects of Oracle SOA Suite are:
    - Adapters to connect from on-premise to the cloud - specifically targeting Salesforce and RightNow - along with an SDK to create custom integrations into the cloud (the first cloud adapters will be released on 11g, before the end of the year)
    - Mobile enablement, by exposing RESTful services that communicate using JSON, as well as adding the capability to call out to such services (12c functionality)
    - Enhanced functionality on Exalogic (of course it runs faster on Exalogic - up to 20 times)
    - A modular runtime with a lighter footprint
    A brief demonstration of the Cloud Adapter was given by Demed L’Her during said keynote. The next screenshot shows the Adapter wizard for the Cloud Adapter: it allows the developer to pick a specific operation for a specific business object exposed by RightNow (or Salesforce), since the adapter knows about the APIs exposed by both. The following screenshot shows the adapter that is used in SOA Suite 12c to expose a RESTful service on top of an SCA composite or a Service Bus service. Read the full article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • First JSRs Proposed for Java EE 7

    - by Jacob Lehrbaum
    With the approval of the Java SE 7 and Java SE 8 JSRs last month, attention is now shifting towards the Java EE platform. While functionality pegged for Java EE 7 was previewed at least as early as Devoxx, the filing of these JSRs marks the first officially proposed specifications for the next generation of the popular application server standard. Let's take a quick look at the proposed new functionality.

    Java Persistence API 2.1. The first of the newly proposed specifications is JSR 338: Java Persistence API (JPA) 2.1. JPA is designed for use with both Java EE and Java SE and "deals with the way relational data is mapped to Java objects ("persistent entities"), the way that these objects are stored in a relational database so that they can be accessed at a later time, and the continued existence of an entity's state even after the application that uses it ends. In addition to simplifying the entity persistence model, the Java Persistence API standardizes object-relational mapping." (more about JPA)

    JAX-RS 2.0. The second of the newly proposed Java specifications is JSR 339, otherwise known as JAX-RS 2.0. JAX-RS provides an API that enables the easy creation of web services using the Representational State Transfer (REST) architecture. Key features proposed in the new JSR include a Client API, improved support for URIs, a Model-View-Controller architecture and much more!

    More information: Official proposal for Java Persistence 2.1 (jcp.org) | Official proposal for JAX-RS 2.0 (jcp.org) | Kicking off Java EE 7 with 2 JSRs: JAX-RS 2.0 / JPA 2.1 (the Aquarium)
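
    To give a flavor of the annotation-driven programming model JAX-RS standardizes, here is a minimal resource class; this is illustrative only (the class and path names are made up), and it uses the existing JAX-RS 1.x annotations that 2.0 builds on:

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;

        @Path("/greetings")
        public class GreetingResource {

            // GET /greetings/{name} returns a small JSON document
            @GET
            @Path("/{name}")
            @Produces("application/json")
            public String greet(@PathParam("name") String name) {
                return "{\"greeting\": \"Hello, " + name + "\"}";
            }
        }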

    Read the article

  • Almost at our first year anniversary!

    - by Vizioz Limited
    It has been a hectic first year at Vizioz, and things are still going from strength to strength. Eleven months ago I started Vizioz with zero capital investment in the middle of a recession, which to some may seem a daunting prospect, but to others, including myself, it was the challenge I needed to make me want to get up in the morning :) I wanted to prove that even in the current financial climate it is still possible to start a new business. We are still experiencing the normal growing pains of a small business, but this is something we just need to work our way through; it is amazing how much paperwork and administration there is in running a small business: office admin, insurance, VAT and, for the last few months, PAYE. For the last 9 months we have shared an office with another small business called Little Big Ideas. They are a design agency working across a broad spectrum of design, from branding to print and digital. Last month we decided to move to a larger office; we now have room for 8 of us, so we need a couple more clients to help produce enough work to fill the space and grow to the next level. As well as moving office, 2 months ago I blogged about my first employee, Colin, starting work for me. He has picked up Umbraco very well and has mastered the art of good CSS design; as the majority of our clients are large multi-nationals, they still require support for IE6, which as all web developers know is the nightmare of all web browsers. This month has seen the next step in the growth of Vizioz, as I have taken on another PhD graduate called Pricilla - welcome to the team! This month we plan to launch our own website to enable us to showcase some of the sites we have built over the past 11 months and to allow potential clients to see what we can offer. We might still be relatively small, but we have some great case studies to show, and with two PhD graduates on the team we have great talent capable of producing complex and innovative solutions for our clients. As soon as we have launched our new website I will blog again about what the future holds for Vizioz and what we can offer our prospective clients, as well as the obvious Umbraco CMS solutions.

    Read the article

  • EF4 CTP5 Code First Remove Cascading Deletes

    - by Dane Morgridge
    I have been using EF4 CTP5 with code first, and I really like the new code. One issue I was having, however, is that cascading deletes are on by default. This may come as a surprise, because when using Entity Framework with anything but code first, this is not the case. I ran into an exception with some one-to-many relationships I had:

        Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors.

    To get around this, you can use the fluent API and put some code in OnModelCreating:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<UserProfile>()
                .HasMany(u => u.ProjectAuthorizations)
                .WithRequired(a => a.UserProfile)
                .WillCascadeOnDelete(false);
        }

    This will work to remove the cascading delete, but I have to use the fluent API, and it has to be done for every one-to-many relationship that causes the problem. I am personally not a fan of cascading deletes in general (for several reasons), and I'm not a huge fan of fluent APIs. However, there is a way to do this without using the fluent API: in OnModelCreating, you can remove the convention that creates the cascading deletes altogether.

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        }

    Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether. There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do that in the near future.

    Read the article

  • First Minecraft mod not working: make a new sword

    - by yamikoWebs
    I am making my first mod and cannot see what is wrong with it. I am using MCP and ModLoader. For my first mod I was going to make swords. I started by adding new EnumToolMaterial values:

        WOOD(0, 59, 2.0F, 0, 15),
        STONE(1, 131, 4.0F, 1, 5),
        IRON(2, 250, 6.0F, 2, 14),
        LAPIS(3, 750, 7.0F, 2, 14),
        OBSIDIAN(3, 1000, 7.5F, 3, 12),
        EMERALD(3, 1561, 8.0F, 3, 10), // diamond
        GREEN(3, 2000, 9.0F, 4, 10),   // emerald
        GOLD(0, 200, 12.0F, 0, 22);

    Then here is the mod class:

        public class _Mod_Yamiko extends BaseMod{

            /* mod items */
            public static final Item swordLapis = (new ItemSword(600, EnumToolMaterial.LAPIS)).setItemName("swordLapis");
            public static final Item swordObsidian = (new ItemSword(601, EnumToolMaterial.OBSIDIAN)).setItemName("swordObsidian");
            public static final Item swordGreen = (new ItemSword(602, EnumToolMaterial.GREEN)).setItemName("swordGreen");

            public void load(){
                //set images
                swordLapis.iconIndex = ModLoader.addOverride("/gui/items.png","/gui/swordLapis.png");
                ModLoader.addName(swordLapis, "Lapis Sword");
                //craft
                ModLoader.addRecipe(new ItemStack(_Mod_Yamiko.swordLapis, 1), new Object[]{
                    " X ", " X ", " Y ", 'X', Block.dirt, 'Y', Item.stick
                });
            }

            public String getVersion(){
                return "0.1";
            }
        }

    Then I made a 16×16 .png image. I was not sure where to save it, so I recompiled and reobfuscated, took the mod files and put them in my local Minecraft install, and added the image where it should be. There are no problems when playing, but I cannot craft the new sword.
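
    One thing worth double-checking when comparing against your own setup: with the recipe as registered above, the lapis sword is crafted from two dirt blocks stacked over a stick in the middle column of the grid - a common slip is trying lapis in-game instead. If the intent was a lapis-based recipe, the registration would look something like the sketch below (this assumes the MCP field name Block.blockLapis, which you should verify against your mappings):

        //craft from lapis blocks rather than dirt (hypothetical field name)
        ModLoader.addRecipe(new ItemStack(_Mod_Yamiko.swordLapis, 1), new Object[]{
            " X ", " X ", " Y ", 'X', Block.blockLapis, 'Y', Item.stick
        });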

    Read the article
