Search Results

Search found 6460 results on 259 pages for 'cpp person'.


  • Who Makes a Good Product Owner

    - by Robert May
    In general, the best product owners are those that care passionately about the customer of the product. Note that I didn’t say about the product itself. Actually, people that only care about the product generally do not make good product owners. Products only matter in relationship to their customers. If a product doesn’t provide value to the customer, then the product has no value, no matter what a person might think of the product, and no matter what cool technologies exist inside of the product.

    A good product owner is also a good negotiator. They recognize that many different priorities exist inside of a corporation, but that there can be only one list that developers work from. A good product owner recognizes that it’s their job to help others around them prioritize (perhaps with a Product Council), but also understands that they alone have the final say about priorities and must be willing to make the tough decisions required. Deciding the priority between two perfectly valid stories is very difficult, especially when the stories are from two different departments!

    A good product owner is deeply interested in helping the team be successful. They don’t seek to control the team, but instead seek to understand what the team can do and then work with the team to get the best product possible for the customer. A good product owner is never denigrating to team members, ever. They recognize that such behavior would damage the trust that needs to be present between team members and product owners, and will avoid it at all costs.

    In general, technical people (i.e. former or current developers) make poor product owners. In their minds, they can’t separate implementation details from user functionality, so their stories end up sounding like implementation details. For example, “The user enters their username on the password screen” is something that a technical product owner would write. The proper wording for that story is “A user supplies the system with their credentials.” Because technical people think differently from the rest of the population, they are generally not a good fit.

    A good product owner is also a good writer. Writing good stories demands good writing. The art of persuasion, descriptiveness, and just general good grammar are all required. A good product owner must also be well spoken, since most of what will be conveyed will be conveyed with the spoken word, not just the written word.

    A good product owner is a “people person.” They like talking to people and are very patient. They don’t mind having questions repeated or fielding many questions, because they want to make sure that the ideas they’re conveying are properly understood so the customer gets the best product possible. They are happy to answer any questions a team member may have and invite feedback and criticism of designs and stories, since they want a good product. They really have little ego that gets in the way of building a great product.

    All of these qualities can be hard to find, but if you look closely enough, you’ll find the right person in your organization. Product owners can be found anywhere, not just in upper management. Some of the best product owners are those that are very close to the customer. In fact, check your customer support staff. I’d bet that several great product owners are lurking there.

    A final note about what makes a good product owner: you’re probably NOT going to find a good product owner in a manager, especially if they consider themselves a “Manager.” Product owners don’t manage anything but the backlog, so be especially careful if the person you’re selecting for Product Owner is a manager. Up next, “Messing with the Team.” Technorati Tags: Scrum, Product Owner

    Read the article

  • When is your interview?

    - by Rob Farley
    Sometimes it’s tough to evaluate someone – to figure out if you think they’d be worth hiring. These days, since starting LobsterPot Solutions, I have my share of interviews, on both sides of the desk. Sometimes I’m checking out potential staff members; sometimes I’m persuading someone else to get us on board for a project. Regardless of who is on which side of the desk, we’re both checking each other out.

    The world is not how it was some years ago. I’m pretty sure that every time I walk into a room for an interview, I’ve searched for them online, and they’ve searched for me. I suspect they usually have the easier time finding me, although there are obviously other Rob Farleys in the world. They may have even checked out some of my presentations from conferences, read my blog posts, maybe even heard me tell jokes or sing. I know some people need me to explain who I am, but for the most part, I think they’ve done plenty of research long before I’ve walked in the room.

    I remember when this was different (as it could be for you still). I remember a time when I dealt with recruitment agents, looking for work. I remember sitting in rooms having been given a test designed to find out if I knew my stuff or not, and then being pulled into interviews with managers who had to find out if I could communicate effectively. I’d need to explain who I was, what kind of person I was, what my value-system involved, and so on. I’m sure you understand what I’m getting at. (Oh, and in case you hadn’t realised, it’s a T-SQL Tuesday post, this month about interviews.)

    At TechEd Australia some years ago (either 2009 or 2010 – I forget which), I remember hearing a comment made during the ‘locknote’, the closing session. The presenter described a conversation he’d heard between two girls, discussing a guy that one of them had just started dating. The other girl expressed horror at the fact that her friend had met this guy in person, rather than through an online dating agency. The presenter pointed out that people realise that there’s a certain level of safety provided through the checks that those sites do. I’m not sure I completely trust this, but I’m sure it’s true for people’s technical profiles.

    If I interview someone, I hope they have a profile. I hope I can look at what they already know. I hope I can get samples of their work, and see how they communicate. I hope I can get a feel for their sense of humour. I hope I already know exactly what kind of person they are – their value system, their beliefs, their passions. Even their grammar. I can work out if the person is a good risk or not from who they are online. If they don’t have an online presence, then I don’t have this information, and the risk is higher.

    So if you’re interviewing with me, your interview started long before the conversation. I hope it started before I’d ever heard of you. I know the interview in which I’m being assessed started before I even knew there was a product called SQL Server. It’s reflected in what I write. It’s in the way I present. I have spent my life becoming me – so let’s talk! @rob_farley

    Read the article

  • fail2ban custom action to permanent ban IPs from China

    - by John Magnolia
    When an IP address gets banned, how can I check whether the banned IP address is from China? If it is, I then want to add it to a permanent ban list. I have found this nice guide, which writes the banned IPs to a file. Reason: I am getting a lot of brute force attacks from China daily; thankfully fail2ban is helping restrict this, although the attacks appear to be getting worse and they just keep changing their IP address. Even better would be a maintained database of known hacker IP addresses.

    Example 1: Hi, The IP 60.169.78.77 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 60.169.78.77: % [whois.apnic.net node-7] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 60.166.0.0 - 60.175.255.255 netname: CHINANET-AH descr: CHINANET anhui province network descr: China Telecom descr: A12,Xin-Jie-Kou-Wai Street descr: Beijing 100088 country: CN admin-c: CH93-AP tech-c: JW89-AP mnt-by: APNIC-HM mnt-routes: MAINT-CHINANET-AH mnt-lower: MAINT-CHINANET-AH status: ALLOCATED PORTABLE changed: [email protected] 20040721 source: APNIC person: Chinanet Hostmaster nic-hdl: CH93-AP e-mail: [email protected] address: No.31 ,jingrong street,beijing address: 100032 phone: +86-10-58501724 fax-no: +86-10-58501724 country: CN changed: [email protected] 20070416 mnt-by: MAINT-CHINANET source: APNIC person: Jinneng Wang address: 17/F, Postal Building No.120 Changjiang address: Middle Road, Hefei, Anhui, China country: CN phone: +86-551-2659073 fax-no: +86-551-2659287 e-mail: [email protected] nic-hdl: JW89-AP mnt-by: MAINT-NEW changed: [email protected] 19990818 source: APNIC Regards, Fail2Ban

    Example 2: Hi, The IP 60.169.78.81 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 60.169.78.81: % [whois.apnic.net node-6] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 60.166.0.0 - 60.175.255.255 netname: CHINANET-AH descr: CHINANET anhui province network descr: China Telecom descr: A12,Xin-Jie-Kou-Wai Street descr: Beijing 100088 country: CN admin-c: CH93-AP tech-c: JW89-AP mnt-by: APNIC-HM mnt-routes: MAINT-CHINANET-AH mnt-lower: MAINT-CHINANET-AH status: ALLOCATED PORTABLE changed: [email protected] 20040721 source: APNIC person: Chinanet Hostmaster nic-hdl: CH93-AP e-mail: [email protected] address: No.31 ,jingrong street,beijing address: 100032 phone: +86-10-58501724 fax-no: +86-10-58501724 country: CN changed: [email protected] 20070416 mnt-by: MAINT-CHINANET source: APNIC person: Jinneng Wang address: 17/F, Postal Building No.120 Changjiang address: Middle Road, Hefei, Anhui, China country: CN phone: +86-551-2659073 fax-no: +86-551-2659287 e-mail: [email protected] nic-hdl: JW89-AP mnt-by: MAINT-NEW changed: [email protected] 19990818 source: APNIC Regards, Fail2Ban

    Example 3: Hi, The IP 222.133.244.99 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 222.133.244.99: % [whois.apnic.net node-6] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 222.133.244.96 - 222.133.244.127 netname: LCZFFHQ country: CN descr: liaochenggovermentfanghuoqiang admin-c: DS95-AP tech-c: DS95-AP status: ASSIGNED NON-PORTABLE changed: [email protected] 20060122 mnt-by: MAINT-CNCGROUP-SD source: APNIC route: 222.132.0.0/14 descr: CNC Group CHINA169 Shandong Province Network country: CN origin: AS4837 mnt-by: MAINT-CNCGROUP-RR changed: [email protected] 20060118 source: APNIC person: Data Communication Bureau Shandong nic-hdl: DS95-AP e-mail: [email protected] address: No.77 Jingsan Road,Jinan,Shandong,P.R.China phone: +86-531-6052611 fax-no: +86-531-6052414 country: CN changed: [email protected] 20050330 mnt-by: MAINT-CNCGROUP-SD source: APNIC Regards, Fail2Ban

    Read the article

  • What Counts For a DBA: Replaceable

    - by Louis Davidson
    Replaceable is what every employee in every company instinctively strives not to be. Yet, if you’re an irreplaceable DBA, meaning that the company couldn’t find someone else who could do what you do, then you’re not doing a great job. A good DBA is replaceable. I imagine some of you are already reaching for the lighter fluid, about to set the comments section ablaze, but before you destroy a perfectly good Commodore 64, read on… Everyone is replaceable, ultimately. Anyone, anywhere, in any job, could be sitting at their desk reading this, blissfully unaware that this is to be their last day at work. Morbidly, you could be about to take your terminal breath. Ideally, it will be because another company suddenly offered you a truck full of money to take a new job, forcing you to bid a regretful farewell to your current employer (with barely a “so long suckers!” left wafting in the air as you zip out of the office like the Wile E Coyote wearing two pairs of rocket skates). I’ve often wondered what it would be like to be present at the meeting where your former work colleagues discuss your potential replacement. It is perhaps only at this point, as they struggle with the question “What kind of person do we need to replace old Wile?” that you would know your true worth in their eyes. Of course, this presupposes you need replacing. I’ve known one or two people whose absence we adequately compensated with a small rock, to keep their old chair from rolling down a slight incline in the floor. On another occasion, we bought a noise-making machine that frequently attracted attention its way, with unpleasant sounds, but never contributed anything worthwhile. These things never actually happened, of course, but you take my point: don’t confuse replaceable with expendable. Likewise, if the term “trained seal” comes up, someone they can teach to follow basic instructions and push buttons in the right order, then the replacement discussion is going to be over quickly. What, however, if your colleagues decide they’ll need a super-specialist to replace you. That’s a good thing, right? Well, usually, in my experience, no it is not. It often indicates that no one really knows what you do, or how. A typical example is the “senior” DBA who built a system just before 16-bit computing became all the rage and then settled into a long career managing it. Such systems are often central to the company’s operations and the DBA very skilled at what they do, but almost impossible to replace, because the system hasn’t evolved, and runs on processes and routines that others no longer understand or recognize. The only thing you really want to hear, at your replacement discussion, is that they need someone skilled at the fundamentals and adaptable. This means that the person they need understands that their goal is to be an excellent DBA, not a specialist in whatever the-heck the company does. Someone who understands the new versions of SQL Server and can adapt the company’s systems to the way things work today, who uses industry standard methods that any other qualified DBA/programmer can understand. More importantly, this person rarely wants to get “pigeon-holed” and so documents and shares the specialized knowledge and responsibilities with their teammates. Being replaceable doesn’t mean being “dime a dozen”. 
The company might need four people to take your place due to the depth of your skills, but still, they could find those replacements and those replacements could step right in using techniques that any decent DBA should know. It is a tough question to contemplate, but take some time to think about the sort of person that your colleagues would seek to replace you. If you think they would go looking for a “super-specialist” then consider urgently how you can diversify and share your knowledge, and start documenting all the processes you know as if today were your last day, because who knows, it just might be.

    Read the article

  • Incentivizing Work with Development Teams

    - by MarkPearl
    Recently I saw someone on Twitter asking about incentives and if anyone had past experience with incentivizing work. I promised to respond with some of the experiences I have had in the past, so here goes... **Disclaimer** - these are my experiences with incentives, generally in software development - in some other industries this may not be applicable – this is also my thinking at this point in time, and with more experience my opinion may change.

    Incentivize at the level that you want people to group at

    If you want to promote a team mentality, incentivize teams. If you want to promote an individual mentality, incentivize individuals. There is nothing worse than mixing this up. Some organizations put a lot of effort into establishing teams and team mentalities but reward individuals. This has a counter effect on the resources they have put towards establishing a team mentality. In the software projects that I work with, we want to promote cross-functional teams that collaborate. Personally, if I was on a team and knew that there was an opportunity to work on a critical component of the system, and that by doing so I would get a bigger bonus, then I would be hesitant to include other people in solving that problem. Thus, I would hinder the team's efforts in being cross-functional and reduce collaboration levels.

    Does that mean everyone in the team should get an even share of an incentive? In most situations I would say yes - even though this may feel counter-intuitive. I have heard arguments put forward that "if person X contributed more than person Y then they should be rewarded more" – this may sound controversial, but I would rather treat people how you would like them to perform, not where they currently are. To add to this approach, if someone is freeloading, you can bet your bottom dollar that the team is going to make this a lot more transparent if they feel that individual is going to be rewarded at the same level as everyone else.

    Bad incentives promote destructive work

    If you are going to incentivize people, pick your incentives very carefully. I had an experience once with a sales person who was told they would get a bonus provided that they met an ordering target with a particular supplier. What did this person do? They sold everything at cost for the next month or so. They reached the goal, but the company didn't gain anything from it. It was a bad incentive. Expect the same with development teams: if you incentivize zero bug levels, you will get zero code committed to the solution. If you incentivize lines of code, you will get many, many lines of bad code.

    Is there such a thing as a good incentive?

    Money-wise, I am not sure there is. I would much rather encourage organizations to pay their people what they are worth upfront. I would also advise against paying money to teams as an incentive or even a bonus or reward for reaching a milestone. Rather have a breakaway for the team that promotes team building as a reward if they reach a milestone than pay them more money. I would also advise against making the incentive the reason for them to reach the milestone. If this becomes the norm, it encourages people to begin to only do their job if there is an incentive at the end of the line. This is not a behaviour one wants to encourage. If the team or individual is in the right mind-set, they should not work any harder than they are right now with normal pay.

    Read the article

  • Class design issue

    - by user2865206
    I'm new to OOP and a lot of times I become stumped in situations similar to this example. Task: generate an XML document that contains information about a person. Assume the information is readily available in a database. Here is an example of the structure:

        <Person>
          <Name>John Doe</Name>
          <Age>21</Age>
          <Address>
            <Street>100 Main St.</Street>
            <City>Sylvania</City>
            <State>OH</State>
          </Address>
          <Relatives>
            <Parents>
              <Mother>
                <Name>Jane Doe</Name>
              </Mother>
              <Father>
                <Name>John Doe Sr.</Name>
              </Father>
            </Parents>
            <Siblings>
              <Brother>
                <Name>Jeff Doe</Name>
              </Brother>
              <Brother>
                <Name>Steven Doe</Name>
              </Brother>
            </Siblings>
          </Relatives>
        </Person>

    OK, let's create a class for each tag (i.e. Person, Name, Age, Address):
    - Let's assume each class is only responsible for itself and the elements directly contained.
    - Each class will know (have defined by default) the classes that are directly contained within it.
    - Each class will have a process() function that will add itself and its children to the XML document we are creating.
    - When a child is drawn, as in the previous point, we will have it call process() as well.
    - Now we are in a recursive loop where each object draws its children until all are drawn.

    But what if only some of the tags need to be drawn, and the rest are optional? Some are optional based on whether the data exists (if we have it, we must draw it), and some are optional based on the preferences of the user generating the document. How do we make sure each object has the data it needs to draw itself and its children? We can pass down a massive array through every object, but that seems shitty, doesn't it? We could have each object query the database for it, but that's a lot of queries, and how does it know what its query is? What if we want to get rid of a tag later? There is no way to reference them.

    I've been thinking about this for 20 hours now. I feel like I am misunderstanding a design principle or am just approaching this all wrong. How would you go about programming something like this? I suppose this problem could apply to any scenario where there are classes that create other classes, but the classes created need information to run. How do I get the information to them in a way that doesn't seem fucky? Thanks for all of your time, this has been kicking my ass.
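
    One common way out of this, sketched below in C# (all type and member names are invented for illustration, not taken from the question): run the database query once, hand every node the same small data object plus an options object, and let each optional node decide for itself whether to emit anything. Removing a tag later then just means removing its node class from its parent's child list.

        using System;
        using System.Collections.Generic;
        using System.Xml;

        // Hypothetical data holder: filled once up front, then shared by reference.
        class PersonData
        {
            public string Name;
            public int? Age;                                  // null means "we don't have it"
            public Dictionary<string, string> Address = new Dictionary<string, string>();
        }

        // Preferences of whoever is generating the document.
        class RenderOptions
        {
            public bool IncludeAddress = true;
        }

        interface IXmlNode
        {
            // Every node writes itself (and its children) -- or nothing at all.
            void Write(XmlWriter writer, PersonData data, RenderOptions options);
        }

        class AgeNode : IXmlNode
        {
            public void Write(XmlWriter writer, PersonData data, RenderOptions options)
            {
                if (data.Age == null) return;                 // optional: skip when the data is missing
                writer.WriteElementString("Age", data.Age.Value.ToString());
            }
        }

        class AddressNode : IXmlNode
        {
            public void Write(XmlWriter writer, PersonData data, RenderOptions options)
            {
                if (!options.IncludeAddress || data.Address.Count == 0) return;  // optional: user preference
                writer.WriteStartElement("Address");
                foreach (var part in data.Address)
                    writer.WriteElementString(part.Key, part.Value);
                writer.WriteEndElement();
            }
        }

        class PersonNode : IXmlNode
        {
            // The parent only lists the children it may have; each child decides whether to appear.
            readonly List<IXmlNode> children = new List<IXmlNode> { new AgeNode(), new AddressNode() };

            public void Write(XmlWriter writer, PersonData data, RenderOptions options)
            {
                writer.WriteStartElement("Person");
                writer.WriteElementString("Name", data.Name);
                foreach (var child in children)
                    child.Write(writer, data, options);
                writer.WriteEndElement();
            }
        }

        class Demo
        {
            static void Main()
            {
                var data = new PersonData { Name = "John Doe", Age = 21 };
                data.Address["Street"] = "100 Main St.";
                using (var writer = XmlWriter.Create(Console.Out, new XmlWriterSettings { Indent = true }))
                    new PersonNode().Write(writer, data, new RenderOptions());
            }
        }

    Because every Write call receives the same two lightweight objects, nothing needs its own database access, and there is no giant array being threaded through the hierarchy.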

    Read the article

  • "Expected initializer before '<' token" in header file

    - by Sarah
    I'm pretty new to programming and am generally confused by header files and includes. I would like help with an immediate compile problem and would appreciate general suggestions about cleaner, safer, slicker ways to write my code. I'm currently repackaging a lot of code that used to be in main() into a Simulation class. I'm getting a compile error with the header file for this class. I'm compiling with gcc version 4.2.1. // Simulation.h #ifndef SIMULATION_H #define SIMULATION_H #include <cstdlib> #include <iostream> #include <cmath> #include <string> #include <fstream> #include <set> #include <boost/multi_index_container.hpp> #include <boost/multi_index/hashed_index.hpp> #include <boost/multi_index/member.hpp> #include <boost/multi_index/ordered_index.hpp> #include <boost/multi_index/mem_fun.hpp> #include <boost/multi_index/composite_key.hpp> #include <boost/shared_ptr.hpp> #include <boost/tuple/tuple_comparison.hpp> #include <boost/tuple/tuple_io.hpp> #include "Parameters.h" #include "Host.h" #include "rng.h" #include "Event.h" #include "Rdraws.h" typedef multi_index_container< // line 33 - first error boost::shared_ptr< Host >, indexed_by< hashed_unique< const_mem_fun<Host,int,&Host::getID> >, // 0 - ID index ordered_non_unique< tag<age>,const_mem_fun<Host,int,&Host::getAgeInY> >, // 1 - Age index hashed_non_unique< tag<household>,const_mem_fun<Host,int,&Host::getHousehold> >, // 2 - Household index ordered_non_unique< // 3 - Eligible by age & household tag<aeh>, composite_key< Host, const_mem_fun<Host,int,&Host::getAgeInY>, const_mem_fun<Host,bool,&Host::isEligible>, const_mem_fun<Host,int,&Host::getHousehold> > >, ordered_non_unique< // 4 - Eligible by household (all single adults) tag<eh>, composite_key< Host, const_mem_fun<Host,bool,&Host::isEligible>, const_mem_fun<Host,int,&Host::getHousehold> > >, ordered_non_unique< // 5 - Household & age tag<ah>, composite_key< Host, const_mem_fun<Host,int,&Host::getHousehold>, const_mem_fun<Host,int,&Host::getAgeInY> > > > // end indexed_by > HostContainer; typedef std::set<int> HHSet; class Simulation { public: Simulation( int sid ); ~Simulation(); // MEMBER FUNCTION PROTOTYPES void runDemSim( void ); void runEpidSim( void ); void ageHost( int id ); int calcPartnerAge( int a ); void executeEvent( Event & te ); void killHost( int id ); void pairHost( int id ); void partner2Hosts( int id1, int id2 ); void fledgeHost( int id ); void birthHost( int id ); void calcSI( void ); double beta_ij_h( int ai, int aj, int s ); double beta_ij_nh( int ai, int aj, int s ); private: // SIMULATION OBJECTS double t; double outputStrobe; int idCtr; int hholdCtr; int simID; RNG rgen; HostContainer allHosts; // shared_ptr to Hosts - line 102 - second error HHSet allHouseholds; int numInfecteds[ INIT_NUM_AGE_CATS ][ INIT_NUM_STYPES ]; EventPQ currentEvents; // STREAM MANAGEMENT void writeOutput(); void initOutput(); void closeOutput(); std::ofstream ageDistStream; std::ofstream ageDistTStream; std::ofstream hhDistStream; std::ofstream hhDistTStream; std::string ageDistFile; std::string ageDistTFile; std::string hhDistFile; std::string hhDistTFile; }; #endif I'm hoping the other files aren't so relevant to this problem. When I compile with g++ -g -o -c a.out -I /Applications/boost_1_42_0/ Host.cpp Simulation.cpp rng.cpp main.cpp Rdraws.cpp I get Simulation.h:33: error: expected initializer before '<' token Simulation.h:102: error: 'HostContainer' does not name a type and then a bunch of other errors related to not recognizing the HostContainer. 
It seems like I have all the right Boost #includes for the HostContainer to be understood. What else could be going wrong? I would appreciate immediate suggestions, troubleshooting tips, and other advice about my code. My plan is to create a "HostContainer.h" file that includes the typedef and structs that define its tags, similar to what I'm doing in "Event.h" for the EventPQ container. I'm assuming this is legal and good form.

    Read the article

  • MVVM Light Toolkit - RelayCommands, DelegateCommands, and ObservableObjects

    - by DanM
    I just started experimenting with Laurent Bugnion's MVVM Light Toolkit. I think I'm going to really like it, but I have a couple of questions. Before I get to them, I need to explain where I'm coming from. I currently use a combination of Josh Smith's MVVM Foundation and another project on CodePlex called MVVM Toolkit. I use ObservableObject and Messenger from MVVM Foundation and DelegateCommand and CommandReference from MVVM Toolkit. The only real overlap between MVVM Foundation and MVVM Toolkit is that they both have an implementation for ICommand: MVVM Foundation has a RelayCommand and MVVM Toolkit has a DelegateCommand. Of these two, DelegateCommand appears to be more sophisticated. It employs a CommandManagerHelper that uses weak references to avoid memory leaks. With that said, a couple of questions:

    1. Why does MVVM Light Toolkit use RelayCommand rather than DelegateCommand? Is the use of weak references in an ICommand unnecessary or not recommended for some reason?

    2. Why is there no ObservableObject in MVVM Light? ObservableObject is basically just the part of ViewModelBase that implements INotifyPropertyChanged, but it's very convenient to have as a separate class because view-models are not the only objects that need to implement INotifyPropertyChanged. For example, let's say you have a DataGrid that binds to a list of Person objects. If any of the properties in Person can change while the user is viewing the DataGrid, Person would need to implement INotifyPropertyChanged. (I realize that if Person is auto-generated using something like LINQ to SQL, it will probably already implement INotifyPropertyChanged, but there are cases where I need to make a view-specific version of entity model objects, say, because I need to include a command to support a button column in a DataGrid.) Thanks.
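
    For reference, the ObservableObject the question is asking about is only a few lines of code; a minimal sketch of the idea (not the actual MVVM Foundation source) looks roughly like this:

        using System.ComponentModel;

        // Minimal INotifyPropertyChanged base class: view-models, and also plain row
        // objects bound to a DataGrid, can derive from it.
        public abstract class ObservableObject : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected void RaisePropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        // Example: a view-specific Person whose changes the DataGrid will pick up.
        public class Person : ObservableObject
        {
            private string _name;
            public string Name
            {
                get { return _name; }
                set { _name = value; RaisePropertyChanged("Name"); }
            }
        }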

    Read the article

  • Use DataSource in DataFormWebPart

    - by Bryan Shen
    I'm writing a custom web part that extends DataFormWebPart.

        public class MyCustomWebPart : DataFormWebPart
        {
            // other methods

            public override void DataBind()
            {
                XmlDataSource source = new XmlDataSource()
                {
                    Data = @"
                        <Person>
                          <name cap='true'>Bryan</name>
                          <occupation>student</occupation>
                        </Person>"
                };
                DataSources.Add(source);
                base.DataBind();
            }
        }

    The only noticeable thing I do is override the DataBind() method, where I use XML as the data source. After I deploy the web part, I set the following XSL on it:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
          <xsl:template match="/">
            <xmp>
              <xsl:copy-of select="*"/>
            </xmp>
          </xsl:template>
        </xsl:stylesheet>

    This XSL will surround the input XML with an <xmp> tag, so I expected the web part to display the original XML data as I wrote it in the C# code-behind. But what shows up in the web part is this:

        <Person>
          <name cap="true" />
          <occupation />
        </Person>

    All the values within the inner-most tags disappear. What's going on? Can anybody help me? Thanks.

    Read the article

  • Using Hibernate's ScrollableResults to slowly read 90 million records

    - by at
    I simply need to read each row in a table in my MySQL database using Hibernate and write a file based on it. But there are 90 million rows and they are pretty big. So it seemed like the following would be appropriate: ScrollableResults results = session.createQuery("SELECT person FROM Person person") .setReadOnly(true).setCacheable(false).scroll(ScrollMode.FORWARD_ONLY); while (results.next()) storeInFile(results.get()[0]); The problem is the above will try and load all 90 million rows into RAM before moving on to the while loop... and that will kill my memory with OutOfMemoryError: Java heap space exceptions :(. So I guess ScrollableResults isn't what I was looking for? What is the proper way to handle this? I don't mind if this while loop takes days (well I'd love it to not). I guess the only other way to handle this is to use setFirstResult and setMaxResults to iterate through the results and just use regular Hibernate results instead of ScrollableResults. That feels like it will be inefficient though and will start taking a ridiculously long time when I'm calling setFirstResult on the 89 millionth row... UPDATE: setFirstResult/setMaxResults doesn't work, it turns out to take an unusably long time to get to the offsets like I feared. There must be a solution here! Isn't this a pretty standard procedure?? I'm willing to forgo Hibernate and use JDBC or whatever it takes. UPDATE 2: the solution I've come up with which works ok, not great, is basically of the form: select * from person where id > <offset> and <other_conditions> limit 1 Since I have other conditions, even all in an index, it's still not as fast as I'd like it to be... so still open for other suggestions..

    Read the article

  • How to generate comments in hbm2java created POJO?

    - by jschoen
    My current setup using Hibernate uses the hibernate.reveng.xml file to generate the various hbm.xml files, which are then turned into POJOs using hbm2java. We spent some time while designing our schema to place some pretty decent descriptions on the tables and their columns. I am able to pull these descriptions into the hbm.xml files when generating them using hbm2hbmxml, so I get something similar to this:

        <class name="test.Person" table="PERSONS">
          <comment>The comment about the PERSONS table.</comment>
          <property name="firstName" type="string">
            <column name="FIRST_NAME" length="100" not-null="true">
              <comment>The first name of this person.</comment>
            </column>
          </property>
          <property name="middleInitial" type="string">
            <column name="MIDDLE_INITIAL" length="1">
              <comment>The middle initial of this person.</comment>
            </column>
          </property>
          <property name="lastName" type="string">
            <column name="LAST_NAME" length="100">
              <comment>The last name of this person.</comment>
            </column>
          </property>
        </class>

    So how do I tell hbm2java to pull and place these comments in the created Java files? I have read over this about editing the FreeMarker templates to change the way code is generated. I understand the concept, but it was not too detailed about what else you could do with it beyond their example of pre- and post-conditions.

    Read the article

  • Entity Framework lazy loading doesn't work from other thread

    - by Thomas Levesque
    Hi, I just found out that lazy loading in Entity Framework only works from the thread that created the ObjectContext. To illustrate the problem, I did a simple test with a simple model containing just two entities: Person and Address. Here's the code:

        private static void TestSingleThread()
        {
            using (var context = new TestDBContext())
            {
                foreach (var p in context.Person)
                {
                    Console.WriteLine("{0} lives in {1}.", p.Name, p.Address.City);
                }
            }
        }

        private static void TestMultiThread()
        {
            using (var context = new TestDBContext())
            {
                foreach (var p in context.Person)
                {
                    Person p2 = p; // to avoid capturing the loop variable
                    ThreadPool.QueueUserWorkItem(
                        arg =>
                        {
                            Console.WriteLine("{0} lives in {1}.", p2.Name, p2.Address.City);
                        });
                }
            }
        }

    The TestSingleThread method works fine; the Address property is lazily loaded. But in TestMultiThread, I get a NullReferenceException on p2.Address.City, because p2.Address is null. Is that a bug? Is this the way it's supposed to work? If so, is there any documentation mentioning it? I couldn't find anything on the subject on MSDN or Google... And more importantly, is there a workaround? (Other than explicitly calling LoadProperty from the worker thread...) Any help would be much appreciated.

    PS: I'm using VS2010, so it's EF 4.0. I don't know if it was the same in the previous version of EF...
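
    A sketch of the most common workaround (assuming the EF 4 ObjectContext API the question is using): load the navigation property eagerly with Include before handing the entities to other threads, so no lazy load is ever triggered off the context's thread.

        private static void TestMultiThreadEager()
        {
            using (var context = new TestDBContext())
            {
                // Include("Address") fetches the related entity in the same query,
                // so the worker threads never touch the ObjectContext.
                foreach (var p in context.Person.Include("Address"))
                {
                    Person p2 = p;
                    ThreadPool.QueueUserWorkItem(
                        arg => Console.WriteLine("{0} lives in {1}.", p2.Name, p2.Address.City));
                }
            }
        }

    Turning lazy loading off on the context (context.ContextOptions.LazyLoadingEnabled = false) also makes this kind of mistake fail fast instead of intermittently.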

    Read the article

  • Validation Summary for Collections

    - by Myster
    Hi All, EDIT: upgraded this question to MVC 2.0 With asp.net MVC 2.0 is there an existing method of creating Validation Summary that makes sense for models containing collections? If not I can create my own validation summary Example Model: public class GroupDetailsViewModel { public string GroupName { get; set; } public int NumberOfPeople { get; set; } public List<Person> People{ get; set; } } public class Person { [Required(ErrorMessage = "Please enter your Email Address")] [RegularExpression(@"^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$", ErrorMessage = "Please enter a valid Email Address")] public string EmailAddress { get; set; } [Required(ErrorMessage = "Please enter your Phone Number")] public string Phone { get; set; } [Required(ErrorMessage = "Please enter your First Name")] public string FirstName { get; set; } [Required(ErrorMessage = "Please enter your Last Name")] public string LastName { get; set; } } The existing summary <%=Html.ValidationSummary %> if nothing is entered looks like this. The following error(s) must be corrected before proceeding to the next step * Please enter your Email Address * Please enter your Phone Number * Please enter your First Name * Please enter your Last Name * Please enter your Email Address * Please enter your Phone Number * Please enter your First Name * Please enter your Last Name The design calls for headings to be inserted like this: The following error(s) must be corrected before proceeding to the next step Person 1 * Please enter your Email Address * Please enter your Phone Number * Please enter your First Name * Please enter your Last Name Person 2 * Please enter your Email Address * Please enter your Phone Number * Please enter your First Name * Please enter your Last Name
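
    There is no built-in grouped summary in MVC 2, but one workable sketch (the helper name and markup are invented here; it assumes the inputs are named with indexed prefixes such as People[0].EmailAddress, which is what the default model binder produces for lists) is to walk ModelState yourself and group the errors by collection index:

        using System.Linq;
        using System.Text;
        using System.Text.RegularExpressions;
        using System.Web.Mvc;

        public static class ValidationSummaryExtensions
        {
            // Hypothetical helper: <%= Html.GroupedValidationSummary("People", "Person") %>
            public static MvcHtmlString GroupedValidationSummary(this HtmlHelper html,
                                                                 string collectionName,
                                                                 string itemLabel)
            {
                var sb = new StringBuilder();
                var byIndex = html.ViewData.ModelState
                    .Where(kv => kv.Value.Errors.Count > 0)
                    .Select(kv => new
                    {
                        Match = Regex.Match(kv.Key, "^" + Regex.Escape(collectionName) + @"\[(\d+)\]"),
                        kv.Value.Errors
                    })
                    .Where(x => x.Match.Success)
                    .GroupBy(x => int.Parse(x.Match.Groups[1].Value))
                    .OrderBy(g => g.Key);

                foreach (var group in byIndex)
                {
                    sb.AppendFormat("<h4>{0} {1}</h4><ul>", html.Encode(itemLabel), group.Key + 1);
                    foreach (var error in group.SelectMany(x => x.Errors))
                        sb.AppendFormat("<li>{0}</li>", html.Encode(error.ErrorMessage));
                    sb.Append("</ul>");
                }
                return MvcHtmlString.Create(sb.ToString());
            }
        }

    In the view, <%= Html.GroupedValidationSummary("People", "Person") %> would then render a "Person 1" / "Person 2" heading above each person's errors.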

    Read the article

  • Rails - How can I display nicely indented JSON?

    - by sa125
    Hi - I have a controller action that returns JSON data for API purposes, and plenty of it. I want to be able to inspect it in the browser, and have it nicely indented for the viewer. For example, if my data is data = { :person => { :id => 1, :name => "john doe", :age => 30 }, :person => ... }, I want to see { "person" : { "id" : 1, "name" : "john doe", "age" : 30, }, "person" : { "id" : 2, "name" : "jane doe", "age" : 31, }, ...etc } in the view. I thought about using different routes to get the bulk/pretty data:

        # GET /api/json
        # ...
        respond_to do |format|
          format.html { render :json => data.to_json }
        end

        # GET /api/json/inspect
        # ...
        respond_to do |format|
          format.html { render :text => pretty_json }
        end

    Does anyone know of a gem/plugin that does this or something similar? I tried using JSON.pretty_generate, but it doesn't seem to work inside Rails (2.3.5). Thanks.

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (Resource Oriented Architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources, in cases where the server designates the resource key. From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: Get a unique transaction id for creating the new PERSON resource:

        **Client request:**
        GET /CREATE_PERSON

        **Server response:**
        200 OK
        transaction-id:"as8yfasiob"

    Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id:

        **Client request**
        PUT /CREATE_PERSON/{transaction_id}
        first_name="Big bubba"

        **Server response**
        201 Created              // (If the request is a duplicate, it would send this same
        PersonKey="398u4nsdf"    // response without creating a new resource. It would perhaps
                                 // send an error response if the transaction id was used on a
                                 // non-duplicate request, but I have control over the client,
                                 // so I can guarantee that this won't happen)

    The problem that I see with this approach is that it requires sending two requests to the server in order to do the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be waiting around for the client to complete their request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?
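
    One alternative worth noting (a sketch of a common pattern, not part of the question's API, and it does relax the stated constraint by letting the client mint the key): have the client generate the identifier, e.g. a UUID, and PUT directly to that URI. Replaying the identical PUT then targets the same resource and is naturally idempotent, with no extra round trip. Assuming a .NET client, it could look like this (the base address and route are made up):

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class PersonClient
        {
            private static readonly HttpClient http =
                new HttpClient { BaseAddress = new Uri("https://api.example.com/") };

            // The client generates the resource key, so retrying the same request
            // (same URI, same body) creates the person at most once.
            public static async Task CreatePersonAsync(string firstName)
            {
                var personKey = Guid.NewGuid().ToString("N");   // client-designated key
                var body = new StringContent("first_name=" + Uri.EscapeDataString(firstName));

                HttpResponseMessage response = await http.PutAsync("people/" + personKey, body);
                // 201 Created on the first attempt; a replay can return 200/204 for the same resource.
                response.EnsureSuccessStatusCode();
            }
        }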

    Read the article

  • XmlSerializer equivalent of IExtensibleDataObject

    - by demoncodemonkey
    With DataContracts you can derive from IExtensibleDataObject to allow round-tripping to work without losing any unknown additional data from your XML file. I can't use DataContract because I need to control the formatting of the output XML. But I also need to be able to read a future version of the XML file in the old version of the app, without losing any of the data from the XML file. e.g. XML v1: <Person> <Name>Fred</Name> </Person> XML v2: <Person> <Name>Fred</Name> <Age>42</Age> </Person> If reading an XML v2 file from v1 of my app, deserializing and serializing it again turns it into an XML v1 file. i.e. the "Age" field is erased. Is there anything similar to IExtensibleDataObject that I can use with XmlSerializer to avoid the Age field disappearing?
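
    There is no direct XmlSerializer counterpart to IExtensibleDataObject, but you can get the same round-tripping by giving the serializer somewhere to park the elements it does not recognize. A minimal sketch (the property name is arbitrary, and this covers unknown elements, not unknown attributes):

        using System.Xml;
        using System.Xml.Serialization;

        public class Person
        {
            public string Name { get; set; }

            // On deserialization, XmlSerializer drops any element it does not recognize
            // (e.g. <Age> from a v2 file) into this array; on serialization it writes the
            // same elements back out, so the extra data survives a v1 round trip.
            [XmlAnyElement]
            public XmlElement[] UnknownElements { get; set; }
        }

    (For unknown attributes there is a matching [XmlAnyAttribute] hook.)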

    Read the article

  • C# COM objects with VB6/asp error

    - by Ken
    I'm trying to expose a C# class library via COM so that I can use it in a classic ASP web site. I've used sn -k, regasm and gacutil. About all I can do now, though, is echo back strings. Methods which take class variables as input are not working for me, i.e. my test method EchoPerson(Person p), which returns a string of the first and last name, doesn't work. I get a runtime error 5 - Invalid procedure call or argument. Please let me know what I am missing. Also, I have no IntelliSense in VB. What do I need to do to get IntelliSense working? Below is my C# test code:

        namespace MrdcToFastCom
        {
            public class Person : MrdcToFastCom.IPerson
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
            }

            public class ComTester : MrdcToFastCom.IComTester
            {
                public string EchoString(string s)
                {
                    return ("Echo: " + s);
                }

                public string Hello()
                {
                    return "Hello";
                }

                public string EchoPerson(ref Person p)
                {
                    return string.Format("{0} {1}", p.FirstName, p.LastName);
                }
            }
        }

    and the VB6 call:

        Private Sub btnClickMe_Click()
            Dim ct
            Set ct = New MrdcToFastCom.ComTester

            Dim p
            Set p = New MrdcToFastCom.Person
            p.FirstName = "Joe"
            p.LastName = "Test"

            Dim s
            s = ct.EchoPerson(p) 'Error on this line
            tbx1.Text = s
        End Sub
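
    A sketch of a more COM-friendly shape for the library (the attribute choices shown are a common convention, not taken from the question): expose explicit dual interfaces, mark the classes ComVisible with ClassInterfaceType.None, and take the person by interface, by value, instead of ref Person - which removes one frequent source of late-binding marshaling failures from VB6/ASP callers.

        using System.Runtime.InteropServices;

        namespace MrdcToFastCom
        {
            [ComVisible(true)]
            [InterfaceType(ComInterfaceType.InterfaceIsDual)]  // dual interface -> early binding + IntelliSense via the type library
            public interface IPerson
            {
                string FirstName { get; set; }
                string LastName { get; set; }
            }

            [ComVisible(true)]
            [ClassInterface(ClassInterfaceType.None)]          // clients talk to IPerson, not an auto-generated class interface
            public class Person : IPerson
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
            }

            [ComVisible(true)]
            [InterfaceType(ComInterfaceType.InterfaceIsDual)]
            public interface IComTester
            {
                string Hello();
                string EchoString(string s);
                string EchoPerson(IPerson p);                  // by value, typed as the COM interface
            }

            [ComVisible(true)]
            [ClassInterface(ClassInterfaceType.None)]
            public class ComTester : IComTester
            {
                public string Hello() { return "Hello"; }
                public string EchoString(string s) { return "Echo: " + s; }
                public string EchoPerson(IPerson p) { return string.Format("{0} {1}", p.FirstName, p.LastName); }
            }
        }

    After rebuilding and re-registering with regasm /tlb, referencing the generated .tlb from the VB6 project (Project > References) is what brings IntelliSense back; late-bound Dim ct / CreateObject never gets it.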

    Read the article

  • Using Apache Camel how do I unmarshal my deserialized object that comes in through a CXF Endpoint?

    - by ScArcher2
    I have a very simple camel route. It starts with a CXF Endpoint exposed as a web service. I then want to convert it to xml and call a method on a bean. Currently i'm getting a CXF specific object after the web service call. How do I take my serialized object out of the CXF MessageList and use it going forward? My Route: <camel:route> <camel:from uri="cxf:bean:helloEndpoint" /> <camel:marshal ref="xstream-utf8" /> <camel:to uri="bean:hello?method=hello"/> </camel:route> The XML Serialized Message: <?xml version='1.0' encoding='UTF-8'?> <org.apache.cxf.message.MessageContentsList serialization="custom"> <unserializable-parents /> <list> <default> <size>1</size> </default> <int>6</int> <com.whatever.Person> <firstName>Joe</firstName> <middleName></middleName> <lastName>Buddah</lastName> <dateOfBirth>2010-04-13 12:09:00.137 CDT</dateOfBirth> </com.whatever.Person> </list> </org.apache.cxf.message.MessageContentsList> I would expect the XML to be more like this: <com.whatever.Person> <firstName>Joe</firstName> <middleName></middleName> <lastName>Buddah</lastName> <dateOfBirth>2010-04-13 12:09:00.137 CDT</dateOfBirth> </com.whatever.Person>

    Read the article

  • Simplest possible voting/synchronization algorithm

    - by Domchi
    What would be the simplest algorithm one or more people could use to decide which of them should perform some task? There is one task, which needs to be done only once, and one or more people. People can speak, that is, send messages to one another. Communication must be minimal, and all people use the exact same algorithm. One person saying "I'm doing it" is not good enough, since two persons may say it at the same time.

    The simplest approach that comes to my mind is that each person says a number and waits a bit. If somebody responds in that time, the person with the lower number "wins" and does the task. If nobody responds, the person says that she's doing it and does it. When she says that she's doing it, everybody else backs off. This should be enough to avoid two persons doing the task at the same time (since there is a wait/handshake period), but it might need a "second round" if both persons say the same number. Is there something simpler?

    For those curious, I'm trying to synchronize several copies of a Second Life LSL script to do something only once.
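
    A compact illustration of the scheme described above, written here as an in-process C# simulation rather than LSL (the node count and quiet period are invented for the example): every node announces a random ticket and waits; the lowest ticket wins, and breaking ties by node id makes the outcome deterministic, which removes the need for a second round.

        using System;
        using System.Collections.Concurrent;
        using System.Linq;
        using System.Threading;
        using System.Threading.Tasks;

        class Election
        {
            // Shared "channel": every announcement is visible to everyone.
            static readonly ConcurrentBag<Tuple<int, int>> announcements =
                new ConcurrentBag<Tuple<int, int>>();   // (ticket, nodeId)

            static void RunNode(int nodeId, Random rng)
            {
                int ticket = rng.Next();                 // "say a number"
                announcements.Add(Tuple.Create(ticket, nodeId));

                Thread.Sleep(200);                       // quiet period: wait for everyone else to speak

                // Lowest (ticket, nodeId) wins; the node id is a deterministic tie-breaker.
                var winner = announcements.OrderBy(a => a.Item1).ThenBy(a => a.Item2).First();
                if (winner.Item2 == nodeId)
                    Console.WriteLine("Node {0} performs the task.", nodeId);
            }

            static void Main()
            {
                var seedSource = new Random();
                Task[] nodes = Enumerable.Range(0, 5)
                    .Select(id =>
                    {
                        var rng = new Random(seedSource.Next());
                        return Task.Run(() => RunNode(id, rng));
                    })
                    .ToArray();
                Task.WaitAll(nodes);
            }
        }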

    Read the article

  • Zend: How to authenticate using OpenId on local server.

    - by NAVEED
    I am using the Zend Framework. Now I want to authenticate users with other, already registered accounts using RPX. I am following the 3 steps described at the RPX site: 1 - Get the Widget, 2 - Receive Tokens, 3 - Choose Providers. I created a controller (person) and an action (signin) to show the widget and my own signin form. When the following action (http://test.dev/#person/personsignin) is called, my own login form and the widget are shown successfully. The # is used in the above URL for AJAX indication.

        public function personsigninAction()
        {
            $this->view->jsonEncoded = true;

            // Person Signin Form
            $PersonSigninForm = new Form_PersonSignin();
            $this->view->PersonSigninForm = $PersonSigninForm;
            $this->view->PersonSigninForm->setAction( $this->view->url() );

            $request = $this->getRequest();
            if ( $request->isPost() ) {
            }
        }

    There are two problems while logging in using the OpenID widget:

    When I am authenticated from outside (for example: Yahoo), I am redirected to http://test.dev, therefore indexAction is called in indexController and the home page is shown. I want to redirect to http://test.dev/#person/personsignin after authentication and want to set the session in the isPost() condition of personsigninAction() (described above). For now I consider indexAction to be the action called when outside authentication is done. So I posted the code from http://gist.github.com/291396 in indexAction to follow step 3 mentioned above. But it is giving me the following error: An error occured: Invalid parameter: apiKey

    Am I using this the right way? This is my very first attempt at this stuff. Can someone tell me the exact steps using my above actions? Thanks.

    Read the article

  • RedirectToAction and validate MVC 2

    - by Dan
    Hi, my problem is validation in the view where the user typed their input. I have to use RedirectToAction because the page uploads a file. That's my code.

    My model class:

        public class Person
        {
            [Required(ErrorMessage = "Please enter name")]
            public string name { get; set; }
        }

    My view:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MvcWebRole1.Models.Person>" %>

        Name
        <h2>Information Data</h2>
        <%= Html.ValidationSummary() %>
        <% using (Html.BeginForm("upload", "Home", FormMethod.Post, new { enctype = "multipart/form-data" })) { %>
          <fieldset>
            <legend>Fields</legend>
            <p>
              <label for="name">name:</label>
              <%= Html.TextBox("name") %>
              <%= Html.ValidationMessage("name", "*") %>
            </p>
          </fieldset>
        <% } %>

    and the controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult upload(FormCollection form)
        {
            Person lastname = new Person();
            lastname.name = form["name"];
            return RedirectToAction("Index");
        }

    Thanks in advance for answering my question.
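
    A sketch of the usual MVC 2 pattern for this (assuming the goal is for the [Required] message to appear when the name is missing): let the model binder build the Person from the post, re-render the form view with the invalid model instead of redirecting, and only redirect once validation passes. The view name and the file-handling line are placeholders.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult upload(Person person, HttpPostedFileBase file)
        {
            if (!ModelState.IsValid)
            {
                // Re-display the form: ValidationSummary()/ValidationMessage("name")
                // now show the DataAnnotations errors next to the user's input.
                return View("Index", person);   // placeholder: use whichever view holds the form
            }

            // ... save the uploaded file here (placeholder) ...

            return RedirectToAction("Index");
        }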

    Read the article

  • How to test a DAO with JPA implementation ?

    - by smallufo
    Hi, I came from the Spring camp; I don't want to use Spring, and am migrating to Java EE 6. But I have a problem testing DAO + JPA. Here is my simplified sample:

        public interface PersonDao {
            public Person get(long id);
        }

    This is a very basic DAO. Because I came from Spring, I believe the DAO still has its value, so I decided to add a DAO layer.

        public class PersonDaoImpl implements PersonDao, Serializable {

            @PersistenceContext(unitName = "test", type = PersistenceContextType.EXTENDED)
            EntityManager entityManager;

            public PersonDaoImpl() {
            }

            @Override
            public Person get(long id) {
                return entityManager.find(Person.class, id);
            }
        }

    This is a JPA-implemented DAO; I hope the EE container or the test container is able to inject the EntityManager.

        public class PersonDaoImplTest extends TestCase {

            @Inject
            protected PersonDao personDao;

            @Override
            protected void setUp() throws Exception {
                //personDao = new PersonDaoImpl();
            }

            public void testGet() {
                System.out.println("personDao = " + personDao); // NULL !
                Person p = personDao.get(1L);
                System.out.println("p = " + p);
            }
        }

    This is my test file. OK, here comes the problem: because JUnit doesn't understand @javax.inject.Inject, the PersonDao will not be injected and the test will fail. How do I find a test framework that is able to inject the EntityManager into the PersonDaoImpl, and @Inject the PersonDaoImpl into the PersonDao of the TestCase? I tried unitils.org, but cannot find a sample like this; it just directly injects the EntityManagerFactory into the TestCase, which is not what I want...

    Read the article

  • PCA extended face recognition

    - by cMinor
    The state of the art says that we can use PCA to perform face recognition, like this, this or this. I am working on a project that involves training a classifier to detect a person who is wearing glasses, a hat, or even a mustache.

    The purpose of doing this is to detect when a person who has robbed a bank or store, or has committed some sort of crime (we have their image in a database), enters a certain place (historically we know these guys have robbed, so we should take care to avoid problems). We first came to have a distributed database with all the images of criminals; then I thought of having a layer on top of it classifying these criminals by accessories like hats, a mustache, or anything that hides their face, etc. Then we would apply that knowledge to detect when a particular or suspect person enters a commercial place. (In practice, when someone is going to rob, they are not always using an accessory...)

    What do you think about this idea of doing PCA to first detect the principal components of the face and then the components of an accessory? I was thinking that maybe a probabilistic approach is better, so we can compute the probability that the criminal is the person who entered a place and call the respective authorities.

    Read the article
