Search Results


  • How to print string in this way

    - by xRobot
    For every string, I need to insert a # after every 6 characters. For example:

        example_string = "this is an example string. ok ????"
        myfunction(example_string)
        "this i#s an e#xample# strin#g. ok #????"

    What is the most efficient way to do that?
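    A minimal sketch of one way to do it (assuming the # goes after every complete group of six characters; myfunction is just the question's placeholder name):

        def myfunction(s, n=6, sep="#"):
            # Join consecutive n-character slices with the separator; a trailing
            # slice shorter than n characters gets no separator after it.
            return sep.join(s[i:i + n] for i in range(0, len(s), n))

        example_string = "this is an example string. ok ????"
        print(myfunction(example_string))
        # -> this i#s an e#xample# strin#g. ok #????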

  • Load HTML NSString into a UIWebView

    - by ehenrik
    I'm doing a project where I connect to a web page using NSURLConnection so that I can monitor the status codes that are returned (200 OK / 404 ERROR). If I receive a 404 status code, I would like to send the user to the top URL www.domain.com, and if I receive a 200 status code, I would like to load the page into a web view. I have seen several implementations that solve this by creating a new request, but that feels unnecessary, since the HTML was already received in the first request; I would just like to load that HTML into the web view.

    So I try to use [webView loadHTMLString:baseURL:], but it doesn't always work. I have noticed that when I print the NSString with the HTML in connectionDidFinishLoading, it is sometimes null, and when I monitor these cases by printing the HTML in didReceiveData, a random number of the last packets (between 2 and 10) is NULL. It is always the same web pages that don't get loaded. If I load them into my web view using [webView loadRequest:myRequest], it always works.

    My implementation looks like this; perhaps someone can see what I'm doing wrong. I create the first request with a button click:

        -(IBAction)buttonClick:(id)sender {
            NSURL *url = [NSURL URLWithString:@"http://www.domain.com/page2/apa.html"];
            NSURLRequest *theRequest = [NSURLRequest requestWithURL:url];
            NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];
            if (theConnection) {
                webData = [[NSMutableData data] retain];
            }
        }

    Then I monitor the response code in didReceiveResponse by casting the response to an NSHTTPURLResponse, so that I can access the status code, and set a BOOL accordingly:

        -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
            NSHTTPURLResponse *ne = (NSHTTPURLResponse *)response;
            if ([ne statusCode] == 200) {
                ok = TRUE;
            }
            [webData setLength:0];
        }

    I then check the BOOL's value in connectionDidFinishLoading. If I log the HTML NSString, I get the source of the web page, so I know it isn't an empty string:

        -(void)connectionDidFinishLoading:(NSURLConnection *)connection {
            NSString *html = [[NSString alloc] initWithBytes:[webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding];
            NSURL *url = [NSURL URLWithString:@"http://www.domain.com/"];
            if (ok) {
                [webView loadHTMLString:html baseURL:url];
                ok = FALSE;
            } else {
                // Create a new request to www.domain.com
            }
        }

    webData is an instance variable, and I append to it in didReceiveData like this:

        -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            [webData appendData:data];
        }
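    One plausible cause, offered as an assumption rather than a diagnosis the question confirms: -initWithBytes:length:encoding: returns nil whenever the accumulated bytes are not valid UTF-8, which would explain why it is always the same pages that fail while loadRequest: (which honours the page's own encoding) always works. A minimal sketch, in this entry's own Objective-C, of a fallback decode:

        // Hypothetical fallback for connectionDidFinishLoading: try UTF-8 first,
        // then ISO Latin-1, which accepts any byte sequence.
        NSString *html = [[NSString alloc] initWithData:webData encoding:NSUTF8StringEncoding];
        if (html == nil) {
            html = [[NSString alloc] initWithData:webData encoding:NSISOLatin1StringEncoding];
        }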

  • Agile Development

    - by James Oloo Onyango
    A lot of literature has been and is being written about agile development and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, I have found Robert C. Martin's "A Satire of Two Companies" to be both the most concise and thorough! Enjoy the read!

    Rufus Inc: Project Kick-Off

    Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting.

    "We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market.

    "We must have this new project up and working by fourth quarter, October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.

    His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?"

    You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts. "Sir, we can't tell you how long the analysis will take until we have some requirements."

    "The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?"

    Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?"

    "Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mm are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere. "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?"

    Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months." "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle.

    BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working. Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your cross-functional team meetings and reports will be needed for next month's quality audit."

    "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."

    Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"

    You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.

    Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.

    After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describe a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues.

    More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making.

    The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.

    At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on.

    On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159-page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.

    You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return?

    The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So, in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.

    Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.

    While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked. The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.

    On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3.

    * * *

    And then, a miracle happens.

    * * *

    On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?"

    "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!"

    "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."

    As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer.

    They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)

    As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawed. No, useless; no, worse than useless. But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."

    So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.

    The design is a nightmare. Your boss recently misread a book named The Finish Line, in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!"

    "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrink-wrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle.

    Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week."

    Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages.

    At first, these diagrams are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association.

    Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate.

    You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."

    You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it."

    Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents. However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude. You slog your way through them.

    On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.

    * * *

    Then, on July 1, another miracle happens! You are done with the design! Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.

    * * *

    They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy.

    You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."

    * * *

    The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort.

    The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?

    Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. His points have grown again, and the Grecian Formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more; preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"

    You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements.

    Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek. Hack, hack, hack, and hack. By September 1, the thermometer is at 1.2 million lines, and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack.

    Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing; the marketing folks are complaining that the product isn't anything like they specified; and the liquor store won't accept your credit card anymore. Something has to give.

    On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!"

    He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1." As he leaves the conference room, he is heard to mutter: "Empowerment, bah!"

    * * *

    Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done?" he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.

    A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many 65-hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.

    In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.

    Rupert Industries: Project Alpha

    Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting.

    "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it.

    Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately."

    "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."

    "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."

    Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.

    * * *

    "So, Robert," says Jane at their first meeting, "how does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and a possible solution. "OK, wait a second," you respond. "I need to be clear about this."

    And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.

    During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.

    At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."

    A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test, so they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.

    The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.

    As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.

    At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months' worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.

    * * *

    The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress."

    Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfect week for every 3 real weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."

    Jane looks over her glasses at you as if to say, "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. I know the drill by now." Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta-test customers. And we'll get good feedback from them."

    You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."

    "So," says Jane, "can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."

    So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.

    Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks."

    "I'll take the initial database generation," says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team. "I've never done a GUI, and I kinda wanted to try that one." "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."

    One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks rather than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.

    The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks. Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.

    "I was worried that that might happen," you say. "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.

    So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper," says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . . ," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.

    The negotiation is not painless. Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."

    In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels can be achieved in the next 3 weeks. And, after all, it still addresses the most important things that Jane wanted in the iteration.

    "So, Jane," you say when things have quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements, Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour. "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."

    * * *

    The iteration begins on Monday morning with a flurry of class-responsibility-collaboration sessions. By mid-morning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!" "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replies Elaine. "It doesn't do anything at all; there is no code." "So, consider our task; can you think of something the code should do?" "Sure," Elaine says with youthful assurance. "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughs Elaine. "I think we'd have to get the database object from some registry and call the Connect() method." "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hey!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me."

    And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.

    As development proceeded, the developers changed partners once or twice a day. Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team. Whenever a pair finished something significant, whether a whole task or simply an important part of a task, they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.

    The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case. Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do.

    By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?" You grimace. You'd been waiting for a good time to mention this to Jane, but now she was forcing the issue.

    "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration." Jane eyes you with suspicion. The stress of last Monday's negotiations has still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again; we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."

    You look Jane right in the eyes. There's no pleasant way to give someone news like this, so you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now, it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."

    "Aw, forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.

    Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."

    Painful though the last iteration was, it had calibrated your velocity numbers. The next iteration went much better, not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.

    By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.

    The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated.

    In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.

    The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.

    Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.

    Indeed, things are looking up!

  • Postfix - am I sending spam?

    - by olrehm
    Today I received about 30 messages within 5 minutes telling me that some mail I sent could not be delivered, mostly to *.ru email addresses that I never sent any mail to. I have my own webserver (postfix/dovecot) set up using this guide (http://workaround.org/ispmail/lenny), but adjusted a little bit for Ubuntu. I tested whether I am an open relay, which I apparently am not.

    Now there are two possible explanations for the above-mentioned emails: either I am sending out spam, or somebody wants me to think that. Correct? How can I check this?

    I selected one particular address that I supposedly sent spam to. Then I searched my mail.log for this entry. I found two blocks recording that somebody from that server connected to my server and delivered a message to two different users. I cannot find an entry reporting that anyone from my server sent an email to that server. Does this mean it is just some mail meant to scare me, or could it still have been sent by me in the first place?

    Here is one such block from the log (I replaced some confidential stuff):

        Jun 26 23:23:28 mycustomernumber postfix/smtpd[29970]: connect from mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: 044991528995: client=mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber postfix/cleanup[29974]: 044991528995: message-id=<[email protected]>
        Jun 26 23:23:29 mycustomernumber postfix/qmgr[3369]: 044991528995: from=<>, size=2198, nrcpt=1 (queue active)
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) ESMTP::10024 /var/lib/amavis/tmp/amavis-20110626T223137-28598: <> -> <[email protected]> SIZE=2198 Received: from mycustomernumber.stratoserver.net ([127.0.0.1]) by localhost (rehmsen.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP for <[email protected]>; Sun, 26 Jun 2011 23:23:29 +0200 (CEST)
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) Checking: YakjkrdFq6A8 [195.144.251.97] <> -> <[email protected]>
        Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: disconnect from mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) lookup_sql_field(id) (WARN: no such field in the SQL table), "[email protected]" result=undef
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: connect from localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: 0A1FA1528A21: client=localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber postfix/cleanup[29974]: 0A1FA1528A21: message-id=<[email protected]>
        Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: from=<>, size=2841, nrcpt=1 (queue active)
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: disconnect from localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) FWD via SMTP: <> -> <[email protected]>,BODY=7BIT 250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21
        Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) Passed CLEAN, [195.144.251.97] [195.144.251.97] <> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: YakjkrdFq6A8, Hits: 2.249, size: 2197, queued_as: 0A1FA1528A21, 2882 ms
        Jun 26 23:23:32 mycustomernumber postfix/smtp[29975]: 044991528995: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=3.3, delays=0.39/0.01/0.01/2.9, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21)
        Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 044991528995: removed
        Jun 26 23:23:33 mycustomernumber postfix/smtp[29980]: 0A1FA1528A21: to=<[email protected]>, orig_to=<[email protected]>, relay=mx3.hotmail.com[65.54.188.110]:25, delay=1.2, delays=0.15/0.02/0.51/0.55, dsn=2.0.0, status=sent (250 <[email protected]> Queued mail for delivery)
        Jun 26 23:23:33 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: removed
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection rate 1/60s for (smtp:195.144.251.97) at Jun 26 23:23:28
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection count 1 for (smtp:195.144.251.97) at Jun 26 23:23:28
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max cache size 1 at Jun 26 23:23:28

    I can provide more info if you tell me what you need to know. Thank you for your help!
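    For what it's worth, here is a minimal, hypothetical sketch of one way to check this from the log itself: collect the queue IDs of every message addressed to the suspect recipient, then print all log lines for those queue IDs, so that the client= line shows whether each message was injected locally or connected in from outside. The log path and the placeholder address are assumptions; substitute the address from one of the bounces:

        import re

        LOG = "/var/log/mail.log"           # assumed default Debian/Ubuntu location
        SUSPECT = "[email protected]"  # hypothetical stand-in for the .ru address

        QID = re.compile(r"postfix/[\w-]+\[\d+\]: ([0-9A-F]{6,}):")

        with open(LOG, encoding="utf-8", errors="replace") as f:
            lines = f.read().splitlines()

        # First pass: remember every queue ID whose log line mentions the address.
        qids = set()
        for line in lines:
            m = QID.search(line)
            if m and SUSPECT in line:
                qids.add(m.group(1))

        # Second pass: dump the full history of those queue IDs; the client= line
        # reveals where the message actually entered the system.
        for line in lines:
            m = QID.search(line)
            if m and m.group(1) in qids:
                print(line)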

  • Postfix/SMTPD Relay Access Denied when sending outside the network

    - by David
    I asked a very similar question some 4 or 5 months ago but haven't tracked down a suitable answer. I decided to post a new question so that I can (a) post updated info and (b) post my most current postconf -n output.

    When a user sends mail from inside the network (via webmail) to email addresses both inside and outside the network, the email is delivered. When a user with an email account on the system sends mail from outside the network, using the server as the relay, to addresses inside the network, the email is delivered. But sometimes, when a user connects via SMTP to send email to an external address, a Relay Access Denied error is returned:

        Feb 25 19:33:49 myers postfix/smtpd[8044]: NOQUEUE: reject: RCPT from host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182]: 554 5.7.1 <host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182]>: Client host rejected: Access denied; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<my-computer-name>
        Feb 25 19:33:52 myers postfix/smtpd[8044]: disconnect from host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182]

    Sending this through Microsoft Outlook 2003 generates the above log. However, sending through my iPhone, with the exact same settings, goes through fine:

        Feb 25 19:37:18 myers postfix/qmgr[3619]: A2D861302C9: from=<[email protected]>, size=1382, nrcpt=1 (queue active)
        Feb 25 19:37:18 myers amavis[2799]: (02799-09) FWD via SMTP: <[email protected]> -> <[email protected]>,BODY=7BIT 250 2.0.0 Ok, id=02799-09, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as A2D861302C9
        Feb 25 19:37:18 myers amavis[2799]: (02799-09) Passed CLEAN, [68.169.158.182] [68.169.158.182] <[email protected]> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: yMLvzVQJloFV, Hits: -9.607, size: 897, queued_as: A2D861302C9, 6283 ms
        Feb 25 19:37:18 myers postfix/lmtp[8752]: 2ED3A1302C8: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=6.6, delays=0.25/0.01/0.19/6.1, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=02799-09, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as A2D861302C9)
        Feb 25 19:37:18 myers postfix/qmgr[3619]: 2ED3A1302C8: removed

    Outgoing settings on Outlook 2003 match the settings on my iPhone:

        SMTP server: mail.my-domain.com
        Username: my full email address
        Uses SSL
        Server port: 587

    Now, here's postconf -n. I realize the mynetworks parameter is a bit nasty. I have these IP addresses in there for just this reason, as others have been complaining of this problem too:

        alias_database = hash:/etc/postfix/aliases
        alias_maps = $alias_database
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        content_filter = amavisfeed:[127.0.0.1]:10024
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        disable_vrfy_command = yes
        html_directory = no
        inet_interfaces = all
        mail_owner = postfix
        mail_spool_directory = /var/spool/mail
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq
        manpage_directory = /usr/share/man
        message_size_limit = 20480000
        mydestination = $myhostname, localhost, localhost.$mydomain
        mydomain = my-domain.com
        myhostname = myers.my-domain.com
        mynetworks = 127.0.0.0/8, 74.125.113.27, 74.125.82.49, 74.125.79.27, 209.85.161.0/24, 209.85.214.0/24, 209.85.216.0/24, 209.85.212.0/24, 209.85.160.0/24
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        receive_override_options = no_address_mappings
        recipient_delimiter = +
        relay_domains = $mydestination
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail
        setgid_group = postdrop
        smtp_bind_address = my-primary-server's IP address
        smtpd_banner = mail.my-domain.com
        smtpd_helo_required = yes
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/mailserver/postfix.pem
        smtpd_tls_key_file = /etc/ssl/mailserver/private/postfix.pem
        smtpd_tls_loglevel = 3
        smtpd_tls_received_header = no
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 554
        virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf,mysql:/etc/postfix/mysql-email2email.cf
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /var/vmail
        virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
        virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
        virtual_minimum_uid = 5000
        virtual_transport = dovecot
        virtual_uid_maps = static:5000

    If anyone has any ideas and can help me finally solve this issue once and for all, I'd be eternally grateful.
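    One commonly suggested arrangement, sketched here as an assumption rather than a verified fix for this setup: the postconf -n output above contains no smtpd_recipient_restrictions, so the built-in default applies, and that default does not include permit_sasl_authenticated. Spelling the restrictions out in main.cf at least makes the relay policy for roaming, authenticated clients explicit:

        # main.cf -- illustrative sketch; these are standard Postfix parameters,
        # but whether this cures the Outlook 2003 rejection is an assumption
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination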

  • Building nginx 1.0.4 on Amazon EC2 micro - perl and python problems

    - by digitaltoast
    I'd like to run nginx as a reverse proxy with apache2 on my EC2 micro instance. yum install nginx gives me nginx-0.8.53-1.2.amzn1.x86_64.rpm, but the current nginx release is 1.0.4. I found and followed this guide: http://kdn2.info/2011/05/install-nginx-on-amazon-ec2/

    It works fine up to and including "make". When I get to checkinstall --fstrans=no, I get:

        ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored.
        test -d '/var/log/nginx' || mkdir -p '/var/log/nginx'
        ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored.
        make[1]: Leaving directory `/root/src/nginx-1.0.4'

        ======================== Installation successful ==========================

        Copying documentation directory...
        ./
        ./CHANGES
        ./LICENSE
        ./README
        cp: cannot stat `//var/tmp/gRWoVgIcdbmjfTjoVGBM/newfiles.tmp': No such file or directory
        Copying files to the temporary directory...OK
        Striping ELF binaries and libraries...OK
        Compressing man pages...OK
        Building file list...OK
        Building RPM package... FAILED!

        *** Failed to build the package

    ...and the logfile is full of:

        Building target platforms: x86_64
        Building for target x86_64
        Processing files: nginx-1.0.4-1.x86_64
        error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr
        error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr/doc

    There IS a /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/ directory, but no /usr beneath it.

    Further down the page, the guide says: "If we want to use, for example, PHP 5.2 we can download PHP and Nginx compatible with Amazon Kernel (Xen Kernel) from the CentosALT Repository." So I install the two repositories, but when I yum install http://centos.alt.ru/pub/nginx/1.0/RPMS/x86_64/nginx-stable-1.0.4-1.el5.x86_64.rpm, I get:

        Error: Package: nginx-stable-1.0.4-1.el5.x86_64 (/nginx-stable-1.0.4-1.el5.x86_64)
               Requires: perl(:MODULE_COMPAT_5.8.8)
        You could try using --skip-broken to work around the problem

    ...but that doesn't fix it. When I do yum update, I get:

        --> Finished Dependency Resolution
        Error: Package: python-distribute-0.6.19-10.1.x86_64 (devel_languages_python)
               Requires: python < 2.5
               Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main)
                   python = 1:2.6-1.19.amzn1
        Error: Package: python-distribute-0.6.19-10.1.i586 (devel_languages_python)
               Requires: python < 2.5
               Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main)
                   python = 1:2.6-1.19.amzn1

    I've tried everything, including yum clean all and various other suggestions found on other sites. If anyone has any suggestions, or a known package of the current 1.0.4 nginx working on EC2 micro (Linux ip-10-56-63-85 2.6.35.11-83.9.amzn1.x86_64 #1 SMP Sat Feb 19 23:42:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux, which I think is RHEL 5 based?), then I'd be grateful. Incidentally, does this repolist look right?

        repo id                  repo name                                                           status
        CentALT                  CentALT Packages for Enterprise Linux 5 - x86_64                    enabled: 112+157
        amzn-main                amzn-main-Base                                                      enabled: 2,706
        amzn-main-debuginfo      amzn-main-debuginfo                                                 disabled
        amzn-main-nosrc          amzn-main-nosrc                                                     disabled
        amzn-updates             amzn-updates-Base                                                   enabled: 328
        amzn-updates-debuginfo   amzn-updates-debuginfo                                              disabled
        amzn-updates-nosrc       amzn-updates-nosrc                                                  disabled
        devel_languages_python   Python and Python Modules (SLE_10)                                  enabled: 1,452+768
        epel                     Extra Packages for Enterprise Linux 5 - x86_64                      enabled: 5,892+604
        epel-debuginfo           Extra Packages for Enterprise Linux 5 - x86_64 - Debug              disabled
        epel-source              Extra Packages for Enterprise Linux 5 - x86_64 - Source             disabled
        epel-testing             Extra Packages for Enterprise Linux 5 - Testing - x86_64            disabled
        epel-testing-debuginfo   Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Debug    disabled
        epel-testing-source      Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Source   disabled
        s3tools                  Tools for managing Amazon S3 - Simple Storage Service (RHEL_6)      enabled: 2+1
        repolist: 10,492

    Read the article

  • Ubuntu 10.04: OpenVZ Kernel and pure-ftpd issues on HOST (no guest setup yet)

    - by Seidr
After compiling and installing the OpenVZ flavour of kernel under Ubuntu 10.04, I am unable to browse to certain directories when connecting to the pure-ftpd server. The clients are dropping into PASSIVE mode, which is fine. This behaviour was happening before the change of kernel; however, now when I browse to certain directories the connection just gets dropped. This only happens with a few directories under one login (web in specific), whereas with another login it happens as soon as I connect. I've got the nf_conntrack_ftp kernel module installed (required to keep track of passive FTP connections as I understand it, and an alias of the ip_conntrack_ftp module), however this has provided no alleviation of my problem. This module was actually required upon initial setup of my OS to get passive FTP working correctly, however when I compiled the OpenVZ kernel a lot of these modules were missing (iptables, conntrack etc). I recompiled the kernel with the missing modules, but to no effect. I've turned verbosity for the pure-ftpd server up, and still no clues have been spotted in either syslog or the transfer log. Neither did an strace provide any clues (that I could discern anyway) - although one strange thing is that both in the output to the client and in the strace I notice that it does in fact probe the directory and return the number of matches - it just fails after that. One more thing to mention is that if I FTP using the same credentials locally, everything works fine. This suggests that it is in fact an issue with either the conntrack_ftp module not functioning as expected, or a deeper networking issue. The kernel was compiled and installed following the instructions at https://help.ubuntu.com/community/OpenVZ - bar the changes to the kernel configuration (such as adding iptables as a module). Below is an example of the client log (FileZilla):

Status: Resolving address of xxxx.co.uk
Status: Connecting to 78.46.xxx.xxx:21...
Status: Connection established, waiting for welcome message...
Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
Response: 220-You are user number 4 of 10 allowed.
Response: 220-Local time is now 08:52. Server port: 21.
Response: 220-This is a private system - No anonymous login
Response: 220-IPv6 connections are also welcome on this server.
Response: 220 You will be disconnected after 15 minutes of inactivity.
Command: USER xxx
Response: 331 User xxx OK. Password required
Command: PASS ********
Response: 230-User xxx has group access to: client1 sshusers
Response: 230 OK. Current restricted directory is /
Command: OPTS UTF8 ON
Response: 200 OK, UTF-8 enabled
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is your current location
Status: Directory listing successful
Status: Retrieving directory listing...
Command: CWD /web
Response: 250 OK. Current directory is /web
Command: TYPE I
Response: 200 TYPE is now 8-bit binary
Command: PORT 10,0,2,30,14,143
Response: 500 I won't open a connection to 10.0.2.30 (only to 188.220.xxx.xxx)
Command: PASV
Response: 227 Entering Passive Mode (78,46,79,147,234,110)
Command: MLSD
Response: 150 Accepted data connection
Response: 226-ASCII
Response: 226-Options: -a -l
Response: 226 57 matches total
Error: Could not read from transfer socket: ECONNRESET - Connection reset by peer
Error: Failed to retrieve directory listing

Any suggestions please? I'm willing to try anything!
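One thing worth ruling out - a sketch, not a known fix for this exact setup: pin pure-ftpd to a fixed passive port range and open that range explicitly, so the data connections no longer depend on the conntrack FTP helper at all. The port range and the Ubuntu config-file layout below are assumptions to adjust for your install.

# Load the FTP conntrack helper for the control port
sudo modprobe nf_conntrack_ftp ports=21

# Pin pure-ftpd's passive data connections to a fixed range
echo "30000 30100" | sudo tee /etc/pure-ftpd/conf/PassivePortRange

# Allow that range through the firewall so PASV works even without the helper
sudo iptables -A INPUT -p tcp --dport 30000:30100 -j ACCEPT

sudo /etc/init.d/pure-ftpd restart

If listings then work over the pinned range, the helper (or its interaction with the rebuilt kernel's conntrack modules) is the likely culprit; if they still fail, the problem is deeper than connection tracking.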

    Read the article

  • NPM not installing dependencies?

    - by neezer
Having trouble getting NPM to install dependencies with npm install -d in my project directory with a defined package.json file. Here's my package.json: https://gist.github.com/3068312 After wiping my project root's node_modules folder (rm -rf node_modules), I run npm install -d in my project root and am greeted with this:

(ssh) /vagrant git:master ? npm install -d
npm info it worked if it ends with ok
npm info using npm@1.1.4
npm info using node@v0.6.12
npm info preinstall [email protected]
npm http GET https://registry.npmjs.org/sinon
npm http GET https://registry.npmjs.org/underscore
npm http GET https://registry.npmjs.org/mocha
npm http GET https://registry.npmjs.org/request
npm http 304 https://registry.npmjs.org/sinon
npm http 304 https://registry.npmjs.org/underscore
npm http 304 https://registry.npmjs.org/mocha
npm http 304 https://registry.npmjs.org/request
npm info into /vagrant [email protected]
npm info into /vagrant [email protected]
npm info into /vagrant [email protected]
npm info into /vagrant [email protected]
npm info installOne [email protected]
npm info installOne [email protected]
npm info installOne [email protected]
npm info installOne [email protected]
npm info unbuild /vagrant/node_modules/underscore
npm info unbuild /vagrant/node_modules/mocha
npm info unbuild /vagrant/node_modules/sinon
npm info unbuild /vagrant/node_modules/request
npm ERR! error installing [email protected]
npm info unbuild /vagrant/node_modules/underscore
npm ERR! error rolling back [email protected] Error: UNKNOWN, unknown error '/vagrant/node_modules/underscore'
npm ERR! Error: ENOENT, no such file or directory '/vagrant/node_modules/underscore/package.json'
npm ERR! You may report this log at:
npm ERR! <http://bugs.debian.org/npm>
npm ERR! or use
npm ERR! reportbug --attach /vagrant/npm-debug.log npm
npm ERR!
npm ERR! System Linux 3.2.0-23-generic
npm ERR! command "node" "/usr/bin/npm" "install" "-d"
npm ERR! cwd /vagrant
npm ERR! node -v v0.6.12
npm ERR! npm -v 1.1.4
npm ERR! path /vagrant/node_modules/underscore/package.json
npm ERR! code ENOENT
npm ERR! message ENOENT, no such file or directory '/vagrant/node_modules/underscore/package.json'
npm ERR! errno {}
npm ERR! error installing [email protected]
npm info unbuild /vagrant/node_modules/request
npm ERR! error rolling back [email protected] Error: UNKNOWN, unknown error '/vagrant/node_modules/request'
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! /vagrant/npm-debug.log
npm not ok

If I rerun npm install -d, the error changes to whatever the next package is... if I keep running it over and over again, it eventually doesn't complain anymore and outputs:

(ssh) /vagrant git:master ? npm install -d
npm info it worked if it ends with ok
npm info using npm@1.1.4
npm info using node@v0.6.12
npm info preinstall [email protected]
npm info build /vagrant
npm info linkStuff [email protected]
npm info install [email protected]
npm info postinstall [email protected]
npm info ok

However, none of the dependencies for any of these packages get installed. For instance, cheerio has a few dependencies, so when I try running my test suite, I'm greeted with:

(ssh) /vagrant git:master ? mocha --compilers coffee:coffee-script --watch spec/*
node.js:201
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: Cannot find module 'cheerio-select'
at Function._resolveFilename (module.js:332:11)
at Function._load (module.js:279:25)
at Module.require (module.js:354:17)

What gives?
I'm on Ubuntu Precise64 in a Vagrant virtual box.
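A workaround worth trying - a sketch, assuming the root cause is the VirtualBox vboxsf shared folder backing /vagrant, which is known to misbehave with npm's rename and symlink operations: keep node_modules on the VM's native filesystem and symlink it into the project. The target path is arbitrary.

# Put node_modules on the guest's own disk instead of the vboxsf share
mkdir -p ~/vagrant-node_modules
rm -rf /vagrant/node_modules
ln -s ~/vagrant-node_modules /vagrant/node_modules

# Reinstall; --no-bin-links can also help on shares that cannot create symlinks
cd /vagrant
npm install -d --no-bin-links

If the share refuses to create the symlink itself, creating it from the host side (or enabling symlink support for the shared folder) achieves the same layout.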

    Read the article

  • How Expedia Made My New Bride Cry

    - by Lance Robinson
Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem and endless loops of explaining the issue over and over again - I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans. If you are considering using them now for an important trip - reconsider.

Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)! At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia. I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours. This is after 5 hours total at the airport. If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding.

Details below (times are approximate):

First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok, enough mushy - here are the dirty details.

2:30 AM - Early Check-in Attempt - we attempted to check in for our flight online. Some sort of technology error on the website; we were instructed to check in at the desk.

4:30 AM - Arrive at airport. Try to check in at the kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia-provided itinerary does not match the Expedia-provided tickets. We are informed that when that happens, American, JetBlue, and others that use the same software cannot check you in for the flight. Why? Because the itinerary was missing a leg of our flight! Basically we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system.

4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary, who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining.
Mary calmly confirms that this is the problem and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary gets connected, explains the situation, and then Mary's connection gets terminated.

5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales; a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon - could they please help us. "Yes, I will help you". Again I give the phone to Mary, who provides them with a callback number in case we get disconnected again and explains the situation again. More back and forth with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated.

5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones.

5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for not having the correct data. Meanwhile Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone with Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us.

6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - so it no longer matters why (not to mention the fact that we already knew why, and have known why, and the solution, since 4:30 AM). Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor, but that will take 1.5 hours.
6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, I asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed our flight, and she immediately went back into how the problem was with JetBlue - except now it was ALSO an Emirates Air problem. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which now is about 1 hour and 10 minutes away).

6:30 AM - I start tweeting my frustration from my iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. It's ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right.

7:45 AM - JetBlue Mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why, I'm not exactly sure). I say ok.

8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back.

9:35 AM - I put down the iPhone Twitter app and pick up the laptop. You think I made some noise with my iPhone? Heh.

11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details". If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response. The other 4? Ads. Um - #MultiFAIL?

To Expedia: You now have had (as explained above) 8 hours of 3 different people explaining our situation, you know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number. You also know how upset you have made both me and my new bride by treating us in such a... non-caring, scripted, uncooperative, argumentative, and possibly even deceitful manner. In the wise words of the great Kenan Thompson of SNL: "FIX IT!". And no, I'm NOT going away until you make this right. Period.

11:45 AM - Expedia corporate office called.
The woman I spoke to was very nice and apologetic. She listened to me tell the story again, she says she understands the problem, and she is going to work to resolve it. I don't have any details on what exactly that resolution might be; she said she will call me back in 20 minutes. She found out about the problem via Twitter. Thank you Twitter, and all of you who helped. Hopefully social media will win my wife and I our honeymoon, and hopefully Expedia will encourage their customer service teams to treat their customers properly.

12:22 PM - Spoke to Fran again from the Expedia corporate office. She has a flight for us tonight. She is booking it now. We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late. She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night. Thank you everyone for your help. I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed. For now, I'm going to take a breather and go kiss my wonderful wife!

1:50 PM - Have not yet received the promised phone call. We did receive an email with a new itinerary for a flight, but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together. With the original booking I carefully selected our seats for every segment of our trip. I decided to call the phone number that Fran from the Expedia corporate office gave me. Its automated voice system identified itself as "Tier 3 Support". I am currently still on hold with them; I have not gotten through to a human yet.

1:55 PM - Fran from Expedia called me back. She confirmed us as booked. She called the airlines to confirm. Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection. It is possible that I won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back). In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom. Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade). Since they didn't offer - I asked, and was rejected. I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near-zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution. If this works out favorably for us, great. If not - I'm not done making noise, Expedia. You owe us, and I expect you to make it right. You haven't quite done that yet.

Thanks - Thank you to Twitter. Thanks to all those who sympathize with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help. Thanks especially to my PowerShell and SharePoint friends, my local friends, and those connectors who encouraged me and spread my story.

5:15 PM - Love Wins - After all this, Lauren and I are exhausted. We both took a short nap, and when we woke up we talked about the last 24 hours. It was a big, amazing, story-filled 24 hours. I said that Expedia won, but Lauren said no. She pointed out how lucky we are. We are in love and married.
We have wonderful family and friends. We are both hard-working, successful people who love what we do. We get to go to an amazing exotic destination for our honeymoon like Veligandu in The Maldives... That's a lot of good. Expedia didn't win. This was (is) a big loss for Expedia. It is a public blemish for all to see. But Lauren and I did win, big time.

Post in progress... I will relay any further comments (or lack thereof) from Expedia soon, as well as an update on confirmation of their repayment of our lost resort room rates. I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line-of-business applications. Getting things set up correctly is critical to being able to take full advantage of the RIA Services plumbing, and when developers struggle with the setup they tend to shy away from the solution as a whole. I'm a big proponent of RIA Services and wanted to take the opportunity to share some of my experiences in setting up these types of projects. In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow), and the information shared in this post was promised during that presentation.

One other thing I want to mention before diving in is the existence of a number of other great posts on this subject. I've learned a lot from many of them and wanted to call out a few of them. The purpose of my post is to point out some of the gotchas that people get caught up on in the process, but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs.

Here are a few additional blog posts and articles you should check out on the subject:

http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx
http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx

Technologies

I don't intend for this post to turn into a full WCF RIA Services tutorial, but I did want to point out what technologies we will be using:

Visual Studio.NET 2010
Silverlight 4.0
WCF RIA Services for Visual Studio 2010
Entity Framework 4.0

I also wanted to point out that the screenshots came from my personal development box, which has a number of additional plug-ins and frameworks loaded, so a few of the screenshots might not match 100% with what you see on your own machines.

If you do not have Visual Studio 2010 you can download the express version from http://www.microsoft.com/express. The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#... sorry to you VB folks, but the concepts are 100% identical.

Setting up a new RIA Services Project

This section will provide a step-by-step walkthrough of setting up a new RIA Services project using a shared DLL for server-side code and a simple Entity Framework model for data access. All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace; this would be modified to match your company's standards.

First, open Visual Studio and open the new project window via File->New->Project. In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select "Silverlight Application" as your project type. Verify your solution name and location are set appropriately. Note that the project name we specified in the example below ends with .Client. This indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that. Click OK to continue.

During the creation of a new Silverlight 4 project you will be prompted with the following dialog to create a new ASP.NET web project to host your Silverlight content. As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the "Enable WCF RIA Services" option is checked and click OK.
Obviously, there are some other options here which have an effect on your solution, and you are welcome to look around. For our example we are going to leave the ASP.NET Web Application Project selected. If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project, these options are available as well. Also, whichever web project type you select, the name can be modified here as well. Note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix.

At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows:

Now we want to add our WCF RIA Services projects to this same solution. To do so, right-click on the Solution node in the solution explorer and select Add->New Project. In the New Project dialog again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below. Make sure your project name is set appropriately as well. For the sample below, we will name the project "ArchitectNow.RIAServices.Server.Entities". The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below). Click OK to continue.

Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution. The first will be a project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension. The full solution (with all 4 projects) is shown in the image below. The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects. It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server.

The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework). This is our server-side code and business logic, and the RIA Services plumbing will maintain a link between this project and the front end. Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end.

At this point, we want to do a little cleanup of the projects in our solution, and we will do so by deleting the "Class1.cs" class from both the .Entities project and the .Entities.Web project. (Has anyone ever intentionally named a class "Class1"?)

Next, we need to configure a few references to make RIA Services work. THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE!

Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library). Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).
To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select "Add Reference" from the resulting context menu. You will want to make sure these references are added as "Project" references to simplify your future debugging. To reiterate the reference direction using the project names we have utilized in this example thus far: .Client references .Entities, and .Client.Web references .Entities.Web. If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library, and the ASP.NET host project must reference the server-side class library.

Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web). We will do this by right-clicking on this project (ArchitectNow.RIAServices.Server.Entities.Web in the above diagram) and selecting Add->New Item. In the Add New Item dialog we will select ADO.NET Entity Data Model, as in the following diagram. For now we will call this simply SampleDataModel.edmx and click OK.

It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data, and any data access technology is supported (as long as the server-side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post). We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine. The following diagram shows a simple EF Data Model with two tables that I reverse engineered from a local data store. If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access.

At this point, once you have an EF data model generated as an EDMX into your .Entities.Web project, YOU MUST BUILD YOUR SOLUTION. I know it seems strange to call that out, but it is important that the solution be built at this point for the next step to be successful. Obviously, if you have any build errors, these must be addressed at this point.

At this point we will add a RIA Services Domain Service to our .Entities.Web project (our server-side code). We will need to right-click on the .Entities.Web project and select Add->New Item. In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below). Next, click "Add".

After clicking "Add" to include the Domain Service Class in the selected project, you will be presented with the following dialog. In it, you can choose which entities from the selected EDMX to include in your services and whether they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service. If the "Available DataContext/ObjectContext classes" dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX. I would also recommend verifying that the "Generate associated classes for metadata" option is selected. Once you have selected the appropriate options, click "OK".
Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following:

Note that in the solution you now have a SampleDataModel.edmx, which represents your EF data mapping to your database, and a SampleService.cs, which will contain a large amount of generated RIA Services code that RIA Services utilizes to access this data from the Silverlight front end. You will put all your server-side data access code and logic into the SampleService.cs class. The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes.

FINAL AND KEY CONFIGURATION STEP! One key step that causes significant headache to developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project. While we didn't point it out at the time, you can see it in the image above. This connection string will be required for the EF data model to successfully locate its data. Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.

Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA Services calls through the ASP.NET host project (i.e. .Client.Web). This host project has a reference to the .Entities.Web project which actually contains the code, so all will pass through correctly EXCEPT the fact that the host project will utilize its own Web.Config for any configuration settings. For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project. I know this is a bit tedious and I wish there were a simpler solution, but it is required for our RIA Services Domain Service to be made available to the front-end Silverlight project. Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config (a sketch of the sections typically involved appears at the end of this post). Unfortunately, the <system.webServer> section will exist in both, and the contents of this section will need to be manually merged. Fortunately, this is a step that needs to be taken only once per solution. As you add additional data structures and Domain Services methods to the server, no additional changes will be necessary to the Web.Config.

Next Steps

At this point, we have walked through the basic setup of a simple RIA Services solution. Unfortunately, there is still a lot to know about RIA Services, and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven't even made a single RIA Services call). I plan on posting a few more introductory posts over the next few weeks to take us to this step. If you have any questions on the content in this post, feel free to reach out to me via this blog and I'll gladly point you in (hopefully) the right direction.

Resources

Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA Services. While I plan on posting more on the subject, I didn't invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place.
The books and online resources below will go a long way to making you extremely productive with RIA Services in the shortest time possible. The only thing required of you is the dedication to take advantage of the resources available.

Books

Pro Business Applications with Silverlight 4
http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2

Silverlight 4 in Action
http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1

Pro Silverlight for the Enterprise (Books for Professionals by Professionals)
http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3

Web Content

RIA Services
http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes
http://silverlight.net/riaservices/
http://www.silverlight.net/learn/videos/all/net-ria-services-intro/
http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/
http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices
http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01
http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services
http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx

Silverlight
www.silverlight.net
http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx
http://channel9.msdn.com/shows/silverlighttv
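As promised above, here is a sketch of the sections that typically move from the .Entities.Web App.Config into the .Client.Web Web.Config. The exact contents should come from your own generated App.Config; the connection string below uses placeholder server and database values, and the assembly version and public key token shown are the ones the SL4-era RIA Services tooling normally emits, so verify them against your own file.

<!-- 1. The EF connection string generated alongside SampleDataModel.edmx (placeholder values) -->
<connectionStrings>
  <add name="SampleEntities"
       connectionString="metadata=res://*/SampleDataModel.csdl|res://*/SampleDataModel.ssdl|res://*/SampleDataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=Sample;Integrated Security=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>

<!-- 2. The RIA Services hosting module added when the Domain Service was created;
        merge this into the host project's existing <system.webServer> section -->
<system.webServer>
  <modules runAllManagedModulesForAllRequests="true">
    <add name="DomainServiceModule"
         type="System.ServiceModel.DomainServices.Hosting.DomainServiceHttpModule, System.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         preCondition="managedHandler" />
  </modules>
</system.webServer>

<!-- 3. WCF hosting compatibility so the domain service can run inside ASP.NET -->
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
</system.serviceModel>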

    Read the article

  • Install Control Center Agent on Oracle Application Server

    - by qianqian.wu
Control Center Agent (CCA)

The Control Center Agent is the OWB component that runs the Template Mappings in the Oracle Containers for J2EE (OC4J) server; it is also referred to as the J2EE Runtime. The Control Center Agent provides a Java-based runtime environment that can be installed on Oracle and non-Oracle database hosts, and it provides the fundamental infrastructure for the heterogeneous, Code Template-based mapping support and Web services-related features of OWB in this release.

In Oracle Warehouse Builder 11gR2 the Control Center Agent, by default, runs in the built-in OC4J that is bundled in the Oracle Home. Besides that, you also have the ability to install the Control Center Agent in an Oracle Application Server install. In this article, you will find step-by-step instructions on how to install the Control Center Agent on an Oracle Application Server instance. The instructions cover the following tasks:

Task 1: Install and Configure the Application Server
Task 2: Deploy the Control Center Agent to the Application Server
Task 3: Optional Configuration Tasks

Task 1: Install and Configure the Application Server

Before configuring the Application Server, you need to install it from the Oracle Application Server CD-ROM, or by downloading the installation program from Oracle Technology Network (OTN). Once the installation is completed, you are ready to configure the Application Server. The purpose of the configuration task is to make sure the Control Center Agent ear file can be deployed and run in the Application Server successfully. The essential configuration tasks are outlined below:

· Modify the OC4J Startup Script
· Set up Control Center Agent Server Side Logging
· Set up Audit Table Data Source
· Copy ct_permissions.properties File
· Set up Security Roles for Control Center Agent
· Create JMS Queues
· Install JDBC Drivers to OC4J

Modify the OC4J Startup Script

The OC4J startup script, opmn.xml, is located in the Application Server configuration directory, $AS_HOME/opmn/conf, where $AS_HOME stands for the root home directory of the application server. Open the file opmn.xml in a text editor and alter its contents, making sure that:

· MaxPermSize is set to 128M. This ensures that you allocate enough PermGen space to OC4J to run the Control Center Agent, and it prevents java.lang.OutOfMemoryError when running the agent.
· python.path sets the path for the Python library files used by the Control Center Agent: jython_lib.zip and jython_owblib.jar. These two files are in the $OWB_HOME/owb/lib/int directory, where $OWB_HOME is the directory where OWB is installed.
· km_security_needed determines whether restrictions will be applied to the kinds of operating system commands allowed to be executed by the OWB Code Template scripts executed by the Control Center Agent. Setting km_security_needed to "true" enforces such restrictions; setting it to "false" removes them.

Set up Control Center Agent Server Side Logging

Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file j2ee-logging.xml in a text editor and add a handler to the log handlers section; jrt-internal-log-handler is the handler used by the Control Center Agent runtime logger to create log files. Then add an entry to the loggers section to create the logger for Control Center Agent runtime auditing.
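A sketch of the two edits just described, since the exact samples are not reproduced here. The OPMN java-options syntax and the ODL handler factory class are standard OC4J; the OWB-specific system property names and the logger name oracle.wh.jrt are assumptions that should be checked against the OWB installation guide for your release.

In $AS_HOME/opmn/conf/opmn.xml, inside the OC4J process-type's start parameters:

<category id="start-parameters">
  <data id="java-options"
        value="-XX:MaxPermSize=128M -Dpython.path=$OWB_HOME/owb/lib/int/jython_lib.zip:$OWB_HOME/owb/lib/int/jython_owblib.jar -Dkm_security_needed=true"/>
</category>

In $AS_HOME/j2ee/home/config/j2ee-logging.xml, a handler in the log handlers section and a logger entry (log path and sizes are placeholders):

<log_handler name="jrt-internal-log-handler"
             class="oracle.core.ojdl.logging.ODLHandlerFactory">
  <property name="path" value="../log/jrt"/>
  <property name="maxFileSize" value="10485760"/>
  <property name="maxLogSize" value="104857600"/>
  <property name="encoding" value="UTF-8"/>
</log_handler>

<logger name="oracle.wh.jrt" level="NOTIFICATION:1" useParentHandlers="false">
  <handler name="jrt-internal-log-handler"/>
</logger>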
Set up Audit Table Data Source

To enable Audit Table logging, a managed data source and connection pool need to be set up before Control Center Agent deployment. Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file data-sources.xml in a text editor and define the audit data source shown below in the file:

<managed-data-source name="AuditDS"
    connection-pool-name="OWBSYS Audit Connection Pool"
    jndi-name="jdbc/AuditDS"/>
<connection-pool name="OWBSYS Audit Connection Pool">
  <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
      user="owbsys_audit" password="owbsys_audit"
      url="jdbc:oracle:thin:@//localhost:1521/ORCL"/>
</connection-pool>

Copy ct_permissions.properties File

The ct_permissions.properties file can be obtained from the $OWB_HOME/owb/jrt/config/ directory. You need to copy the file to the $AS_HOME/j2ee/home/config directory. This properties file takes effect when the setting km-security is set to true in the Control Center Agent. By default, ALLOWED_CMD is commented out in the ct_permissions.properties file. This prevents all system commands from being invoked from scripts executed in the Control Center Agent (when km-security is set to true). To allow certain system commands to be invoked, ALLOWED_CMD needs to be uncommented, and the system commands allowed to be invoked need to be added to ALLOWED_CMD.

Set up Security Roles for Control Center Agent

You can set up the Control Center Agent security roles through Oracle Enterprise Manager. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).

1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Security. Click the task icon next to Security Providers.
2. On the Security Providers page, click the "Instance Level Security" button. On the Instance Level Security page, go to the "Realms" tab. You will see a row for the default realm "jazn.com" in the results table; it has a "Roles" column and a "Users" column. Click on the number in the "Roles" column. The "Roles" page displays all the roles available for the realm. Click the "Create" button to create a new role, "OWB_J2EE_EXECUTOR".
3. On the Add Role screen, enter the Name OWB_J2EE_EXECUTOR, and click OK.
4. Following the same steps as before, create a new role, "OWB_J2EE_OPERATOR".
5. Assign the roles "oc4j-administrators" and "OWB_J2EE_EXECUTOR" to the role "OWB_J2EE_OPERATOR" by moving these roles from "Available Roles", and click "OK" to save.
6. Go back to the Instance Level Security page and create a new role, "OWB_J2EE_ADMINISTRATOR".
7. Assign the roles "OWB_J2EE_OPERATOR" and "OWB_J2EE_EXECUTOR" to the role "OWB_J2EE_ADMINISTRATOR" by moving these roles from "Available Roles", and click "OK" to save.
8. Go back to the Instance Level Security page. This time, click on the number in the "Users" column for the realm "jazn.com". The "Users" page shows all the users defined for this realm. Locate the user "oc4jadmin" in the results table and click on it.
9. Assign the roles "OWB_J2EE_ADMINISTRATOR" and "oc4j-app-administrators" to this user by moving the roles from the "Available Roles" selection box to the "Selected Roles" box, and click "Apply" to save.
10. Go back to the Instance Level Security page and create a new role, "OWB_INTERNAL_USERS"; assign no user or role to this role.
Simply click "OK" to create this role. Now you have finished creating the security roles required for the Control Center Agent.

Create JMS Queues

You need to create two JMS queues for the Control Center Agent: owbQueue and abort_owbQueue.

1. Go to the OC4J home page. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Services and then expand Enterprise Messaging Service. Click the task icon next to JMS Destinations.
2. On the JMS Destinations page, click the "Create New" button to create a new JMS queue. On the Add Destination page, choose "Queue" as the Destination Type. Enter "owbQueue" as the Destination Name. Select "In Memory Persistence Only" as the Persistence Type, enter "jms/owbQueue" as the JNDI Location, and click "OK" to finish.
3. Follow the same instructions as above to create the abort_owbQueue.

Now you have finished creating the JMS queues required for the Control Center Agent.

Install JDBC Drivers to OC4J

In order to execute Code Templates using commercial databases other than Oracle, e.g. DB2, SQL Server etc., the corresponding JDBC driver files need to be added to the $AS_HOME/j2ee/home/applib directory.

1. To install other JDBC drivers to OC4J, first obtain the .jar file containing the JDBC driver. All the external JDBC driver .jar files can be found in the directory $OWB_HOME/owb/lib/ext/. For DB2, the files needed are db2jcc.jar and db2jcc_license_cu.jar. For SQL Server the file is sqljdbc.jar. For the Sunopsis JDBC drivers, the file needed is snpsxmlo.jar.
2. Copy the required JDBC driver file into the directory $AS_HOME/j2ee/home/applib.

Now you have finished the Application Server configuration. To make the configuration take effect, you need to restart the Application Server.

Task 2: Deploy the Control Center Agent to the Application Server

Now you can deploy the Control Center Agent to the Application Server. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).

1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Applications tab. Click Deploy to begin deploying the Control Center Agent.
2. On the Deploy: Select Archive screen, under Archive, select "Archive is present on local host. Upload the archive to the server where Application Server Control is running." Click Browse and locate the jrt.ear file in the $OWB_HOME/owb/jrt/applications directory. Under Deployment Plan, select "Automatically create a new deployment plan". Click Next.
3. Wait for the ear file to be uploaded to the Application Server. On the Deploy: Application Attributes screen, enter the Application Name jrt and the Context Root jrt. Leave the other attributes at their default values. Click Next.
4. On the Deploy: Deployment Settings screen, leave all attributes at their default values and click Deploy. This will take about a minute or so, and when the application is deployed successfully, a confirmation message will be displayed. Now the Control Center Agent is started automatically. Go back to the OC4J home page and click on the Applications tab to make sure the deployed application jrt is showing in the applications list.

Task 3: Optional Configuration Tasks

The optional configuration tasks are:

· Secure Control Center Agent Web Service
· Setting the PATH Environment Variable

Secure Control Center Agent Web Service

If you want to use the JRTWebService with a secure website, you need to do the following steps:
1. Create a file "secure-web-site.xml" in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory. We need to modify the "protocol" to "https" and "secure" to "true", choose a port as the secure HTTP port, and add an "ssl-config" entry to the file (a sketch of the resulting files appears at the end of this article). Remember to use the absolute path for the key store file.
2. Modify the file "server.xml" that is located in the $AS_HOME/j2ee/home/config directory, and add a <web-site> element in the file for the secure web site.
3. Create a key store file "serverkeystore.jks" in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory.

After the three files are altered, restart the application server. Now you can access the JRTWebService over SSL through https://hostname:4443/jrt/webservice.

Setting the PATH Environment Variable

Sometimes, system commands such as the Linux ls, sh, etc. cannot be executed successfully during script execution because they are not found in the PATH. To ensure they work normally, you can set up the PATH environment variable. Navigate to the Enterprise Manager homepage:

1. Go to the OC4J home screen and click the Administration tab. Expand Administration Tasks, then expand Properties. Click the task icon next to Server Properties.
2. On the Server Properties screen, scroll down to the Environment Variables section. Under Environment Variables, click Add Another Row. Enter PATH as the Name, fill Value with the directories that contain the system commands, and click Apply.

After you work through this article, I believe you will have developed a deeper understanding of the Control Center Agent installation process, and you can apply this knowledge to other installation plans, such as a Control Center Agent installation on a standalone OC4J.
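As referenced in the Secure Control Center Agent Web Service section above, here is a sketch of the two XML changes. The web-site and ssl-config elements are standard OC4J 10.1.3 syntax; the port, display name, and keystore password are placeholders for your own values.

$AS_HOME/j2ee/home/config/secure-web-site.xml:

<web-site xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          port="4443" protocol="https" secure="true"
          display-name="OC4J Secure Web Site">
  <!-- absolute path to the keystore created in step 3; password is a placeholder -->
  <ssl-config keystore="/u01/app/oracle/as/j2ee/home/config/serverkeystore.jks"
              keystore-password="welcome1"/>
  <default-web-app application="default" name="defaultWebApp"/>
  <access-log path="../log/secure-web-site.log"/>
</web-site>

And in server.xml, alongside the existing web-site entries:

<web-site path="./secure-web-site.xml"/>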

    Read the article

  • Unexpected start of already-primary server processes when heartbeat on secondary is stopped.

    - by vorik
Hi, I've got an active-passive Heartbeat cluster with Apache, MySQL, ActiveMQ and DRBD. Today, I wanted to perform hardware maintenance on the secondary node (node04), so I stopped the heartbeat service before shutting it down. Then, the primary node (node03) received a shutdown notice from the secondary node (node04). This logging comes from the primary node:

node03 heartbeat[4458]: 2010/03/08_08:52:56 info: Received shutdown notice from 'node04.companydomain.nl'.
heartbeat[4458]: 2010/03/08_08:52:56 info: Resources being acquired from node04.companydomain.nl.
harc[27522]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/status status
heartbeat[27523]: 2010/03/08_08:52:56 info: Local Resource acquisition completed.
mach_down[27567]: 2010/03/08_08:52:56 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
mach_down[27567]: 2010/03/08_08:52:56 info: mach_down takeover complete for node node04.companydomain.nl.
heartbeat[4458]: 2010/03/08_08:52:56 info: mach_down takeover complete.
harc[27620]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/ip-request-resp ip-request-resp
ip-request-resp[27620]: 2010/03/08_08:52:56 received ip-request-resp drbddisk OK yes
ResourceManager[27645]: 2010/03/08_08:52:56 info: Acquiring resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212
ResourceManager[27645]: 2010/03/08_08:52:56 info: Running /etc/ha.d/resource.d/drbddisk start
Filesystem[27700]: 2010/03/08_08:52:57 INFO: Running OK
ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/mysql start
mysql[27783]: 2010/03/08_08:52:57 Starting MySQL[ OK ]
apache[27853]: 2010/03/08_08:52:57 INFO: Running OK
ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/monitor start
monitor[28160]: 2010/03/08_08:52:58
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq start
activemq[28210]: 2010/03/08_08:52:58 Starting ActiveMQ Broker... ActiveMQ Broker is already running.
ResourceManager[27645]: 2010/03/08_08:52:58 ERROR: Return code 1 from /etc/ha.d/resource.d/activemq
ResourceManager[27645]: 2010/03/08_08:52:58 CRIT: Giving up resources due to failure of activemq
ResourceManager[27645]: 2010/03/08_08:52:58 info: Releasing resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/IPaddr 1.2.3.212 stop
IPaddr[28329]: 2010/03/08_08:52:58 INFO: ifconfig eth0:0 down
IPaddr[28312]: 2010/03/08_08:52:58 INFO: Success
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop
MailTo[28378]: 2010/03/08_08:52:58 INFO: Success
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop
MailTo[28433]: 2010/03/08_08:52:58 INFO: Success
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/tivoli-cluster stop
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq stop
activemq[28503]: 2010/03/08_08:53:01 Stopping ActiveMQ Broker... Stopped ActiveMQ Broker.
ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/monitor stop
monitor[28681]: 2010/03/08_08:53:01
ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/LVSSyncDaemonSwap master stop
LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster down
LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncbackup up
LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster released
ResourceManager[27645]: 2010/03/08_08:53:02 info: Running /etc/ha.d/resource.d/apache /etc/httpd/conf/httpd.conf stop
apache[28782]: 2010/03/08_08:53:03 INFO: Killing apache PID 18390
apache[28782]: 2010/03/08_08:53:03 INFO: apache stopped.
apache[28771]: 2010/03/08_08:53:03 INFO: Success
ResourceManager[27645]: 2010/03/08_08:53:03 info: Running /etc/ha.d/resource.d/mysql stop
mysql[28851]: 2010/03/08_08:53:24 Shutting down MySQL.....................[ OK ]
ResourceManager[27645]: 2010/03/08_08:53:24 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd0 /data ext3 stop
Filesystem[29010]: 2010/03/08_08:53:25 INFO: Running stop for /dev/drbd0 on /data
Filesystem[29010]: 2010/03/08_08:53:25 INFO: Trying to unmount /data
Filesystem[29010]: 2010/03/08_08:53:25 ERROR: Couldn't unmount /data; trying cleanup with SIGTERM
Filesystem[29010]: 2010/03/08_08:53:25 INFO: Some processes on /data were signalled
Filesystem[29010]: 2010/03/08_08:53:27 INFO: unmounted /data successfully
Filesystem[28999]: 2010/03/08_08:53:27 INFO: Success
ResourceManager[27645]: 2010/03/08_08:53:27 info: Running /etc/ha.d/resource.d/drbddisk stop
heartbeat[4458]: 2010/03/08_08:53:29 WARN: node node04.companydomain.nl: is dead
heartbeat[4458]: 2010/03/08_08:53:29 info: Dead node node04.companydomain.nl gave up resources.
heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth0 dead.
heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth1 dead.
hb_standby[29193]: 2010/03/08_08:53:57 Going standby [foreign].
heartbeat[4458]: 2010/03/08_08:53:57 info: node03.companydomain.nl wants to go standby [foreign]

Soo... What just happened here???
Heartbeat on node04 stopped and told node03, which was the active node at the time. Somehow, node03 decided to start the cluster processes that were already running. (For the processes that are not critical, I always return 0 from the startup script so a failing non-essential part does not stop the entire cluster.) When starting ActiveMQ, the script returns status 1 because the broker is already running. This fails the node and shuts everything down. As heartbeat is no longer running on the secondary node, the cluster cannot fail over to it. When I tried to run ha_takeover to restart the resources, absolutely nothing happened. Only after I restarted heartbeat on the primary node could the resources be started (after a delay of 2 minutes).

These are my questions:
- Why does heartbeat on the primary node try to start the cluster processes again?
- Why did ha_takeover not work?
- What can I do to prevent this from happening?

Server configuration:

DRBD:
version: 8.3.7 (api:88/proto:86-91)
GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by [email protected], 2010-01-20 09:14:48
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate B r----
    ns:0 nr:6459432 dw:6459432 dr:0 al:0 bm:301 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

uname -a
Linux node04 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

haresources
node03.companydomain.nl \
    drbddisk \
    Filesystem::/dev/drbd0::/data::ext3 \
    mysql \
    apache::/etc/httpd/conf/httpd.conf \
    LVSSyncDaemonSwap::master \
    monitor \
    activemq \
    tivoli-cluster \
    MailTo::[email protected]::DRBDFailureDrisAcc \
    MailTo::[email protected]::DRBDFailureDrisAcc \
    1.2.3.212

ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 500ms
deadtime 30
warntime 10
initdead 120
udpport 694
mcast eth0 225.0.0.3 694 1 0
mcast eth1 225.0.0.4 694 1 0
auto_failback off
node node03.companydomain.nl
node node04.companydomain.nl
respawn hacluster /usr/lib64/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster

Thank you very much in advance,

Ger Apeldoorn
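The immediate trigger visible in the log is the activemq resource script returning 1 for an already-running broker, which ResourceManager treats as a failed start and then releases the whole group. A minimal sketch of an idempotent wrapper - assuming an LSB-style init script at /etc/init.d/activemq; adjust the paths to the actual install - that reports success in that case:

#!/bin/sh
# /etc/ha.d/resource.d/activemq -- sketch of an idempotent heartbeat resource script

case "$1" in
  start)
    # If the broker is already up, report success instead of rc=1,
    # so ResourceManager does not give up the whole resource group.
    if /etc/init.d/activemq status >/dev/null 2>&1; then
      exit 0
    fi
    exec /etc/init.d/activemq start
    ;;
  stop)
    # Stopping an already-stopped broker is also treated as success.
    /etc/init.d/activemq stop
    exit 0
    ;;
  status)
    exec /etc/init.d/activemq status
    ;;
  *)
    echo "Usage: $0 {start|stop|status}" >&2
    exit 1
    ;;
esac

This only addresses the symptom; it does not explain why heartbeat re-acquired resources it already held, but it would have kept the group alive while that was happening.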

    Read the article

  • openvpn: after changing to server mode, client does not create TUN device

    - by lurscher
I had a previously working configuration with the config files used in a previous question. However, I've changed this now to the following configuration using server mode. Everything in the logs seems fine, but the client doesn't bring up any usable tun interface, so I don't have anything to connect with. Presumably I need to add or push some route commands, but at this point I don't have any idea what I need to do. I am posting all my relevant configuration files.

server.conf:
dev tun
server 10.8.117.0 255.255.255.0
ifconfig-pool-persist ipp.txt
tls-server
dh /home/lurscher/keys/dh1024.pem
ca /home/lurscher/keys/ca.crt
cert /home/lurscher/keys/vpnCh8TestServer.crt
key /home/lurscher/keys/vpnCh8TestServer.key
status openvpn-status.log
log openvpn.log
comp-lzo
verb 3

and client.conf:
dev tun
remote my.server.com
tls-client
ca /home/chuckq/keys/ca.crt
cert /home/chuckq/keys/vpnCh8TestClient.crt
key /home/chuckq/keys/vpnCh8TestClient.key
ns-cert-type server
; port 1194
; user nobody
; group nogroup
status openvpn-status.log
log openvpn.log
comp-lzo
verb 3

The server ifconfig shows a configured tun device:

tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:10.8.117.1 P-t-P:10.8.117.2 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

However, on the client, tun0 exists but never comes up and gets no IP address:

$ ifconfig tun0
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
POINTOPOINT NOARP MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

The client log says:

Tue May 17 23:27:09 2011 OpenVPN 2.1.0 i686-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Jul 12 2010
Tue May 17 23:27:09 2011 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
Tue May 17 23:27:09 2011 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Tue May 17 23:27:09 2011 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
Tue May 17 23:27:09 2011 LZO compression initialized
Tue May 17 23:27:09 2011 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
Tue May 17 23:27:09 2011 TUN/TAP device tun0 opened
Tue May 17 23:27:09 2011 TUN/TAP TX queue length set to 100
Tue May 17 23:27:09 2011 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
Tue May 17 23:27:09 2011 Local Options hash (VER=V4): '41690919'
Tue May 17 23:27:09 2011 Expected Remote Options hash (VER=V4): '530fdded'
Tue May 17 23:27:09 2011 Socket Buffers: R=[114688->131072] S=[114688->131072]
Tue May 17 23:27:09 2011 UDPv4 link local (bound): [undef]
Tue May 17 23:27:09 2011 UDPv4 link remote: [AF_INET]192.168.0.101:1194
Tue May 17 23:27:09 2011 TLS: Initial packet from [AF_INET]192.168.0.101:1194, sid=8e8bdc33 f4275407
Tue May 17 23:27:09 2011 VERIFY OK: depth=1, /C=CA/ST=Out/L=There/O=Ubuntu/OU=Home/CN=Ubuntu_CA/name=lurscher/[email protected]
Tue May 17 23:27:09 2011 VERIFY OK: nsCertType=SERVER
Tue May 17 23:27:09 2011 VERIFY OK: depth=0, /C=CA/ST=Out/L=There/O=Ubuntu/OU=Home/CN=vpnCh8TestServer/name=lurscher/[email protected]
Tue May 17 23:27:09 2011 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
Tue May 17 23:27:09 2011 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue May 17 23:27:09 2011 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
Tue May 17 23:27:09 2011 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue May 17 23:27:09 2011 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Tue May 17 23:27:09 2011 [vpnCh8TestServer] Peer Connection Initiated with [AF_INET]192.168.0.101:1194
Tue May 17 23:27:10 2011 Initialization Sequence Completed

The client status log:

OpenVPN STATISTICS
Updated,Tue May 17 23:30:09 2011
TUN/TAP read bytes,0
TUN/TAP write bytes,0
TCP/UDP read bytes,5604
TCP/UDP write bytes,4244
Auth read bytes,0
pre-compress bytes,0
post-compress bytes,0
pre-decompress bytes,0
post-decompress bytes,0
END

And the server log says:

Tue May 17 23:18:25 2011 OpenVPN 2.1.0 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Jul 12 2010
Tue May 17 23:18:25 2011 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
Tue May 17 23:18:25 2011 WARNING: --keepalive option is missing from server config
Tue May 17 23:18:25 2011 NOTE: your local LAN uses the extremely common subnet address 192.168.0.x or 192.168.1.x. Be aware that this might create routing conflicts if you connect to the VPN server from public locations such as internet cafes that use the same subnet.
Tue May 17 23:18:25 2011 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Tue May 17 23:18:25 2011 Diffie-Hellman initialized with 1024 bit key
Tue May 17 23:18:25 2011 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
Tue May 17 23:18:25 2011 TLS-Auth MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
Tue May 17 23:18:25 2011 ROUTE default_gateway=192.168.0.1
Tue May 17 23:18:25 2011 TUN/TAP device tun0 opened
Tue May 17 23:18:25 2011 TUN/TAP TX queue length set to 100
Tue May 17 23:18:25 2011 /sbin/ifconfig tun0 10.8.117.1 pointopoint 10.8.117.2 mtu 1500
Tue May 17 23:18:25 2011 /sbin/route add -net 10.8.117.0 netmask 255.255.255.0 gw 10.8.117.2
Tue May 17 23:18:25 2011 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
Tue May 17 23:18:25 2011 Socket Buffers: R=[126976->131072] S=[126976->131072]
Tue May 17 23:18:25 2011 UDPv4 link local (bound): [undef]
Tue May 17 23:18:25 2011 UDPv4 link remote: [undef]
Tue May 17 23:18:25 2011 MULTI: multi_init called, r=256 v=256
Tue May 17 23:18:25 2011 IFCONFIG POOL: base=10.8.117.4 size=62
Tue May 17 23:18:25 2011 IFCONFIG POOL LIST
Tue May 17 23:18:25 2011 vpnCh8TestClient,10.8.117.4
Tue May 17 23:18:25 2011 Initialization Sequence Completed
Tue May 17 23:27:22 2011 MULTI: multi_create_instance called
Tue May 17 23:27:22 2011 192.168.0.104:1194 Re-using SSL/TLS context
Tue May 17 23:27:22 2011 192.168.0.104:1194 LZO compression initialized
Tue May 17 23:27:22 2011 192.168.0.104:1194 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
Tue May 17 23:27:22 2011 192.168.0.104:1194 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
Tue May 17 23:27:22 2011 192.168.0.104:1194 Local Options hash (VER=V4): '530fdded'
Tue May 17 23:27:22 2011 192.168.0.104:1194 Expected Remote Options hash (VER=V4): '41690919'
Tue May 17 23:27:22 2011 192.168.0.104:1194 TLS: Initial packet from [AF_INET]192.168.0.104:1194, sid=8972b565 79323f68
Tue May 17 23:27:22 2011 192.168.0.104:1194 VERIFY OK: depth=1, /C=CA/ST=Out/L=There/O=Ubuntu/OU=Home/CN=Ubuntu_CA/name=lurscher/[email protected]
Tue May 17 23:27:22 2011 192.168.0.104:1194 VERIFY OK: depth=0, /C=CA/ST=Out/L=There/O=Ubuntu/OU=Home/CN=Ubuntu_CA/name=lurscher/[email protected]
Tue May 17 23:27:22 2011 192.168.0.104:1194 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
Tue May 17 23:27:22 2011 192.168.0.104:1194 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue May 17 23:27:22 2011 192.168.0.104:1194 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
Tue May 17 23:27:22 2011 192.168.0.104:1194 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue May 17 23:27:22 2011 192.168.0.104:1194 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Tue May 17 23:27:22 2011 192.168.0.104:1194 [vpnCh8TestClient] Peer Connection Initiated with [AF_INET]192.168.0.104:1194
Tue May 17 23:27:22 2011 vpnCh8TestClient/192.168.0.104:1194 MULTI: Learn: 10.8.117.6 -> vpnCh8TestClient/192.168.0.104:1194
Tue May 17 23:27:22 2011 vpnCh8TestClient/192.168.0.104:1194 MULTI: primary virtual IP for vpnCh8TestClient/192.168.0.104:1194: 10.8.117.6

Finally, the server status log:

OpenVPN CLIENT LIST
Updated,Tue May 17 23:36:25 2011
Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
vpnCh8TestClient,192.168.0.104:1194,4244,5604,Tue May 17 23:27:22 2011
ROUTING TABLE
Virtual Address,Common Name,Real Address,Last Ref
10.8.117.6,vpnCh8TestClient,192.168.0.104:1194,Tue May 17 23:27:22 2011
GLOBAL STATS
Max bcast/mcast queue length,0
END
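Update (my current best guess, not yet verified): the client.conf above uses tls-client but never tells OpenVPN to accept what the server pushes. With the server directive, the address and routes for tun0 are pushed from the server, and the client only applies them when pull is set; the client macro is shorthand for tls-client plus pull. A sketch of the change:

# client.conf (sketch) -- replace "tls-client" with the "client" macro,
# or keep tls-client and add "pull" explicitly:
client
; tls-client
; pull

If that is right, the client should then log a PUSH_REPLY and run ifconfig on tun0 itself, which would explain the interface currently sitting there with no address.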

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
The Windows Azure Team just published their new development portal this week, along with SDK 1.3. This new release contains a lot of cool features. The one I'm looking forward to is Remote Desktop Access to your running Windows Azure virtual machine.

Configuring Remote Desktop Access

It is very simple to enable remote desktop access for an azure service. First of all, let's create a new Windows Azure project in Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the azure project node in Solution Explorer and select "Publish". Select "Deploy your Windows Azure project to Windows Azure" at the top radio button, and then select the credential, deployment service/slot, storage and label as usual. You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature.

If you are familiar with this dialog you will notice a link named "Configure Remote Desktop connections". This is where we make the service enable the remote desktop feature. After clicking this link we set up the remote desktop access authorization information. There are 4 things we need to configure:

Certificate: We need to either create or select a certificate file in order to encrypt the access credentials. In this example I will use the certificate file for my Management API.
Username: The remote desktop user name to access the virtual machine.
Password: The password for the access.
Expiration: The access credentials expire after 1 month by default, but we can change that here.

After that we click the OK button to get back to the publish dialog.

The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to this service. The user name and password for accessing the azure machine are encrypted on the local machine, sent to the Windows Azure platform, and then decrypted on the azure side using the same certificate file. This is why we need to upload the certificate file to azure. We navigate to "Hosted Services, Storage Accounts & CDN" in the left panel, create a new hosted service named "SDK13" and select its "Certificates" node. Then we click the "Add Certificates" button, select the local certificate file and enter the password to install it into this azure service.

The final step is back in Visual Studio: in the publish dialog, just click the OK button. Visual Studio will upload our package and the configuration to our service with the remote desktop settings.

Remote Desktop Access to the Azure Virtual Machine

With all that done, let's have a look back at the Windows Azure development portal. If I select the web role that I just published, we can see a section on the toolbar named "Remote Access". In this section the Enable checkbox is checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button and update the user name, password, certificate and the expiration date.

Let's select the instance node under the web role. In this case I just created one instance for the demo.
We can see that when we select the instance node, the Connect button becomes enabled. Clicking this button downloads an RDP file: a Remote Desktop configuration file that we can use to access our azure virtual machine. Let's download it to our local machine and run it. We enter the user name and password we specified when we published our application to azure and click OK. A certificate warning dialog may appear; this is because the certificate we use for encryption is not signed by a trusted provider. Just select OK in these cases, as we know the certificate is safe for us. Finally, the Windows Azure virtual machine appears.

A Quick Look into the Azure Virtual Machine

Let's take a very quick look into our virtual machine. There are 3 disks available to us: C, D and E.

Disk C: Stores the local resources, diagnostics information, etc.
Disk D: The system disk, which contains the OS, IIS, the .NET Frameworks, etc.
Disk E: Stores our application code, and the IIS site hosting our website on Azure.

We can also inspect the IP configuration of the azure virtual machine.

Summary

In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We can enable the access per service, and all of the instances of that service can then be accessed through the remote desktop tool. With this feature we can dig into the virtual machines of our instances to see the inner information such as the system events, IIS logs, system information, etc. But we should be careful about modifying the system settings, for 2 reasons as far as I know:

1. If we have more than one instance in our service, we must ensure that any system settings we modify are applied to all instances/virtual machines. Otherwise, as the machines sit behind the azure load balancer, our application may not work consistently due to the different settings between the instances.
2. When a virtual machine encounters a problem and needs to be moved to another physical machine, all the settings we made by hand will disappear.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
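PS: if you are curious what the publish wizard actually writes for remote access, it ends up as settings in ServiceConfiguration.cscfg. The snippet below is only my sketch of what these entries look like; the setting names come from the RemoteAccess and RemoteForwarder plugins, and the thumbprint, password and expiration values are placeholders:

<Role name="MvcWebRole1">
  <Instances count="1" />
  <ConfigurationSettings>
    <!-- Remote desktop access for this role -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="shaun" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="[base64 blob encrypted with the uploaded certificate]" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2011-01-08T23:59:59.0000000+08:00" />
    <!-- One role per deployment runs the forwarder -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
  </ConfigurationSettings>
  <Certificates>
    <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption"
                 thumbprint="[certificate thumbprint]" thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>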

    Read the article

  • Getting MySQL work with Entity Framework 4.0

    - by DigiMortal
Does MySQL work with Entity Framework 4.0? The answer is: yes, it works! I just put up an experimental project to play with MySQL and Entity Framework 4.0, and in this posting I will show you how to get MySQL data into EF. I will also give some suggestions for deploying your applications to hosting and cloud environments.

MySQL stuff

As you may guess, you need MySQL running somewhere. I have MySQL installed on my development machine so I can also develop when I'm offline. The other thing you need is the MySQL Connector for the .NET Framework. There is currently a development version of MySQL Connector/NET 6.3.5 available that supports Visual Studio 2010. Before you start, download MySQL and Connector/NET:

MySQL Community Server
Connector/NET 6.3.5

If you are not a big fan of phpMyAdmin then you can try out a free desktop client for MySQL – HeidiSQL. I am using it and I am really happy with this program.

NB! If you have just set up MySQL, also create a database with a couple of tables in it. To use all the features of Entity Framework 4.0, I suggest you use InnoDB or another engine that has support for foreign keys.

Connecting MySQL to Entity Framework 4.0

Now create a simple console project using Visual Studio 2010 and go through the following steps.

1. Add a new ADO.NET Entity Data Model to your project. Give the model a name that is informative and that you can recognize later. Now you can choose how you want to create your model. Select "Generate from database" and click OK.

2. Set up the database connection. Change the data connection and select MySQL Database as the data source. You may also need to set the provider – there is only one choice; select it if the data provider combo box shows an empty value. Click OK and enter the connection information you are asked for. Don't forget to click the Test Connection button to see if your connection data is okay. If everything works, click OK.

3. Enter the context name. You should now see the following dialog. Enter your data model name for the application configuration file and click OK. Click the Next button.

4. Select tables for the model. Now you can select the tables and views your classes are based on. I have a small database with events data. Uncheck the checkbox "Include foreign key columns in the model" – it is damn annoying to get them out of the model later. Also enter an informative and easy-to-remember name for your model. Click the Finish button.

5. Define your classes. Now it's time to define your classes. Here you can see what Entity Framework generated for you. Relations were detected automatically – that's why we needed foreign keys. The names of the classes and their members are not nice yet. After some modifications my class model looks like the following diagram. Note that I removed the attendees navigation property from the person class. Now my classes look nice and they follow the conventions I use when naming classes and their members.

NB! Don't forget to look at the properties of your classes (in the Properties window) and modify their entity set names if the set names contain numbers (I changed the set name for Entity from Entity1 to Entities).

6. Let's test! Now let's write a simple testing program to see if MySQL data runs through Entity Framework 4.0 as expected. My program looks for events that I attended.
using (var context = new MySqlEntities())
{
    var myEvents = from e in context.Events
                   from a in e.Attendees
                   where a.Person.FirstName == "Gunnar" &&
                         a.Person.LastName == "Peipman"
                   select e;

    Console.WriteLine("My events: ");

    foreach (var e in myEvents)
    {
        Console.WriteLine(e.Title);
    }
}

Console.ReadKey();

When I run it I get the result shown in the screenshot on the right. I checked against the database and these results are correct. On the first run the connector seems slow, but this is only a first-run effect. Once the connector has been loaded into memory by Entity Framework it works fast from that point on. Now let's see what we have to do to get our program working in hosting and cloud environments where the MySQL connector is not installed.

Deploying the application to hosting and cloud environments

If your hosting or cloud environment has no MySQL connector installed, you have to ship the MySQL connector assemblies with your project. Add the following assemblies to your project's bin folder and include them in your project (otherwise they are not packaged by WebDeploy and the Azure tools):

MySQL.Data
MySQL.Data.Entity
MySQL.Web

You can also add references to these assemblies and mark the references as Copy Local so these assemblies are copied to the binary folder of your application. If you reference them this way, you don't have to include them in your project from the bin folder.

Also add the following block to your application configuration file (note that the Version attribute must match the connector assembly you actually deploy):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
...
  <system.data>
    <DbProviderFactories>
      <add
        name="MySQL Data Provider"
        invariant="MySql.Data.MySqlClient"
        description=".Net Framework Data Provider for MySQL"
        type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data,
              Version=6.2.0.0, Culture=neutral,
              PublicKeyToken=c5687fc88969c44d"
      />
    </DbProviderFactories>
  </system.data>
...
</configuration>

Conclusion

It was not hard to get the MySQL connector installed and MySQL connected to Entity Framework 4.0. To use the full power of Entity Framework we used the InnoDB engine, because it supports foreign keys. It was also easy to query our model. To get our project online we needed only some easy modifications to our project and configuration files.
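One more thing the wizard generates that is worth checking when you deploy is the Entity Framework connection string. Below is a sketch of what it typically looks like; the model name (MyModel), database, user id and password are placeholders, not values from my project:

<connectionStrings>
  <!-- Sketch: MyModel, events, myuser and mypass are placeholders -->
  <add name="MySqlEntities"
       providerName="System.Data.EntityClient"
       connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=MySql.Data.MySqlClient;provider connection string=&quot;server=localhost;user id=myuser;password=mypass;database=events&quot;" />
</connectionStrings>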

    Read the article

  • Convert DVD to MP4 / H.264 with HD Decrypter and Handbrake

    - by DigitalGeekery
Are you looking for a way to convert your DVD collection to high quality MP4 files? Today we are going to take a look at using DVDFab HD Decrypter along with Handbrake to convert DVDs to MP4 using the H.264 codec.

Process Overview

Handbrake is a great file conversion application, but it unfortunately can't handle DVD copy protection. For that we will use DVDFab's HD Decrypter. HD Decrypter is the always-free portion of the DVDFab application. What HD Decrypter will do is remove the copy protection from your DVD and copy the VIDEO_TS and AUDIO_TS folders to your hard drive. Once the copy protection is gone, we will use Handbrake to convert the files to MP4 format with H.264 compression.

Note: You'll get full access to all the options in DVDFab during the 30-day trial period. However, the HD Decrypter portion is free and will continue to work after the trial ends.

Ripping the DVD

Install both Handbrake and DVDFab HD Decrypter (download links below). Once the applications are installed, place your DVD into your DVD drive and open DVDFab. On the welcome screen, click "Start DVDFab." You'll be prompted to choose your region; click "OK." The disc is analyzed and opened, and you'll be brought to the main interface. Make sure you have the Full Disc option selected in the left panel and "Copy DVD-Video (VIDEO_TS folder)" is selected. Click "Start." Don't be confused by the "DVD to DVD" option pop-up; we won't actually be burning to DVD. The HD Decrypter portion of the DVDFab suite is part of the DVD to DVD option. Click "OK." The DVD will be ripped to your hard drive. When the copy process is complete, you'll be prompted to insert media to start the write process. We aren't going to be burning to disc, so just click Cancel, then close out of DVDFab.

Converting to MP4

Now we are ready to convert. Open Handbrake and click the "Source" button at the top left. Select DVD / VIDEO_TS folder from the drop-down list. Now we need to browse to the location where DVDFab HD Decrypter copied your movie. By default, that location will be the \DVDFab\Temp\FullDisc directory in your Documents folder. For example, in Windows 7 it would be: C:\Users\%username%\Documents\DVDFab\Temp\FullDisc\[Name of Your DVD]. Select the folder and click "OK." You may be prompted to set a default path in Handbrake; this is an optional step, so just click "OK." If you'd like to set a default destination folder, go to Tools on the top menu and select Options. On the General tab, click "Browse" to select a destination output folder. Click "Close" when finished.

Next, click the drop-down list next to "Title" and select the title that matches the length of the movie. You may see more than one title with a similar length; if so, consult the DVD information, or a site like IMDB.com, to find the proper movie title length. Select your container under Output Settings; this will be your final output file extension. We will be using MP4 for this example. You also have the option of MKV.

If you didn't set up a default destination folder, you'll need to select one by clicking the "Browse" button. You can manually customize the output file name and change the output file extension to .mp4 (unless you prefer the iPod-friendly .m4v extension).

Settings

There are a variety of custom settings that can be changed either through the tabs listed under Output Settings, or by selecting one of the Presets to the right.
If you are converting exclusively for any of the devices listed in the preset list, simply click on that device and the settings will be automatically applied in the Output Settings tabs. For more universal (non-Apple) devices or output, select the Normal profile.

For the most part, the presets will do quite nicely. However, you can customize the settings further if you'd like. The Picture tab allows you to tweak the size or cropping region; you must change Anamorphic to Loose or Custom to change the size. The Video tab allows you to choose your codec, with H.264 as the default. You also have the option to choose a target (output) size. The Constant Quality setting is recommended to be between 59% and 63%; anything over 70% will likely result in an output file larger than the input without any improvement in quality. On the Subtitles tab, you can select an available subtitle from the drop-down list and click "Add" to add it to the output file.

When you've finished any customizations you are ready to begin the conversion process. Click "Start." A command window will open so you can follow the process. You'll probably want to find something else to do in the meantime, as the process could take a couple of hours. When the process completes, you're ready to watch your video.

Although it's a time-consuming process that involves a couple of steps, this method will give you high quality H.264 video files. If you want to rip and burn your DVDs to ISO, check out our article on how to rip and convert DVDs to an ISO image.

Links

Download DVDFab HD Decrypter (Part of the DVDFab suite)
Download Handbrake
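If you would rather script the conversion than click through the GUI, Handbrake also ships a command-line version that can express the same settings. Here's a sketch only: the input path and title number are placeholders, and flag details vary between Handbrake builds, so check HandBrakeCLI --help on yours:

HandBrakeCLI -i "C:\Users\%username%\Documents\DVDFab\Temp\FullDisc\MY_MOVIE\VIDEO_TS" -t 1 -o "C:\Video\my_movie.mp4" -f mp4 -e x264 -q 20

Here -i points at the ripped VIDEO_TS folder, -t picks the title you identified in Handbrake's Title list, -f mp4 selects the container, -e x264 selects the H.264 encoder, and -q sets the constant-quality value (a rate factor in current builds; older builds interpreted it as a percentage).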

    Read the article

  • New Features in ASP.NET Web API 2 - Part I

    - by dwahlin
I’m a big fan of ASP.NET Web API. It provides a quick yet powerful way to build RESTful HTTP services that can easily be consumed by a variety of clients. While it’s simple to get started with, it has a wealth of features, such as filters, formatters, and message handlers, that can be used to extend it when needed. In this post I’m going to provide a quick walk-through of some of the key new features in version 2. I’ll focus on two of my favorite features, which are related to routing and HTTP responses, and cover additional features in a future post.

Attribute Routing

Routing has been a core feature of Web API since its initial release and something that’s built into new Web API projects out of the box. However, there are a few scenarios where defining routes can be challenging, such as nested routes (more on that in a moment) and any situation where a lot of custom routes have to be defined. For this example, let’s assume that you’d like to define the following nested route:

/customers/1/orders

This type of route would select a customer with an Id of 1 and then return all of their orders. Defining this type of route in the standard WebApiConfig class is certainly possible, but it isn’t the easiest thing to do for people who don’t understand routing well. Here’s an example of how the route shown above could be defined:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "CustomerOrdersApiGet",
            routeTemplate: "api/customers/{custID}/orders",
            defaults: new { custID = 0, controller = "Customers", action = "Orders" }
        );

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        GlobalConfiguration.Configuration.Formatters.Insert(0, new JsonpFormatter());
    }
}

With attribute-based routing, defining these types of nested routes is greatly simplified. To get started you first need to make a call to the new MapHttpAttributeRoutes() method in the standard WebApiConfig class (or a custom class that you may have created to define your routes), as shown next:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Allow for attribute based routes
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}

Once attribute-based routes are configured, you can apply the Route attribute to one or more controller actions.
Here’s an example:

[HttpGet]
[Route("customers/{custId:int}/orders")]
public List<Order> Orders(int custId)
{
    var orders = _Repository.GetOrders(custId);
    if (orders == null)
    {
        throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
    }
    return orders;
}

This example maps the custId route parameter to the custId parameter in the Orders() method and also ensures that the route parameter is typed as an integer. The Orders() method can be called using the following route:

/customers/2/orders

While this is extremely easy to use and gets the job done, it doesn’t include the default “api” string on the front of the route that you might be used to seeing. You could add “api” in front of the route and make it “api/customers/{custId:int}/orders”, but then you’d have to repeat that across other attribute-based routes as well. To simplify this type of task you can add the RoutePrefix attribute above the controller class, as shown next, so that “api” (or whatever the custom starting point of your route is) is applied to all attribute routes:

[RoutePrefix("api")]
public class CustomersController : ApiController
{
    [HttpGet]
    [Route("customers/{custId:int}/orders")]
    public List<Order> Orders(int custId)
    {
        var orders = _Repository.GetOrders(custId);
        if (orders == null)
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
        }
        return orders;
    }
}

There’s much more that you can do with attribute-based routing in ASP.NET. Check out the following post by Mike Wasson for more details.

Returning Responses with IHttpActionResult

The first version of Web API provided a way to return custom HttpResponseMessage objects, which were pretty easy to use overall. However, Web API 2 now wraps some of the functionality available in version 1 to simplify the process even more. A new interface named IHttpActionResult (similar to ActionResult in ASP.NET MVC) has been introduced, which can be used as the return type for Web API controller actions. To return a custom response you can use the new helper methods exposed through ApiController, such as:

Ok
NotFound
Exception
Unauthorized
BadRequest
Conflict
Redirect
InvalidModelState

Here’s an example of how IHttpActionResult and the helper methods can be used to clean up code. This is the typical way to return a custom HTTP response in version 1:

public HttpResponseMessage Delete(int id)
{
    var status = _Repository.DeleteCustomer(id);
    if (status)
    {
        return new HttpResponseMessage(HttpStatusCode.OK);
    }
    else
    {
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }
}

With version 2 we can replace HttpResponseMessage with IHttpActionResult and simplify the code quite a bit:

public IHttpActionResult Delete(int id)
{
    var status = _Repository.DeleteCustomer(id);
    if (status)
    {
        //return new HttpResponseMessage(HttpStatusCode.OK);
        return Ok();
    }
    else
    {
        //throw new HttpResponseException(HttpStatusCode.NotFound);
        return NotFound();
    }
}

You can also clean up post (insert) operations using the helper methods.
Here’s a version 1 post action:

public HttpResponseMessage Post([FromBody]Customer cust)
{
    var newCust = _Repository.InsertCustomer(cust);
    if (newCust != null)
    {
        var msg = new HttpResponseMessage(HttpStatusCode.Created);
        msg.Headers.Location = new Uri(Request.RequestUri + newCust.ID.ToString());
        return msg;
    }
    else
    {
        throw new HttpResponseException(HttpStatusCode.Conflict);
    }
}

And this is what the code looks like in version 2:

public IHttpActionResult Post([FromBody]Customer cust)
{
    var newCust = _Repository.InsertCustomer(cust);
    if (newCust != null)
    {
        return Created<Customer>(Request.RequestUri + newCust.ID.ToString(), newCust);
    }
    else
    {
        return Conflict();
    }
}

More details on IHttpActionResult and the different helper methods provided by the ApiController base class can be found here.

Conclusion

Although there are several additional features available in Web API 2 that I could cover (CORS support, for example), this post focused on two of my favorite features. If you have .NET 4.5.1 available then I definitely recommend checking the new features out. Additional articles that cover features in ASP.NET Web API 2 can be found here.
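Since CORS support gets name-dropped in the conclusion, here is a minimal sketch of how it is enabled in Web API 2. This assumes the Microsoft.AspNet.WebApi.Cors NuGet package is installed, and the origin URL is a placeholder:

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Turn on attribute-based CORS support
        config.EnableCors();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}

// Allow cross-origin calls to this controller from a single trusted site
[EnableCors(origins: "http://example.com", headers: "*", methods: "*")]
public class CustomersController : ApiController
{
    public IHttpActionResult Get()
    {
        return Ok();
    }
}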

    Read the article

  • Charms and the App Bar

    - by Dennis Vroegop
Ok. I admit. I made a mistake in the last post about our planespotter app. I dedicated a full part of the hub to Social. I also had a section called Friends, but that made sense since I said that “Friends” is a special group of people that connect to each other through our app and only our app. Social, however, is sharing our spots with Twitter, Facebook and so on. Now, we could write that functionality into a different section of our app, but there is one small problem with that: users don’t expect it there.

Ok, I admit. The mistake was quite deliberate, to give me an excuse to write this part. But still: the mistake is one I see a lot. People try to do things in their application that they shouldn’t be doing. This always strikes me as slightly odd: why do the work when others have already done it for you and you can just use it? After all, good developers are lazy (lazy people will always try to find the easiest way to do something, and in development land this usually means the cleanest and best-supported way…).

So. What is that part that Microsoft has done for us and that we don’t have to do ourselves? The answer lies on the right-hand side of your Win8 screen. This is a screenshot of my tablet (as you can see, I am writing this right now….). When I swipe my finger from outside the right edge of the screen inwards (or move the mouse to the upper right corner), this menu will appear. Next to Settings and the Start menu button we’ll find the Search and the Share charms. These are two ways that your app can share the information it contains with the rest of the world, or at least the rest of your system.

So don’t write a Search feature in your app. Don’t write a Share feature in your app. It’s here already. Users, once they are used to Windows 8, will use that feature and expect it to work. If it doesn’t, they won’t like your app and you can kiss your dreams of everlasting fame goodbye.

So use these two. What are they? Well, simply put, they are parts of a contract. Somewhere in code your app says that it supports Search and Share. When the user selects Share, the system will ask the app currently in the foreground whether it supports this feature. Your app will say, “But why, yes, I do!” Then the system will say, “Ok then, wisecrack, share!” and you will have to provide the system with some information about the format. Other applications have subscribed to be on the receiving end of the Share contract. They have told the system that they support sharing (receiving) and which formats they understand. If one or more of them support the formats you specify, the user will see them. The user clicks/taps the app of their choice and data is moved from your app to the other one. So if you say you support Facebook and Twitter, users can post data from your app to these networks by selecting Share. (I’ll put a small code sketch of the Share contract at the end of this post.)

The same applies to Search. Don’t make a “search” button in your app; use the contract to tell the system that you support search and use that instead. Users will be grateful (remember that bar with men/women/creatures that are waiting for you?). The more people get to know Windows 8, the more they will use this. And if you are one of the people who wrote an app that helped them learn the system, well, that’s even better.

So. We don’t have a Share or a Search button. We do have other buttons. Most important: we probably need a “New Spot” button. And a “Filter” might be useful. Or some way to open the camera so you can add a picture to the spot. Where will we put those?
The answer is the “AppBar”. This is an application- and context-aware menu that slides up from the bottom of the screen when you move your finger/mouse from below the screen into it; from above downwards works just as well. Here you see an example of the appbar from the People app (click on it for a larger version). This appears whenever you slide your finger up from the bottom or down from the top. This is where you put your commands. Remember, this menu is context-aware, so it will change when you are in different parts of your app or when you have selected different items.

There are a few conventions to follow when you create this appbar. First, the items on the right are “general” items, meaning they have little to do with what is on the screen right now. I think this would be a great place to add our “New Spot” icon. On the far left are items associated with the currently selected item or screen. So if you have a spot selected, the button for Add Photo should be visible here, on the left-hand side. Not everything is as clear-cut as this, but this is what you should strive for. Group items together. And please note: this is the only place in Metro design where we are allowed to use lines as separators. So when you want to separate a group of icons from another group, add a line.

Also note the simplicity of the buttons. No colors, no lights or shadows, no 3D. After a couple of years of fancy, almost realistic-looking icons, people have finally decided that hey, this is a virtual world: it’s ok to look virtual as well. So make things as readable and clear as possible and don’t try to duplicate nature. It’s all about the information, remember? (If you don’t remember, I’d like to point you to an older blog post of mine about the what and why of Metro.)

So… think about the buttons a bit, and think about Share and Search. What will you put there? Remember: this is the way the users interact with your apps, and while you shouldn’t judge a book by its cover when it comes to people, this isn’t entirely so when it comes to apps. People DO judge an app by its looks and the way it feels. Take advantage of that. History has shown that a crappy app with a GREAT user interface gets better reviews than a GREAT app with a lousy UI… I know: developers will find this extremely unfair, but that’s the world we live in (no, I am not saying you should deliver rubbish apps).

Next time: we’ll start by building the darn thing!
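As promised, a tiny sketch of the share-source side of the contract. This is not the planespotter code (that comes in a later post), just a minimal, hedged example; the page class and the spot text are made up for illustration:

using Windows.ApplicationModel.DataTransfer;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;

public sealed partial class SpotPage : Page
{
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        // Register this page as a share source; the system calls back
        // when the user picks the Share charm while we are in the foreground.
        DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
    }

    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
        DataTransferManager.GetForCurrentView().DataRequested -= OnDataRequested;
    }

    private void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
    {
        // Describe the data; target apps that understand plain text will show up.
        args.Request.Data.Properties.Title = "Plane spotted!";
        args.Request.Data.Properties.Description = "Shared from our planespotter app";
        args.Request.Data.SetText("Just spotted a Boeing 747 over Schiphol.");
    }
}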

    Read the article

  • Applications: How to create a custom dialog box for Windows Mobile 6 (native)

    - by TechTwaddle
Ashraf, on the MSDN forum, asks, “Is there a way to make a default choice for the messagebox that happens after a period of time if the user doesn't choose (clicked) Yes or No buttons?”

To elaborate, the requirement is to show a message box to the user with certain options to select, and if the user does not respond within a predefined time limit (say 8 seconds) then the message box must dismiss itself and select a default option. Such functionality is not available with the MessageBox() API; you will have to write your own custom dialog box. Now, creating a dialog box is quite a simple task using the DialogBox() API, and we have been creating full-screen dialog boxes all the while. So how is this custom message box any different? It’s not much different from a regular dialog box except for a few changes in its properties. First, it has a title bar but no buttons on the title bar (no ‘x’ or ‘ok’ button), it doesn’t occupy the full screen, and it contains the controls that you put into it, thus justifying the title ‘custom’. So in this post we create a custom dialog box with two buttons, ‘Black’ and ‘White’. The user is given 8 seconds to select one of those colours; if the user doesn’t make a selection in 8 seconds, the default option ‘Black’ is selected.

Before going into the implementation, here is a video of how the dialog box works: Custom dialog box

To start off, add a new dialog resource to your application, size it appropriately and add whatever controls you need to the dialog. In my case, I added two static text labels and two buttons, as below.

Now we need to write the window procedure for this dialog; here is the complete function:

BOOL CALLBACK CustomDialogProc(HWND hDlg, UINT uMessage, WPARAM wParam, LPARAM lParam)
{
    int wmID, wmEvent;
    PAINTSTRUCT ps;
    HDC hdc;
    static int timeCount = 0;

    switch (uMessage)
    {
        case WM_INITDIALOG:
        {
            SHINITDLGINFO shidi;
            memset(&shidi, 0, sizeof(shidi));
            shidi.dwMask = SHIDIM_FLAGS;
            //shidi.dwFlags = SHIDIF_DONEBUTTON | SHIDIF_SIPDOWN | SHIDIF_SIZEDLGFULLSCREEN | SHIDIF_EMPTYMENU;
            shidi.dwFlags = SHIDIF_SIPDOWN | SHIDIF_EMPTYMENU;
            shidi.hDlg = hDlg;
            SHInitDialog(&shidi);
            SHDoneButton(hDlg, SHDB_HIDE);

            timeCount = 0;
            SetWindowText(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING), L"Time remaining: 8 second(s)");
            SetTimer(hDlg, MY_TIMER, 1000, NULL);
        }
        return TRUE;

        case WM_COMMAND:
        {
            wmID = LOWORD(wParam);
            wmEvent = HIWORD(wParam);
            switch (wmID)
            {
                case IDC_BUTTON_BLACK:
                    KillTimer(hDlg, MY_TIMER);
                    EndDialog(hDlg, IDC_BUTTON_BLACK);
                    break;

                case IDC_BUTTON_WHITE:
                    KillTimer(hDlg, MY_TIMER);
                    EndDialog(hDlg, IDC_BUTTON_WHITE);
                    break;
            }
        }
        break;

        case WM_TIMER:
        {
            if (wParam == MY_TIMER)
            {
                WCHAR wszText[128];
                memset(&wszText, 0, sizeof(wszText));
                timeCount++;

                // 8 seconds are over: dismiss the dialog and select the default value
                if (timeCount >= 8)
                {
                    KillTimer(hDlg, MY_TIMER);
                    EndDialog(hDlg, IDC_BUTTON_BLACK_DEF);
                }

                wsprintf(wszText, L"Time remaining: %d second(s)", 8 - timeCount);
                SetWindowText(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING), wszText);
                UpdateWindow(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING));
            }
        }
        break;

        case WM_PAINT:
        {
            hdc = BeginPaint(hDlg, &ps);
            EndPaint(hDlg, &ps);
        }
        break;
    }
    return FALSE;
}

The MSDN documentation mentions that you need to specify the flag WS_NONAVDONEBUTTON, but I got an error saying that the value could not be found, so we can ignore this for now. Next, while calling SHInitDialog() for your custom dialog, make sure that you don’t specify SHIDIF_DONEBUTTON in the dwFlags member of the SHINITDLGINFO structure; this flag makes the ‘ok’ button appear on the dialog title bar. Finally, we need to call SHDoneButton() with the SHDB_HIDE flag to, well, hide the Done button. The ‘Done’ button is the same as the ‘ok’ button, so this step might seem redundant, and the dialog works fine without calling SHDoneButton() too, but it’s better to stick with the documentation (;

So you can see that we have followed all these steps above, under WM_INITDIALOG. We also set up a few things, like a variable to keep track of the time, and started a one-second timer. Every time the timer fires, we receive a WM_TIMER message. We then update the static label displaying the amount of time left to the user. If 8 seconds go by without the user selecting any option, we kill the timer and end the dialog with IDC_BUTTON_BLACK_DEF. This is just a #define’d integer value; make sure it’s unique. You’ll see why this is important. If the user makes a selection, either Black or White, we kill the timer and end the dialog with whichever selection the user made, that is, either IDC_BUTTON_BLACK or IDC_BUTTON_WHITE.

Ok, so now our custom dialog is ready to be used. I invoke the custom dialog from a menu entry in the main window as below:

case IDM_MENU_CUSTOMDLG:
{
    int ret = DialogBox(g_hInst, MAKEINTRESOURCE(IDD_CUSTOM_DIALOG), hWnd, CustomDialogProc);
    switch (ret)
    {
        case IDC_BUTTON_BLACK_DEF:
            SetWindowText(g_hStaticSelection, L"You Selected: Black (default)");
            break;

        case IDC_BUTTON_BLACK:
            SetWindowText(g_hStaticSelection, L"You Selected: Black");
            break;

        case IDC_BUTTON_WHITE:
            SetWindowText(g_hStaticSelection, L"You Selected: White");
            break;
    }
    UpdateWindow(g_hStaticSelection);
}
break;

So you see why ending the dialog with the corresponding value was important: that’s what the DialogBox() API returns. In the main window I update a static text label to show which option was selected. I cranked this out in about an hour, and unfortunately don’t have time for a managed C# version. That will have to be another post, if I manage to get it working that is (;

    Read the article

  • Neo4J and Azure and VS2012 and Windows 8

    - by Chris Skardon
Now, I know that this has been written about, but both of the main places (http://www.richard-banks.org/2011/02/running-neo4j-on-azure.html and http://blog.neo4j.org/2011/02/announcing-neo4j-on-windows-azure.html) utilise VS2010, and well, I’m on VS2012 and Windows 8. Not that I think Win 8 had anything to do with it really, anyhews!

I’m going to begin from the beginning; this is my first foray into running something on Azure, so it’s been a bit of a learning curve. But luckily the Neo4j guys have got us started, so let’s download the VS2010 solution: http://neo4j.org/get?file=Neo4j.Azure.Server.zip

OK, the other thing we’ll need is the VS2012 Azure SDK, so let’s get that as well: http://www.windowsazure.com/en-us/develop/downloads/ (I just did the full install).

Now, unzip the VS2010 solution and let’s open it in VS2012: <your location>\Neo4j.Azure.Server\Neo4j.Azure.Server.sln

One-way upgrade? Yer! Ignore the migration report – we don’t care! Let’s build that sucker… Ahhh, 14 errors… “WindowsAzure does not exist in the namespace ‘Microsoft’”. Not a problem, right? We’ve installed the SDK, we just need to update the references.

We can ignore the test projects — they don’t use Azure. We’re interested in the other projects, so what we’ll do is remove the broken references and add the correct ones back. Expand the References section of each project, hunt out those yellow exclamation marks, and delete them! You’ll need to add the right ones back in (listed below). When you go to the ‘Add Reference’ dialog, make sure you have ‘Assemblies’ and ‘Framework’ selected before you search (and search for ‘microsoft.win’ to narrow it down).

The references you need for each project are:

CollectDiagnosticsData:
- Microsoft.WindowsAzure.Diagnostics
- Microsoft.WindowsAzure.StorageClient

Diversify.WindowsAzure.ServiceRuntime:
- Microsoft.WindowsAzure.CloudDrive
- Microsoft.WindowsAzure.ServiceRuntime
- Microsoft.WindowsAzure.StorageClient

Right, so let’s build again… Sweet! No errors.

Now we need to set up our blobs. I’m assuming you are using the most up-to-date Java you happened to have downloaded :) — in my case that’s JRE7, which is located in: C:\Program Files (x86)\Java\jre7

So, zip up that folder into whatever you want to call it (I went with jre7.zip) and stick it in a temp folder for now. Into that same temp folder I also copied the Neo4j zip I was using: neo4j-community-1.7.2-windows.zip

OK, now we need to get these into our blob storage. This is where a lot of stuff becomes unstuck — I didn’t find any applications that helped me use the blob storage: one would crash (because my internet speed is so slow) and the other just didn’t work. Sure, it looked like it had worked, but when push came to shove it didn’t. So this is how I got my files into blob storage (local first):

1. Run the ‘Storage Emulator’ (just search for that in the start menu).
2. That takes a little while to start up, so fire up another instance of Visual Studio in the meantime and create a new console application.
3. Manage NuGet Packages for that solution and add ‘Windows Azure Storage’.

Now you’re set up to add the code:

public static void Main()
{
    CloudStorageAccount cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount;
    CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient();
    client.Timeout = TimeSpan.FromMinutes(30);

    CloudBlobContainer container = client.GetContainerReference("neo4j");
    container.CreateIfNotExist(); // GetContainerReference alone does not create the container

    UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip");
    UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip");
}

private static void UploadBlob(CloudBlobContainer container, string blobName, string filename)
{
    CloudBlob blob = container.GetBlobReference(blobName);

    using (FileStream fileStream = File.OpenRead(filename))
        blob.UploadFromStream(fileStream);
}

This will upload the files to your local storage account. (To switch to a real Azure storage account, you’ll need to create one and use those credentials when you construct your CloudStorageAccount above — see the sketch at the end of this post.)

To test that you’ve got them uploaded correctly, go to: http://localhost:10000/devstoreaccount1/neo4j/jre7.zip and you will hopefully download the zip file you just uploaded.

Now that those files are there, we are ready for some final configuration…

Right-click on the Neo4jServerHost role in the Neo4j.Azure.Server cloud project and click on the ‘Settings’ tab; we’ll need to make some changes. By default, the 1.7.2 edition of Neo4j unzips to: neo4j-community-1.7.2. So we need to update all the ‘neo4j-1.3.M02’ directories to be ‘neo4j-community-1.7.2’, and we also need to update the Java runtime location. So we start with this: and end with this:

Now, I also changed the Endpoints settings to be HTTP (from TCP) and to have a port of 7410 (mainly because that’s straight down on the numpad).

The last ‘gotcha’ is some hard-coded consts, which had me looking for ages. They are in the ‘ConfigSettings’ class of the ‘Neo4jServerHost’ project, and the ones we’re interested in are:

Neo4jFileName
JavaZipFileName

Change those both to what they should be.

OK, nearly there (I promise)! Run the ‘Compute Emulator’ (same deal with the Start menu). In your system tray you should have an Azure icon; when the compute emulator is up and running, right-click on the icon and select ‘Show Compute Emulator UI’.

The last steps! Make sure the ‘Neo4j.Azure.Server’ cloud project is set up as the start project and let’s hit F5. Tension mounts, the build takes place (you need to accept the UAC warning) and VS does its stuff. If you look at the Compute Emulator UI you’ll see some log output (which you’ll need if this goes awry – but it won’t, don’t worry!). In a bit, the console and a Java window will pop up. Then the console will bog off, leaving just the Java one, and if we switch back to the Compute Emulator UI and scroll up we should be able to see a line telling us the port number we’ve been assigned (in my case 7411).

(If you can’t see it, don’t worry: press CTRL+A on the emulator, then CTRL+C, paste all the text into something like Notepad, then just do a Find for ‘port’ — you’ll soon see it.)

Go to your favourite browser and head to: http://localhost:YOURPORT/ and you should see the WebAdmin!

See you on the cloud side hopefully!

Chris

PS Other gotchas!
OK, I’ve been caught out a couple of times:

- I had an instance of Neo4j running as a service on my machine. The Azure instance wanted to run the https version of the server on the same port the service was running on, so Java would complain that the port was already in use.
- The first time I converted the project, it didn’t update the version of the Azure library to load in the App.config of the Neo4jServerHost project, and VS would throw an exception saying it couldn’t find the Azure dll version 1.0.0.0.
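And the sketch promised above — pointing the blob-upload console app at a real Azure storage account instead of the emulator. The account name and key are placeholders:

// Replace CloudStorageAccount.DevelopmentStorageAccount with a parsed
// connection string for your real storage account:
CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;" +
    "AccountName=youraccountname;" +
    "AccountKey=yourbase64accountkey==");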

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around (probably debugging).

1. Create certificate

1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”.
1.2 In the drop-down list of certificates, choose <create> at the bottom.
1.3 Follow the instructions; you can set it up with default values.
1.4 When done, choose the certificate and click “Copy to File…” as seen on the left of the picture above.
1.5 Save the file with any name you want. Next we will save it to local storage so we can import it into our solution through the Azure configuration manager in step 3.

2. Save certificate to local storage

Now we need to attach it to our local certificate store to be able to reach it from the configuration manager in Visual Studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137

In order to view the Certificates store on the local computer, perform the following steps:

1. Click Start, and then click Run.
2. Type "MMC.EXE" (without the quotation marks) and click OK.
3. Click Console in the new MMC you created, and then click Add/Remove Snap-in.
4. In the new window, click Add.
5. Highlight the Certificates snap-in, and then click Add.
6. Choose the Computer option and click Next.
7. Select Local Computer on the next screen, and then click OK.
8. Click Close, and then click OK.

You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use.

Now that you have access to the Certificates snap-in, you can import the server certificate into your computer's certificate store by following these steps:

1. Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed; if not, that is because there are no certificates installed.
2. Right-click Certificates (or Personal if that option does not exist).
3. Choose All Tasks, and then click Import.
4. When the wizard starts, click Next.
5. Browse to the PFX file you created containing your server certificate and private key. Click Next.
6. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key.
7. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, it will also be added to this store.
8. Click Next. You should see a summary screen showing what the wizard is about to do. If this information is correct, click Finish.

You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate).

Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps:

1. Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on.
Right-click on the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing that contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.   3. Import the certificate to you visual studio project. 3.1 Now right click your equivalent to the MvcWebRole1 (as seen in the first picture under the red oval) and choose properties. 3.2 Choose Certificates. Right click the ellipsis to the right of the “thumbprint” and you should be able to select your newly created certificate here. After selecting it- save the file.   4. Upload the certificate to your Azure subscription. 4.1 Go to the azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.     5. Connect to server. Since I tried to use account settings(have to use another name) we have to set up a new name for the connection. No biggie. 5.1 Go to azure management portal, select your service and in the bottom menu, choose “REMOTE”. This will display the configuration for remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change It here it might be good to choose download and replace the one in your project. Set a name that is not your windows azure account name and not Administrator. 5.2 Goto visual studio, click Server Explorer. Choose as selected in the picture below and click “COnnect using remote desktop”.   5.2 You will now be able to log in with the name and password set up in step 5.1. and voila! Windows server 2012, IIS and other nice stuff!   To do this one I’ve been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can collect some of this information and additional one.
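For reference, the “REMOTE” configuration in step 5.1 (and the “Configure remote desktop” dialog in step 1) works by writing a set of plugin settings into ServiceConfiguration.cscfg. Here is a rough sketch of what the relevant part of the file tends to look like; the setting names come from the RemoteAccess/RemoteForwarder plugins as I remember them, and the user name, encrypted password, expiration date and thumbprint below are placeholders rather than values from this walkthrough:

<Role name="MvcWebRole1">
  <ConfigurationSettings>
    <!-- Turns on the RemoteAccess plugin for this role -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="myrdpuser" />
    <!-- The password is stored encrypted with the certificate uploaded in step 4 -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="MIIB...==" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2013-12-31T23:59:59.0000000+01:00" />
    <!-- Exactly one role in the service forwards the RDP traffic -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
  </ConfigurationSettings>
  <Certificates>
    <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption"
                 thumbprint="0000000000000000000000000000000000000000"
                 thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>

If the portal and your project ever disagree, this is the section to diff after downloading the configuration in step 5.1.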

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
The W8 preview is now installed and I am enjoying it.  I remember the learning curve of my first unix machine back in the eighties; this ain't that. It is normal for me to do the first OS install with a keyboard and low end monitor...you never know what you'll encounter out in the field.  The OS took to it like a fish to water.  I used a low end Intel DP55W motherboard I gathered on the cheap, an LGA 1156 i5 from the used bin, a pair of 6 gig DDR3 sticks, a Rosewill 550 watt power supply, a cheap used twenty buck sub-200 gig WD SATA drive, a half working DVD burner and an Asus fanless Nvidia vid card, not a great one but sub 50.00 on newey eggey...I did have to hunt the MS forums for a key and of course to activate the thing; if DOS had needed this outmoded ritual, we would still be on CP/M and Osborne would be a household name. Of course little do people know that this ritual was common as far back as the seventies on AT&T unix installs....not really, but it was possible. I used to joke, back when I ran a BBS, about what hell would have been wrought had DOS 3.2 machines been required to dial into my BBS to send Fido mail to MS and wait for an acknowledgement.  All in all the thing was pushing a seven on the MS Richter scale, not including the vid card; sadly that came in at just a tad over three....I wanted to evaluate it as a possible replacement for critical machines that in the past went down due to a vid card fan failure....you have no idea what a customer thinks when you show them a failed vid card fan..."you mean that little plastic piece of junk caused all this!!??!!!"...yea man.  Some production machines don't need any sort of vid; I will at least keep it on the maybe list for those. MTBF is a very important factor; some big box stores should put percentage-of-failures-within-24-months estimates on the outside of the carton for sure.  And a warning that the power supplies are already at their limit.  Let's face it, today even 550W can be iffy.
A few neat eye candy improvements over the earlier Windows are nice. The Metro screen is nice; anyone who has used a newer phone recently will intuitively drag their fingers across the screen....lot of good that did with no mouse or touch screen though.  Lucky me, I have been using Windows since day one; I still have a copy of Win 2.0 (and every other version) for no good reason.  Still, the old 'ix collection of disks is much larger; recompiling a kernel is another silly ritual, same machine, different day, same recompile...argh. RH is my all time fav; Mandrake was always missing something, like it rewrote the init file or something; Novell is OK as long as you stay on the beaten path, and of course Ubuntu normally recompiles with the same errors, consistently....makes life easy that way....no errors on Windows Eight, just a screen that did not match the installed hardware. Naturally I alt-tabbed right out of it, then hit the flag key to find the start menu....no start button. I miss the start button already. Keyboard cowboy funnin' and I was browsing the hard drive; nothing stunning there, I like that, means I can find stuff. Only I can't find what I want, the start button....the start menu is that first screen for touch tablets. No biggie for the everyday user; that is where they will want to be, I can see that. Admins won't want to be there; it is easy enough to get the control panel a bazillion other ways though, just not the start button. (See a pattern here?)
Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the Explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts. Bottom line, I love Seven and I'll love Eight even more!...very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions once said "oh I get it, so every piece you put in there is basically a hundred bucks, right?")...ok, sure, pretty much, more or less, well, ya dude.  It will be WAY past October till I get a real touch screen, but I did pick up a pair of cheap Tatungs so I can try the new main start screen. I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on Mars.  OK, fine, they are way smallish, and I don't expect multitouch to work, but we are talking a few percent of the cost of a new 21 inch ViewSonic touch screen.  Will this OS be a game changer?  I don't know.  Bottom line, with all the pads and droids in the world, it is more of a catch-up move at first glance.  Not something MS is used to.  An app store?  I can see MS's motivation; the others have it.  I gather there will not be gadgets there; go ahead and see what MS did to the once populated gadget page...go ahead, google gadgets and take a gander. There used to be hundreds of gadgets; they are already gone.  They replaced gadgets?  Sort of. I'll drop that; it's a bit of a sore point for me.  Of more interest was what happened when I downloaded stuff off CodePlex and some other normal programs that I like, like Orbitron, top o' my list!!...cardware it is...anyways, click on the exe, get a screen, normal for Windows; this one indicated that I was not running a normal Windows program and had a button to exit the install. Naw, I hit details; a hidden run program came into view anyways....great, my path to the normal Windows has detected a program tha.....yea ok, ACL is on, fine. Moving along, I got Orbitron installed in record time and was tracking the ISS on the newest Microsoft OS, beta of course; felt like the first time I set up BSD all those years ago...FUN!!...I suppose I gotta start to think about budgeting for the real OS when it comes out in October; by then I should have a Raspberry Pi and be done with Fedora remixed.  Of course that sounds like fun too!!  I would use this OS on a tablet or phone.  I don't like the idea of being herded to an app store; don't like that on anything. We are Americans and want real choices, not marketed hype, unless you are younger with OPM (other people's money).   This OS would be neat on a Zune, but I suspect the Zune is a goner. I am rooting for Microsoft; after all, their default password is not admin anymore, nor alpine, it's blank. Others force a password; my first fawn password was so long I could not even log into it with the password in front of me. Who the heck uses %$# anyways, and if I was writing a brute force attack, what the heck kinda impasse is that anyways at .00001 microseconds of a code execution cycle (just a non-qualified number, not a real clock speed)....AI is where it will be before too long, and MS is on that path. Perhaps soon someone will sit down and write an app for the Kinect that watches your eyes while you scan the new main start screen, clicking on the big E icon when you blink.....boy is that going to be fun!!!! Sure. Blink, dammit, blink, dammit...... OPM no doubt. I like Windows Eight; we are moving forwards.  Better keep a close eye on Ubuntu.
The real clincher comes when open source becomes paid source......don't blink, I already see plenty of very expensive 'ix apps, some even in app stores already.  More to come.......

    Read the article

  • Nginx and client certificates from hierarchical OpenSSL-based certification authorities

    - by Fmy Oen
I'm trying to set up a root certification authority and a subordinate certification authority, and to generate client certificates signed by either CA that nginx 0.7.67 on Debian Squeeze will accept. My problem is that a root-CA-signed client certificate works fine, while a subordinate-CA-signed one results in "400 Bad Request. The SSL certificate error".

Step 1: nginx virtual host configuration:

server {
    server_name test.local;
    access_log /var/log/nginx/test.access.log;
    listen 443 default ssl;
    keepalive_timeout 70;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/client.pem;
    ssl_verify_client on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    location / {
        proxy_pass http://testsite.local/;
    }
}

Step 2: PKI infrastructure organization for both root and subordinate CA (based on this article):

# mkdir ~/pki && cd ~/pki
# mkdir rootCA subCA
# cp -v /etc/ssl/openssl.cnf rootCA/
# cd rootCA/
# mkdir certs private crl newcerts; touch serial; echo 01 > serial; touch index.txt; touch crlnumber; echo 01 > crlnumber
# cp -Rvp * ../subCA/

Almost no changes were made to rootCA/openssl.cnf:

[ CA_default ]
dir         = .                        # Where everything is kept
...
certificate = $dir/certs/rootca.crt    # The CA certificate
...
private_key = $dir/private/rootca.key  # The private key

and to subCA/openssl.cnf:

[ CA_default ]
dir         = .                        # Where everything is kept
...
certificate = $dir/certs/subca.crt     # The CA certificate
...
private_key = $dir/private/subca.key   # The private key

Step 3: Self-signed root CA certificate generation:

# openssl genrsa -out ./private/rootca.key -des3 2048
# openssl req -x509 -new -key ./private/rootca.key -out certs/rootca.crt -config openssl.cnf

Enter pass phrase for ./private/rootca.key:
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
For some fields there will be a default value; if you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:rootca
Email Address []:

Step 4: Subordinate CA certificate generation:

# cd ../subCA
# openssl genrsa -out ./private/subca.key -des3 2048
# openssl req -new -key ./private/subca.key -out subca.csr -config openssl.cnf

(same prompts as above, answered with Common Name (eg, YOUR name) []:subca; the 'extra' attributes, A challenge password and An optional company name, were left blank)

Step 5: Subordinate CA certificate signing by the root CA certificate:

# cd ../rootCA/
# openssl ca -in ../subCA/subca.csr -extensions v3_ca -config openssl.cnf

Using configuration from openssl.cnf
Enter pass phrase for ./private/rootca.key:
Check that the request matches the signature
Signature ok
Certificate Details:
    Serial Number: 1 (0x1)
    Validity
        Not Before: Feb  4 10:49:43 2013 GMT
        Not After : Feb  4 10:49:43 2014 GMT
    Subject:
        countryName         = AU
        stateOrProvinceName = Some-State
        organizationName    = Internet Widgits Pty Ltd
        commonName          = subca
    X509v3 extensions:
        X509v3 Subject Key Identifier:
            C9:E2:AC:31:53:81:86:3F:CD:F8:3D:47:10:FC:E5:8E:C2:DA:A9:20
        X509v3 Authority Key Identifier:
            keyid:E9:50:E6:BF:57:03:EA:6E:8F:21:23:86:BB:44:3D:9F:8F:4A:8B:F2
            DirName:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
            serial:9F:FB:56:66:8D:D3:8F:11
        X509v3 Basic Constraints:
            CA:TRUE
Certificate is to be certified until Feb  4 10:49:43 2014 GMT (365 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
...

# cd ../subCA/
# cp -v ../rootCA/newcerts/01.pem certs/subca.crt

Step 6: Server certificate generation and signing by the root CA (for the nginx virtual host):

# cd ../rootCA
# openssl genrsa -out ./private/server.key -des3 2048
# openssl req -new -key ./private/server.key -out server.csr -config openssl.cnf
(same prompts, answered with Common Name (eg, YOUR name) []:test.local)
# openssl ca -in server.csr -out certs/server.crt -config openssl.cnf

Step 7: Client #1 certificate generation and signing by the root CA:

# openssl genrsa -out ./private/client1.key -des3 2048
# openssl req -new -key ./private/client1.key -out client1.csr -config openssl.cnf
(same prompts, answered with Common Name (eg, YOUR name) []:Client #1)
# openssl ca -in client1.csr -out certs/client1.crt -config openssl.cnf

Step 8: Client #1 certificate conversion to PKCS12 format:

# openssl pkcs12 -export -out certs/client1.p12 -inkey private/client1.key -in certs/client1.crt -certfile certs/rootca.crt

Step 9: Client #2 certificate generation and signing by the subordinate CA:

# cd ../subCA/
# openssl genrsa -out ./private/client2.key -des3 2048
# openssl req -new -key ./private/client2.key -out client2.csr -config openssl.cnf
(same prompts, answered with Common Name (eg, YOUR name) []:Client #2)
# openssl ca -in client2.csr -out certs/client2.crt -config openssl.cnf

Step 10: Client #2 certificate conversion to PKCS12 format:

# openssl pkcs12 -export -out certs/client2.p12 -inkey private/client2.key -in certs/client2.crt -certfile certs/subca.crt

Step 11: Passing the server certificate and private key to nginx (performed with OS superuser privileges):

# cd ../rootCA/
# cp -v certs/server.crt /etc/nginx/ssl/
# cp -v private/server.key /etc/nginx/ssl/

Step 12: Passing the root and subordinate CA certificates to nginx (performed with OS superuser privileges):

# cat certs/rootca.crt > /etc/nginx/ssl/client.pem
# cat ../subCA/certs/subca.crt >> /etc/nginx/ssl/client.pem

The client.pem file looks like this:

# cat /etc/nginx/ssl/client.pem
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda
...
-----END CERTIFICATE-----
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
...
-----BEGIN CERTIFICATE-----
MIID4DCCAsigAwIBAgIBATANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET
MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ
dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTA0OTQzWhcNMTQwMjA0
...
-----END CERTIFICATE-----

It looks like everything is working fine:

# service nginx reload
# Reloading nginx configuration:
Enter PEM pass phrase:
# nginx.
#

Step 13: Installing the *.p12 certificates in a browser (Firefox in my case) gives the problem I mentioned above: Client #1 = 200 OK, Client #2 = 400 Bad Request/The SSL certificate error. Any ideas what should I do?
Update 1: Results of SSL connection test attempts:

# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/rootCA/certs/client1.crt -key ~/pki/rootCA/private/client1.key -showcerts

Enter pass phrase for tmp/testcert/client1.key:
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca
verify return:1
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local
verify return:1
---
Certificate chain
 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local
   i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
-----BEGIN CERTIFICATE-----
MIIDpjCCAo6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET
MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ
dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTEwNjAzWhcNMTQwMjA0
...
-----END CERTIFICATE-----
 1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
   i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda
...
-----END CERTIFICATE-----
---
Server certificate
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local
issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
---
Acceptable client certificate CA names
/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca
---
SSL handshake has read 3395 bytes and written 2779 bytes
---
New, TLSv1/SSLv3, Cipher is AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: zlib compression
Expansion: zlib compression
SSL-Session:
    Protocol  : TLSv1
    Cipher    : AES256-SHA
    Session-ID: 15BFC2029691262542FAE95A48078305E76EEE7D586400F8C4F7C516B0F9D967
    Session-ID-ctx:
    Master-Key: 23246CF166E8F3900793F0A2561879E5DB07291F32E99591BA1CF53E6229491FEAE6858BFC9AACAF271D9C3706F139C7
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket:
    0000 - c2 5e 1d d2 b5 6d 40 23-b2 40 89 e4 35 75 70 07   .^...m@#.@..5up.
    0010 - 1b bb 2b e6 e0 b5 ab 10-10 bf 46 6e aa 67 7f 58   ..+.......Fn.g.X
    0020 - cf 0e 65 a4 67 5a 15 ba-aa 93 4e dd 3d 6e 73 4c   ..e.gZ....N.=nsL
    0030 - c5 56 f6 06 24 0f 48 e6-38 36 de f1 b5 31 c5 86   .V..$.H.86...1..
    ...
    0440 - 4c 53 39 e3 92 84 d2 d0-e5 e2 f5 8a 6a a8 86 b1   LS9.........j...

    Compression: 1 (zlib compression)
    Start Time: 1359989684
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---

Everything seems fine with Client #2 and the root CA certificate, but the request returns a 400 Bad Request error:

# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts

Enter pass phrase for tmp/testcert/client2.key:
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca
verify return:1
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local
verify return:1
...
    Compression: 1 (zlib compression)
    Start Time: 1359989989
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---
GET / HTTP/1.0

HTTP/1.1 400 Bad Request
Server: nginx/0.7.67
Date: Mon, 04 Feb 2013 15:00:43 GMT
Content-Type: text/html
Content-Length: 231
Connection: close

<html>
<head><title>400 The SSL certificate error</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>The SSL certificate error</center>
<hr><center>nginx/0.7.67</center>
</body>
</html>
closed

Verification fails with the Client #2 certificate and the subordinate CA certificate:

# openssl s_client -connect test.local:443 -CAfile ~/pki/subCA/certs/subca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts

Enter pass phrase for tmp/testcert/client2.key:
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca
verify error:num=19:self signed certificate in certificate chain
verify return:0
...
    Compression: 1 (zlib compression)
    Start Time: 1359990354
    Timeout   : 300 (sec)
    Verify return code: 19 (self signed certificate in certificate chain)
---
GET / HTTP/1.0

HTTP/1.1 400 Bad Request
...

Still getting the 400 Bad Request error with concatenated CA certificates and Client #2 (but still everything is OK with Client #1):

# cat certs/rootca.crt ../subCA/certs/subca.crt > certs/concatenatedca.crt
# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/concatenatedca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts

Enter pass phrase for tmp/testcert/client2.key:
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca
verify return:1
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local
verify return:1
---
...
    Compression: 1 (zlib compression)
    Start Time: 1359990772
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---
GET / HTTP/1.0

HTTP/1.1 400 Bad Request
...

Update 2: I've managed to recompile nginx with debug enabled.
Here is the relevant part of the trace for a successful connection by Client #1:

2013/02/05 14:08:23 [debug] 38701#0: *119 accept: <MY IP ADDRESS> fd:3
2013/02/05 14:08:23 [debug] 38701#0: *119 event timer add: 3: 60000:2856497512
2013/02/05 14:08:23 [debug] 38701#0: *119 kevent set event: 3: ft:-1 fl:0025
2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28805200:660
2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28834400:1024
2013/02/05 14:08:23 [debug] 38701#0: *119 posix_memalign: 28860000:4096 @16
2013/02/05 14:08:23 [debug] 38701#0: *119 http check ssl handshake
2013/02/05 14:08:23 [debug] 38701#0: *119 https ssl handshake: 0x16
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL server name: "test.local"
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: -1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL handshake handler: 0
2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca"
2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #1",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca"
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: 1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1"
2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2
2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 524
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2
2013/02/05 14:08:23 [debug] 38701#0: *119 http request line: "GET / HTTP/1.1"

And here is the relevant part of the trace for an unsuccessful connection by Client #2:

2013/02/05 13:51:34 [debug] 38701#0: *112 accept: <MY_IP_ADDRESS> fd:3
2013/02/05 13:51:34 [debug] 38701#0: *112 event timer add: 3: 60000:2855488975
2013/02/05 13:51:34 [debug] 38701#0: *112 kevent set event: 3: ft:-1 fl:0025
2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28805200:660
2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28834400:1024
2013/02/05 13:51:34 [debug] 38701#0: *112 posix_memalign: 28860000:4096 @16
2013/02/05 13:51:34 [debug] 38701#0: *112 http check ssl handshake
2013/02/05 13:51:34 [debug] 38701#0: *112 https ssl handshake: 0x16
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL server name: "test.local"
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0
2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:20, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca"
2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:27, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca"
2013/02/05 13:51:34 [debug] 38701#0: *112 verify:1, error:27, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #2",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca"
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: 1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1"
2013/02/05 13:51:34 [debug] 38701#0: *112 http process request line
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 524
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: -1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2
2013/02/05 13:51:34 [debug] 38701#0: *112 http request line: "GET / HTTP/1.1"

So I'm getting OpenSSL error #20 and then #27. According to the verify documentation:

20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate
    The issuer certificate could not be found: this occurs if the issuer certificate of an untrusted certificate cannot be found.

27 X509_V_ERR_CERT_UNTRUSTED: certificate not trusted
    The root CA is not marked as trusted for the specified purpose.
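For what it's worth, error 20 at depth 1 is exactly what you would see if nginx refuses to walk the chain past the subordinate CA when verifying the client certificate. One thing worth checking (an educated guess, not verified against this exact setup) is the ssl_verify_depth directive, which defaults to 1; a chain of root CA plus subordinate CA needs a verification depth of at least 2:

server {
    ...
    ssl_client_certificate /etc/nginx/ssl/client.pem;  # root and subordinate CA certificates, concatenated
    ssl_verify_client on;
    ssl_verify_depth 2;  # default is 1, too shallow for certificates issued by the subordinate CA
    ...
}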

    Read the article

  • An Introduction to ASP.NET Web API

    - by Rick Strahl
Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5, and along with them, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft’s new jargon for a service, in case you’re wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC, or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications. If you're interested in a high level overview of what ASP.NET Web API is and how it fits into the ASP.NET stack, you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by-example introduction to ASP.NET Web API. All the code discussed in this article is available in GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine article and updated for the RTM release of ASP.NET Web API]

Getting Started

To start, I’ll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I’ll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

Add a Web API Controller Class

Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1.

Figure 1: This is how you create a new Controller Class in Visual Studio

Make sure that the name of the controller class ends in Controller, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I’ll use a music album model to demonstrate the basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs with a SongName and SongLength as well as an AlbumId that links each song to the album. You can find the code for the model (and the rest of these samples) on GitHub. To add the file manually, create a new folder called Model, add a new class Album.cs and copy the code into it. There’s a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums, stored on a static .Current member that I’ll use for the examples. Before we look at what goes into the controller class though, let’s hook up routing so we can access this new controller.

Hooking up Routing in Global.asax

To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods.
Using an extension method on ASP.NET’s static RouteTable class, you can call the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1.

using System;
using System.Web.Routing;
using System.Web.Http;

namespace AspNetWebApi
{
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.MapHttpRoute(
                name: "AlbumVerbs",
                routeTemplate: "albums/{title}",
                defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" }
            );
        }
    }
}

This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific route value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate, respectively.

HTTP Verb Routing

Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I’ve defined does not include an {action} route value or an action value in the defaults. Instead, Web API uses the HTTP Verb in this route to determine the method to call on the controller: a GET request maps to any method that starts with Get, so methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature to match a route, query string or POST values. In lieu of the method name, the [HttpGet, HttpPost, HttpPut, HttpDelete, etc.] attributes can also be used to designate the accepted verbs explicitly if you don’t want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST style resource APIs, it’s not required, and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I’ll talk more about alternate routes later.

When you’re finished with the initial creation of files, your project should look like Figure 2.

Figure 2: The initial project has the new API Controller and Album model

Creating a small Album Model

Now it’s time to create some controller methods to serve data. For these examples, I’ll use a very simple Album and Songs model to play with, as shown in Listing 2.
public class Song
{
    public string AlbumId { get; set; }

    [Required, StringLength(80)]
    public string SongName { get; set; }

    [StringLength(5)]
    public string SongLength { get; set; }
}

public class Album
{
    public string Id { get; set; }

    [Required, StringLength(80)]
    public string AlbumName { get; set; }

    [StringLength(80)]
    public string Artist { get; set; }

    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }

    [StringLength(150)]
    public string AlbumImageUrl { get; set; }

    [StringLength(200)]
    public string AmazonUrl { get; set; }

    public virtual List<Song> Songs { get; set; }

    public Album()
    {
        Songs = new List<Song>();
        Entered = DateTime.Now;

        // Poor man's unique Id off GUID hash
        Id = Guid.NewGuid().GetHashCode().ToString("x");
    }

    public void AddSong(string songName, string songLength = null)
    {
        this.Songs.Add(new Song()
        {
            AlbumId = this.Id,
            SongName = songName,
            SongLength = songLength
        });
    }
}

Once the model has been created, I also added an AlbumData class that generates some static data in memory, loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:

public static class AlbumData
{
    // sample data - static list
    public static List<Album> Current = CreateSampleAlbumData();

    /// <summary>
    /// Create some sample data
    /// </summary>
    /// <returns></returns>
    public static List<Album> CreateSampleAlbumData()
    { … }
}

You can check out the full code for the data generation online.

Creating an AlbumApiController

Web API shares many concepts of ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API’s binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.

public class AlbumApiController : ApiController
{
    public IEnumerable<Album> GetAlbums()
    {
        var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
        return albums;
    }

    public Album GetAlbum(string title)
    {
        var album = AlbumData.Current
            .SingleOrDefault(alb => alb.AlbumName.Contains(title));
        return album;
    }
}

To access the first two requests, you can use the following URLs in your browser:

http://localhost/aspnetWebApi/albums
http://localhost/aspnetWebApi/albums/Dirty%20Deeds

Note that you’re not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API’s routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route.
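If you prefer the command line to a browser for poking at these endpoints, something like the following works as well (a quick sketch assuming the same localhost paths as above; curl's -H flag sets the Accept header, which matters for the next section):

curl -H "Accept: application/json" http://localhost/aspnetWebApi/albums
curl -H "Accept: application/xml" "http://localhost/aspnetWebApi/albums/Dirty%20Deeds"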
Content Negotiation

When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.

Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.

Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what’s going on here? Shouldn’t we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they’d like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for:

Accept: text/html, application/xhtml+xml, application/xml; q=0.9, */*; q=0.8

IE9 asks for:

Accept: text/html, application/xhtml+xml, */*

Note that Chrome’s Accept header includes application/xml, which Web API finds in its list of supported media types, so it returns an XML response. IE9 does not include an Accept header type that Web API supports by default, so it returns the default format, which is JSON. This is an important and very useful feature that was missing from any previous Microsoft REST tool: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default you don’t have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute.

Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don’t have to do anything in your code – Web API automatically performs the deserialization from the content.

Accessing Web API JSON Data with jQuery

A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it’s easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.

$.getJSON("albums/", function (albums) {
    // make knockout template visible
    $(".album").show();

    // create view object and attach array
    var view = { albums: albums };
    ko.applyBindings(view);
});

Figure 4 shows this and the next example’s HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).

Figure 4: The Album Display sample uses JSON data loaded from Web API.
The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout’s data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it’s entirely up to you to decide what to do with the data in your client code.

Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

$(".albumlink").live("click", function () {
    var id = $(this).data("id"); // title
    $.getJSON("albums/" + id, function (album) {
        ko.applyBindings(album, $("#divAlbumDialog")[0]);
        $("#divAlbumDialog").show();
    });
});

Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element’s data ID attribute.

Explicitly Overriding Output Format

When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it’s easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It’s beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR).

If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it’s a common way to return an abstract result message that contains content. An HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those Controller methods that return static results – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API’s post-processing of the HTTP response on your behalf.

HttpResponseMessage allows you to customize the response in great detail. Web API’s attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you’re new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation.
To do this, I can adjust the output format and headers as shown in Listing 4.

public HttpResponseMessage GetAlbums()
{
    var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

    // Create a new HttpResponse with the Json Formatter explicitly
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new ObjectContent<IEnumerable<Album>>(
                        albums, new JsonMediaTypeFormatter());

    // Get Default Formatter based on Content Negotiation
    //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

    resp.Headers.ConnectionClose = true;
    resp.Headers.CacheControl = new CacheControlHeaderValue();
    resp.Headers.CacheControl.Public = true;

    return resp;
}

This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON. If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the Response instance using the Request.CreateResponse method:

var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

This provides you an HttpResponse object that's pre-configured with the default formatter based on content negotiation. Once you have an HttpResponse object, you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponse than on the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

Non-Serialized Results

The output returned doesn’t have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method.

[HttpGet]
public HttpResponseMessage AlbumArt(string title)
{
    var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title));
    if (album == null)
    {
        var resp = Request.CreateResponse<ApiMessageError>(
                        HttpStatusCode.NotFound,
                        new ApiMessageError("Album not found"));
        return resp;
    }

    // kinda silly - we would normally serve this directly
    // but hey - it's a demo.
    var http = new WebClient();
    var imageData = http.DownloadData(album.AlbumImageUrl);

    // create response and return
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(imageData);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

    return result;
}

The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs – such as a resource that can’t be found or a validation error – you can return an error response to the client that’s very specific to the error. In AlbumArt(), if the album can’t be found, we want to return a 404 Not Found status (and realistically no error, as it’s an image).
Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn’t have an image.

When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

return Request.CreateResponse<ApiMessageError>(
    HttpStatusCode.NotFound,
    new ApiMessageError("Album not found")
);

If the album can be found, the image will be returned. The image is downloaded into a byte[] array and then assigned to the result’s Content property. I created a new ByteArrayContent instance and assigned the image’s bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out.

Although HttpResponseMessage results require more code than returning a plain .NET value from a method, this approach allows much more control over the actual HTTP processing than automatic processing does. It also makes it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

Routing Again

OK, let’s get back to the image example. Using the original HTTP Verb routing we have set up, there's no good way to serve the image. In order to return my album art image I’d like to use a URL like this:

http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

In order to create a URL like this, I have to create a new Controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action-based routing into an HTTP Verb routing controller – you can only map HTTP Verbs, and each method has to be unique based on parameter signature. You can't have multiple GET operations mapped to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller. There are a number of ways around this, but all involve additional controllers. Personally, I think it’s easier to use explicit action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.
For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

Now I can use either of the following URLs to access the image:

Custom route (albums/{title}/image):
http://localhost/aspnetWebApi/albums/PowerAge/image

Action route (albums/rpc/{action}/{title}):
http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

Sending Data to the Server

To send data to the server and add a new album, you can use an HTTP POST operation. Since I’m using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

public HttpResponseMessage PostAlbum(Album album)
{
    if (!this.ModelState.IsValid)
    {
        // my custom error class
        var error = new ApiMessageError() { message = "Model is invalid" };

        // add errors into our client error model for the client
        foreach (var prop in ModelState.Values)
        {
            var modelError = prop.Errors.FirstOrDefault();
            if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                error.errors.Add(modelError.ErrorMessage);
            else
                error.errors.Add(modelError.Exception.Message);
        }
        return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
    }

    // update song id which isn't provided
    foreach (var song in album.Songs)
        song.AlbumId = album.Id;

    // see if album exists already
    var matchedAlbum = AlbumData.Current
        .SingleOrDefault(alb => alb.Id == album.Id ||
                                alb.AlbumName == album.AlbumName);
    if (matchedAlbum == null)
        AlbumData.Current.Add(album);
    else
        matchedAlbum = album;

    // return a string to show that the value got here
    var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
    resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
                                     Encoding.UTF8, "text/plain");
    return resp;
}

The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC, you can check the model by looking at ModelState.IsValid. If it’s not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the result ApiMessageError object. When a binding error occurs, you’ll want to return an HTTP error response, and it’s best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client, along with my Conflict HTTP status code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you’ve added the album using the POST operation, you can hit the /albums/ URL to see that the new album was added.
The client jQuery code to call the POST operation from the client is shown in Listing 7.

var id = new Date().getTime().toString();
var album = {
    "Id": id,
    "AlbumName": "Power Age",
    "Artist": "AC/DC",
    "YearReleased": 1977,
    "Entered": "2002-03-11T18:24:43.5580794-10:00",
    "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
    "AmazonUrl": "http://www.amazon.com/…",
    "Songs": [
        { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
        { "SongName": "Downpayment Blues", "SongLength": 4.22 },
        { "SongName": "Riff Raff", "SongLength": 2.42 }
    ]
}

$.ajax({
    url: "albums/",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(album),
    processData: false,
    beforeSend: function (xhr) {
        // not required since JSON is default output
        xhr.setRequestHeader("Accept", "application/json");
    },
    success: function (result) {
        // reload list of albums
        page.loadAlbums();
    },
    error: function (xhr, status, p3, p4) {
        var err = "Error";
        if (xhr.responseText && xhr.responseText[0] == "{")
            err = JSON.parse(xhr.responseText).message;
        alert(err);
    }
});

The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server as a POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing into the Album instance. The jQuery code hooks up success and failure events. Success returns the result data, which is a string that’s echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property.

REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I’m not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

public HttpResponseMessage PutAlbum(Album album)
{
    return PostAlbum(album);
}

To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT.

Model Binding with UrlEncoded POST Variables

In the example in Listing 7, I used JSON objects to post a serialized object to a server method that accepted a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to use plain UrlEncoded POST values to send to the server. Web API supports model binding that works similarly (but not identically) to MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

$.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

Although the code looks very similar to the client code we used before when passing JSON, here the data passed is URL-encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
Dynamic Access to POST Data

There are also a few options available for accessing POST data dynamically, if you know what type of data you're dealing with. If you have POST UrlEncoded values, you can read them dynamically using a FormDataCollection:

[HttpPost]
public string PostAlbum(FormDataCollection form)
{
    return string.Format("{0} - released {1}",
        form.Get("AlbumName"), form.Get("YearReleased"));
}

The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. As a general rule, though, while ASP.NET's functionality is always available when Web API runs hosted inside an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET.

If your client is sending JSON to your server and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;

    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();

    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

There are quite a few options available for receiving data with Web API, which gives you more choices for picking the right tool for the job. Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content; the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here:

Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API

Handling Delete Operations

Finally, to round out the server API code of the album example we've been discussing, here’s the DELETE verb controller method that allows removal of an album by its title:

public HttpResponseMessage DeleteAlbum(string title)
{
    var matchedAlbum = AlbumData.Current
        .Where(alb => alb.AlbumName == title)
        .SingleOrDefault();
    if (matchedAlbum == null)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    AlbumData.Current.Remove(matchedAlbum);

    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

To call this action method using jQuery, you can use:

$(".removeimage").live("click", function () {
    var $el = $(this).parent(".album");
    var txt = $el.find("a").text();
    $.ajax({
        url: "albums/" + encodeURIComponent(txt),
        type: "Delete",
        success: function (result) {
            $el.fadeOut().remove();
        },
        error: jqError
    });
});

Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via a route value or the query string.
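If you'd rather exercise the DELETE endpoint from .NET code than from the browser, a minimal HttpClient sketch might look like this (the base address is again an assumption; the route shape follows the default route from Listing 1):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class AlbumDeleteSample
{
    public static async Task DeleteAlbumAsync(string title)
    {
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("http://localhost/AspNetWebApi/");

            // DeleteAsync issues an HTTP DELETE; the title travels as a route value
            var response = await client.DeleteAsync(
                "albums/" + Uri.EscapeDataString(title));

            // NoContent (204) on success, NotFound (404) if no album matched
            Console.WriteLine(response.StatusCode);
        }
    }
}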
Routing Conflicts

In all the requests shown so far, with the exception of the AlbumArt image example, I used the HTTP Verb routing set up in Listing 1. HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing alone. You saw one example that didn’t really fit: the return of an image, where I created a custom route albums/{title}/image that required the creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP verbs that Verb routing imposes.

To understand some of the problems with verb routing, let’s look at another example. Let’s say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:

[HttpGet]
public IQueryable<Album> GetSortableAlbums()
{
    var albums = AlbumData.Current;

    // Generally this should be done only on actual queryable results (EF, etc.).
    // It's done here because we're running against a static list,
    // but otherwise it might be slow.
    return albums.AsQueryable();
}

If you compile this code and now try to access the /albums/ link, you get an error: Multiple actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route-value match. If more than one method exists with the same parameter signature, verb routing doesn’t work.

As I mentioned earlier for the image display, the only solution to get this method to work is to move it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I’m not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL: it looks less like a method and more like a noun. I can then create a new route that handles direct action mapping:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAblums"
    }
);

Because I explicitly added a route segment (rpc) into the route template, I can now reference explicit methods in the Web API controller using URLs like this:

http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

Error Handling

I’ve already done some minimal error handling in the examples. For example, in Listing 6, I detected some known error scenarios, such as model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? Say you have a controller method like this:

[HttpGet]
public void ThrowException()
{
    throw new UnauthorizedAccessException("Unauthorized Access Sucka");
}

You can call it with this:

http://localhost/AspNetWebApi/albums/rpc/ThrowException

The default exception handling displays a 500-status response with the serialized exception, on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

GlobalConfiguration.Configuration.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;
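For reference, IncludeErrorDetailPolicy supports a handful of values. The list below reflects my understanding of the initial Web API release, so verify against the documentation for your build:

// Default   - defer to the host; under ASP.NET this honors <customErrors>
// LocalOnly - full error detail for local requests only (the default behavior above)
// Always    - always include exception detail (handy during development)
// Never     - never include detail, regardless of the caller
GlobalConfiguration.Configuration.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always;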
If you want more control over the error responses sent from code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the controller action:

[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
        HttpStatusCode.BadRequest,
        new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}

Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses any installed exception filters and simply outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format, XML or JSON. You can pass any content to the HttpResponseMessage, which means you can create your own exception objects and consistently return error messages to the client.

Here’s a small helper method on the controller that you might use to send exception information back to the client consistently:

private void ThrowSafeException(string message,
    HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(
        statusCode,
        new ApiMessageError() { message = message });
    throw new HttpResponseException(errResponse);
}

You can then use it to output any captured errors from code:

[HttpGet]
public void ThrowErrorSafe()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch (Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}

Exception Filters

Another, more global solution is to create an exception filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic exception filter implementation:

public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.InternalServerError;

        var exType = context.Exception.GetType();
        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError() { message = context.Exception.Message };

        // create a new response and attach our ApiMessageError object,
        // which now gets returned on ANY exception result
        var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError);
        context.Response = errorResponse;

        base.OnException(context);
    }
}

Exception filter attributes can be assigned to an ApiController class like this:

[UnhandledExceptionFilter]
public class AlbumRpcApiController : ApiController

or you can assign the filter globally to all controllers by adding it to the HTTP configuration's Filters collection:

GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter());

The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the output format the client expects. For example, on failure an AJAX application can expect to see a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up.

You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions. You often don't want to display error information from unknown exceptions, as it may contain sensitive system information or information that's not generally useful to users of your application or site.
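For example, a hypothetical ApiException (the name and shape are mine, not part of the sample source) could carry its own status code so the filter can tell deliberate, client-safe errors apart from unexpected system failures:

using System;
using System.Net;

// Hypothetical marker exception for errors whose message is safe to show clients
public class ApiException : Exception
{
    public HttpStatusCode StatusCode { get; private set; }

    public ApiException(string message,
                        HttpStatusCode statusCode = HttpStatusCode.BadRequest)
        : base(message)
    {
        StatusCode = statusCode;
    }
}

Inside OnException() from Listing 8, you could then check context.Exception as ApiException and, when it matches, use its StatusCode and Message directly, while substituting a generic message for all other exception types. A controller would raise it with something like throw new ApiException("Album not found", HttpStatusCode.NotFound);.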
Exception filters are just one example of how you can plug into the Web API request flow to modify output and of how configurable and extensible ASP.NET Web API is. Many more hooks exist, and I’ll take a closer look at extensibility in Part 2 of this article in the future.

Summary

Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The keys to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing and making HTTP semantics easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that’s highly functional, easy to work with, and extensible to boot. I think Microsoft has hit a home run with Web API.

Related Resources

Where does ASP.NET Web API fit?
Sample Source Code on GitHub
Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API
Creating a JSONP Formatter for ASP.NET Web API
Removing the XML Formatter from ASP.NET Web API Applications

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api
