Search Results

Search found 3040 results on 122 pages for 'detail'.


  • [javascript] Can I overload an object with a function?

    - by user257493
    Let's say I have an object of functions/values. I'm interested in overloading based on calling behavior. For example, the block of code below demonstrates what I wish to do.

        var main_thing = {
            initalized: false,
            something: "Hallo, welt!",
            something_else: [123, 456, 789],
            load: {
                sub1: function () {
                    // Some stuff
                },
                sub2: function () {
                    // Some stuff
                },
                all: function () {
                    this.load.sub1();
                    this.load.sub2();
                }
            },
            init: function () {
                this.initalized = true;
                this.something = "Hello, world!";
                this.something_else = [0, 0, 0];
                this.load(); // I want this to call this.load.all() instead.
            }
        }

    The issue to me is that main_thing.load is assigned to an object, so to run the loaders I would have to call main_thing.load.all(), the function inside of the object, with the () operator. What can I do to set up my code so I could use main_thing.load to access the object, and main_thing.load() to execute some code? Or at least get similar behavior. Basically, this would be similar to a default constructor in other languages, where you don't need to call main_thing.constructor(). If this isn't possible, please explain with a bit of detail.
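
    One way to get this dual behavior, sketched below on the assumption that the names from the question are kept as-is: because functions in JavaScript are objects, load can itself be a function that carries sub1, sub2, and all as properties, so main_thing.load() runs a default loader while main_thing.load.sub1 is still reachable.

        var main_thing = {
            initalized: false,
            something: "Hallo, welt!",
            something_else: [123, 456, 789],
            init: function () {
                this.initalized = true;
                this.something = "Hello, world!";
                this.something_else = [0, 0, 0];
                this.load();                     // behaves like the "default constructor" call
            }
        };

        // load is a plain function...
        main_thing.load = function () {
            main_thing.load.all();
        };

        // ...that also carries the sub-loaders as properties.
        main_thing.load.sub1 = function () { /* some stuff */ };
        main_thing.load.sub2 = function () { /* some stuff */ };
        main_thing.load.all = function () {
            main_thing.load.sub1();
            main_thing.load.sub2();
        };

    Calling main_thing.init() would then run load.all() via main_thing.load(), while main_thing.load.sub1() keeps working as before. This is only a sketch of the pattern, not necessarily the cleanest structure for the real code.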

    Read the article

  • Advanced Query with Join

    - by user1462589
    I'm trying to convert a product table that contains all the details of a product into separate tables in SQL. I've got everything done except for the duplicated descriptor details. The problem I am having is that all the products have size/color/style/other values that many other products also contain. I want to have only one size or color descriptor for all the items and reuse its ID for every product, which I believe makes it a parent key referenced by the product ID as a foreign key. The only problem is that every descriptor would have multiple foreign keys assigned to it. So I was thinking of just skipping the parent-key bookkeeping for each descriptor and instead checking whether that descriptor already exists; if it does, use its key for the descriptor.

    Data table:

        PI | Colo | Sz | OTHER
        1  | Blue | 5  | Vintage
        2  | Blue | 6  | Vintage
        3  | Blac | 5  | Simple
        4  | Blac | 6  | Simple

    Its destination (descriptor) table is this:

        DI | Description
        1  | Blue
        2  | Blac
        3  | 5
        4  | 6
        6  | Vintage
        7  | Simple

    Roughly: select from the data table the unique values of Colo, Sz, and Other. Then the second part of the question: after we create all the descriptors, how do we run a new query that assigns the product ID to the descriptors?

        PI | DI
        1  | 1
        1  | 3
        1  | 4
        2  | 1
        2  | 3
        2  | 4

    By figuring out how to do this I should be able to duplicate the pattern for all 300+ columns in the product table. Some of these fields are 60+ characters long, so it's going to save a ton of space. Do I use an array?
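
    A hedged sketch of the two steps in SQL, using illustrative names taken from the example above (DataTable, a Descriptor table whose DI is an auto-generated key, and a ProductDescriptor mapping table); exact syntax depends on the database engine, and the Sz column may need a cast if it is stored as a number:

        -- 1. Fill the descriptor table with one row per distinct value,
        --    drawing from every descriptor column (UNION removes duplicates).
        INSERT INTO Descriptor (Description)
        SELECT Colo  FROM DataTable
        UNION
        SELECT Sz    FROM DataTable
        UNION
        SELECT OTHER FROM DataTable;

        -- 2. Build the product-to-descriptor mapping by joining each source row
        --    back to the Descriptor table on the matching value.
        INSERT INTO ProductDescriptor (PI, DI)
        SELECT d.PI, descr.DI
        FROM DataTable  AS d
        JOIN Descriptor AS descr
          ON descr.Description IN (d.Colo, d.Sz, d.OTHER);

    This assumes a descriptor value is stored only once regardless of which column it came from; if the same text can mean different things in different columns, the Descriptor table would also need a type column and the join would have to match on it.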

    Read the article

  • Best way to make sure I get all available options array/loop

    - by jaz872
    OK, here is my problem. I'm looking for the best way to loop through a bunch of options to make sure that I hit all available options. Some detail. I've created a feature that allows a client to build images that are basically other images layered on top of each other. These other images are split up into different groups. They have links on the side of the image that they can click to scroll through all the different images to view them. I'm now making an automated process that is going to run the function that changes the image when a user clicks one of the links. I need to make sure that every possible combo of the different images is hit during this process. So I have an array with the number of options for each group. The current array is [3, 9, 3, 3] My question is what is the best way to loop through this to make sure that all possible options will be shown? I apologize if this seems simple for someone out there, but I'm just having trouble wrapping my head around it. Hopefully if that person is out there, they can give a helping hand :)
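
    One way to walk every combination, sketched below under the assumption that each group is addressed by a zero-based index and that the visit callback would call the existing image-changing function; the counts array is the one from the question, so the groups act like digits of a mixed-radix counter:

        var counts = [3, 9, 3, 3]; // number of options in each group

        function forEachCombo(counts, visit) {
            var indexes = counts.map(function () { return 0; });
            while (true) {
                visit(indexes.slice());       // e.g. apply option indexes[g] for each group g
                var pos = counts.length - 1;  // increment like an odometer, rightmost group first
                while (pos >= 0 && ++indexes[pos] === counts[pos]) {
                    indexes[pos] = 0;
                    pos -= 1;
                }
                if (pos < 0) { return; }      // wrapped past the leftmost group: all done
            }
        }

        forEachCombo(counts, function (combo) {
            console.log(combo);               // 3 * 9 * 3 * 3 = 243 combinations in total
        });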

    Read the article

  • What would be a good Database strategy to manage these two product options?

    - by bemused
    I have a site that allows users to purchase "items" (imagine it as an advertisement, or a download). There are two ways to purchase: either a subscription, 70 items within 1 month (use them or lose them; at the end of the month your count is 0), or purchasing each item individually as you need it. So the user could subscribe and get 70/month, or pay for 10 and use them whenever they want until the 10 are gone.

    Maybe it's the late hour, but I can't isolate a solution I like, and I thought some users here would surely have stumbled upon something similar. One analogy I can imagine is web hosts. They sell hosting for monthly fees and also sell counts of things, like "you get 5 free domains with our reseller account". Or something like a movie download site: you can subscribe and get 100 movies each month, or pay for a one-time package of 10 movies.

    So is this a web of tables, and where would be a good cross between the product a user has purchased and how many they have left?

        products:      productID, productType (subscription, consumable, or subscription&consumable)
        subscriptions: subscriptionID, subscriptionStartDate, subscriptionEndDate
        consumables:   consumableID, consumableName
        UserProducts:  userID, productID, productType, consumptionLimit, consumedCount

    (If it is a subscription, check against the dates; otherwise just check that consumedCount is less than the limit.)

    Usually I can lay out my data in a way that I know it will work the way I expect, but this one feels a little questionable to me, like there is a hidden detail that is going to creep up later. That's why I decided to ask for help, if someone in the vast expanse can enlighten me with their wisdom and experience and clue me in to a satisfying strategy. Thank you.
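
    A minimal DDL sketch of one way to store both purchase models in a single entitlement table (names and types are illustrative, not a recommendation; the subscription dates are simply left NULL for one-off purchases):

        CREATE TABLE products (
            product_id   INT PRIMARY KEY,
            product_type VARCHAR(32) NOT NULL  -- 'subscription', 'consumable', or both
        );

        CREATE TABLE user_products (
            user_id            INT NOT NULL,
            product_id         INT NOT NULL REFERENCES products (product_id),
            subscription_start DATE,           -- NULL for pay-per-item purchases
            subscription_end   DATE,           -- NULL for pay-per-item purchases
            consumption_limit  INT NOT NULL,   -- e.g. 70 per month, or 10 one-off
            consumed_count     INT NOT NULL DEFAULT 0,
            PRIMARY KEY (user_id, product_id)
        );

        -- An item may be used when the count is under the limit and, for
        -- subscriptions, the current date falls inside the active window.
        SELECT CASE
                 WHEN consumed_count < consumption_limit
                      AND (subscription_start IS NULL
                           OR CURRENT_DATE BETWEEN subscription_start AND subscription_end)
                 THEN 1 ELSE 0
               END AS can_use
        FROM user_products
        WHERE user_id = 42 AND product_id = 7;   -- illustrative ids

    A monthly reset job (or one row per billing period) would then zero out consumed_count for subscription rows when a new period starts; that part is left out of the sketch.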

    Read the article

  • LocationMatch Regex for versioning

    - by Aventus
    I've tried using the docs, but I'm quite new to regex. I've had success with others, but the same method is not working for what I'm actually after. I'm trying to send users to different servers based on the version number in the URL. In this case, older versions are to be sent to the new server for a particular service.

        <LocationMatch "/(1.0|2.0|3.0)/appname">
        ...
        </LocationMatch>

    The following is working:

        <LocationMatch "/1/appname">
        ...
        </LocationMatch>

        <LocationMatch "/2/appname">
        ...
        </LocationMatch>

    What I would love to achieve is sending all those major releases with a single tag:

        <LocationMatch "/(1*|2*|3*)/appname">
        ...
        </LocationMatch>

    I've already referred to the documentation at http://httpd.apache.org/docs/2.2/mod/core.html#locationmatch but unfortunately it doesn't cover my case with enough detail to help me.
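
    For what it's worth, in a regex 1* means "zero or more of the character 1", not "1 followed by anything", which is why the last attempt does not behave as hoped. A single block along the following lines should cover the 1.x, 2.x, and 3.x releases as well as the bare /1/, /2/, /3/ paths (untested sketch; the anchor and the optional minor-version part may need adjusting to the real URLs):

        <LocationMatch "^/(1|2|3)(\.[0-9]+)?/appname">
            ...
        </LocationMatch>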

    Read the article

  • Agile Development

    - by James Oloo Onyango
    A lot of literature has been and is being written about agile development and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, I have found Robert C. Martin's "A Satire Of Two Companies" to be both the most concise and thorough! Enjoy the read!

Rufus Inc: Project Kick Off

Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting.

"We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market.

"We must have this new project up and working by fourth quarter, October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.

His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?"

You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts. "Sir, we can't tell you how long the analysis will take until we have some requirements."

"The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?"

Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol.

"Good." BB smiles. "Now, how long will it take to do the design?"

"Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere. "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?"

Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months."

"I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle. BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working.
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.   Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.   While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked.   The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.   On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens.   **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."   As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)   As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawedno, useless; no, worse than useless. 
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it." So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.

The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrinkwrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week."

Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagrams are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association.

Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."

You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents.
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"   You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.   Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give.    On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1."   As he leaves the conference room, he is heard to mutter: "Empowermentbah!" * * * Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done? " he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.   
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfectweek for every 3 real-weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now."Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation." Says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say, "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper." says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."   In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels that can be achieved in the next 3 weeks.   And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things had quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements,   Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.   "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."   * * *   The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"   "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code."   "So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method. "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hay!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.   As development proceeded, the developers changed partners once or twice a day. 
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.   Whenever a pair finished something significant whether a whole task or simply an important part of a task they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.   The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.   Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"   You grimace. You'd been waiting for a good time to mention this to Jane but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."   Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."   You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."   "Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.   Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."   Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!

    Read the article

  • Oracle Application Server Performance Monitoring and Tuning (CPU load high)

    - by Berkay
    I have just been hired by a company, and my boss gave me a performance issue to solve as soon as possible. I don't have any prior experience with Java EE on the server side. Let me describe what I have learned about the system so far; I still couldn't find the solution.

    We have an Oracle Application Server (10.1) and an Oracle Database server (9.2). The software guys wrote a fairly big J2EE project (the X project) using JSF 1.2 with Ajax, which is only used in this project. They actively use PL/SQL in their code. We started the application server (a Solaris machine) and everything seems OK. Users start using the app on Monday from different locations (about 200 users have accounts; I just checked, the connection pool is set right, and sessions are active for only 15 minutes). After some time (2 days) CPU utilization gets high, around 60%, and at night it stays the same even though only 1 or 2 users are online; it even starts using the CPU freed up by the other applications on the same server. If we don't restart the server, utilization reaches around 90% over the following 2 days, and the application is so slow that end users start calling.

    The main problem: the software engineers say the code is clean, the system and DBA managers say we have the correct configuration, and the other applications seem OK, so why does this happen only for the X application? I copied the DB to a test platform and upgraded it to the latest version, and did the same with the application server (WebLogic), to check whether there is a bug or not. I tested by myself with only one user, and in the WebLogic admin panel I can track the threads and dump them. I noticed that some threads show up as hogging. When I checked the manuals and followed the trace, it points me to the line number in a .java file where the PL/SQL code is called. The software engineers say "yes, we have really complex PL/SQL code, but what's the relation with the application server? This is a problem of the DB server", and I guess they're right...

    I know the question has many holes; I'd like to give more detail, and I appreciate any way you can guide me. Thanks in advance.

    Edit: The server has enough CPU and memory to run more complex applications.
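
    If it helps while digging further, one thing that can be checked from the database side (a sketch against the standard Oracle 9.2 dictionary views; the query itself is only illustrative) is whether the hogging threads line up with a handful of long-running PL/SQL calls, by listing the active sessions together with the SQL each one is executing:

        -- Active sessions and the statement each one is running, longest call first
        SELECT s.sid, s.serial#, s.username,
               s.last_call_et AS seconds_in_current_call,
               q.sql_text
        FROM   v$session s
               JOIN v$sqlarea q
                 ON q.address    = s.sql_address
                AND q.hash_value = s.sql_hash_value
        WHERE  s.status = 'ACTIVE'
        ORDER  BY s.last_call_et DESC;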

    Read the article

  • How do I get a .Net 4.0 app to coexist as an application under a SharePoint 2007 website in IIS v7?

    - by Craig Nakamoto
    I have created a small .Net 4.0 website and installed it on my SharePoint server as a separate web site in IIS v7 (using port 8008 for now). I had to install the .Net 4 framework, set up the database, etc. and this all went smoothly and my app works as a standalone website. Now I am trying to get pages from my website to show up in SharePoint 2007. For various reasons (the SharePoint site is using SSL, security, etc.) I now need to move my .Net app to run under the SharePoint 2007 site in IIS. I have added it as an 'Application' and set it up with the same .Net v4 application pool and settings that were working when it was set up as a standalone site. Now when I try to access the application I get the error at the end of this description. Any help would be greatly appreciated. I already tried following the instructions on this post: http://blogs.msdn.com/sgoodyear/archive/2007/05/07/custom-web-applications-coexisting-with-sharepoint-2007.aspx but that did not help. Thanks! Craig

    p.s. Here are the error details:

        Log Name:      Application
        Source:        ASP.NET 4.0.30319.0
        Date:          11/05/2010 11:49:31 AM
        Event ID:      1310
        Task Category: Web Event
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      GGI-SP1.ggi.ca

        Description:
        Event code: 3008
        Event message: A configuration error has occurred.
        Event time: 11/05/2010 11:49:31 AM
        Event time (UTC): 11/05/2010 3:49:31 PM
        Event ID: 559d7ac619344f3499a4a31c6c9e58cd
        Event sequence: 1
        Event occurrence: 1
        Event detail code: 0

        Application information:
        Application domain: /LM/W3SVC/1653978112/ROOT/bidmonitor-1-129180665715766107
        Trust level: Application
        Virtual Path: /bidmonitor
        Application Path: C:\inetpub\wwwroot\bidmonitor\
        Machine name: GGI-SP1

        Process information:
        Process ID: 5272
        Process name: w3wp.exe
        Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
        Exception type: ConfigurationErrorsException
        Exception message: Could not find permission set named 'ASP.Net'.
        at System.Web.Hosting.ApplicationManager.CreateAppDomainWithHostingEnvironment(String appId, IApplicationHost appHost, HostingEnvironmentParameters hostingParameters)
        at System.Web.HttpRuntime.HostingInit(HostingEnvironmentFlags hostingFlags, PolicyLevel policyLevel, Exception appDomainCreationException)

        Request information:
        Request URL: gginet.ggi.ca/bidmonitor
        Request path: /bidmonitor
        User host address: 10.10.1.33
        User:
        Is authenticated: False
        Authentication Type:
        Thread account name: NT AUTHORITY\NETWORK SERVICE

        Thread information:
        Thread ID: 3
        Thread account name: NT AUTHORITY\NETWORK SERVICE
        Is impersonating: False
        Stack trace: at System.Web.HttpRuntime.HostingInit(HostingEnvironmentFlags hostingFlags, PolicyLevel policyLevel, Exception appDomainCreationException)
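
    The "Could not find permission set named 'ASP.Net'" exception is typically a sign that the child application is inheriting SharePoint's code access security configuration. One commonly suggested workaround, shown here only as a sketch and assuming full trust is acceptable for the bidmonitor app, is to override the trust level in the child application's own web.config (the more invasive alternative is to wrap the parent site's <system.web> section in <location path="." inheritInChildApplications="false"> so it is not inherited at all):

        <!-- web.config of the /bidmonitor application (illustrative, not verified on this server) -->
        <configuration>
          <system.web>
            <trust level="Full" originUrl="" />
          </system.web>
        </configuration>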

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline

    Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data.

    MySQL version: 5.1.x
    PostgreSQL version: 8.4.x

    I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away...

    Create the database location (/var has insufficient space; also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf
        # change data_directory to /home/postgres/main
        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope

    The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc    # (strangely, it is not called perldoc)
        perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back

    Recreate the structure in PostgreSQL as follows:

        pgadmin3 (switch to it)
        Click the Execute arbitrary SQL queries icon
        Open climate-pg-ddl.sql
        Search for TABLE " and replace with TABLE climate." (insert the schema name climate)
        Search for on " and replace with on climate." (insert the schema name climate)
        Press F5 to execute

    This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi

    At this point I am stumped.

    Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL?
    How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)?
    How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

    Resources

    A fair bit of information was needed to get this far:

        https://help.ubuntu.com/community/PostgreSQL
        http://articles.sitepoint.com/article/site-mysql-postgresql-1
        http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
        http://pgfoundry.org/frs/shownotes.php?release_id=810
        http://sqlfairy.sourceforge.net/

    Thank you!
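
    For the data itself, two hedged suggestions rather than a recipe: a dedicated migration tool such as pgloader can copy the rows straight from MySQL into PostgreSQL without hand-editing the mysqldump output, and once the rows are in, each sequence can be pointed past the highest existing key so new inserts do not collide (the table and column names below are illustrative, and the connection details need adjusting):

        # Copy the data directly from the running MySQL database into PostgreSQL
        pgloader mysql://user:password@localhost/climate postgresql://postgres@localhost/climate

        # Afterwards, resynchronise each serial sequence with the loaded data, e.g.:
        psql -d climate -c "SELECT setval(pg_get_serial_sequence('climate.station', 'id'),
                                          (SELECT MAX(id) FROM climate.station));"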

    Read the article

  • dhcp-snooping option 82 drops valid dhcp requests on 2610 series Procurve switches

    - by kce
    We are slowly starting to implement dhcp-snooping on our HP ProCurve 2610 series switches, all running the R.11.72 firmware. I'm seeing some strange behavior where dhcp-request or dhcp-renew packets are dropped when originating from "downstream" switches due to "untrusted relay information from client". The full error:

        Received untrusted relay information from client <mac-address> on port <port-number>

    In more detail, we have a 48-port HP 2610 (Switch A) and a 24-port HP 2610 (Switch B). Switch B is "downstream" of Switch A by virtue of a DSL connection to one of Switch A's ports. The DHCP server is connected to Switch A. The relevant bits are as follows:

    Switch A

        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1 168
        interface 25
           name "Server"
           dhcp-snooping trust
           exit

    Switch B

        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1
        interface Trk1
           dhcp-snooping trust
           exit

    The switches are set to trust BOTH the port the authorized DHCP server is attached to and its IP address. This is all well and good for the clients attached to Switch A, but the clients attached to Switch B get denied due to the "untrusted relay information" error. This is odd for a few reasons: 1) dhcp-relay is not configured on either switch, and 2) the Layer-3 network here is flat, all one subnet, so DHCP packets should not have a modified option 82 attribute. dhcp-relay does appear to be enabled by default, however:

        SWITCH A# show dhcp-relay
        DHCP Relay Agent        : Enabled
        Option 82               : Disabled
        Response validation     : Disabled
        Option 82 handle policy : append
        Remote ID               : mac

        Client Requests        Server Responses
        Valid      Dropped     Valid      Dropped
        ---------- ----------  ---------- ----------
        0          0           0          0

        SWITCH B# show dhcp-relay
        DHCP Relay Agent        : Enabled
        Option 82               : Disabled
        Response validation     : Disabled
        Option 82 handle policy : append
        Remote ID               : mac

        Client Requests        Server Responses
        Valid      Dropped     Valid      Dropped
        ---------- ----------  ---------- ----------
        40156      0           0          0

    And interestingly enough, the dhcp-relay agent seems very busy on Switch B, but why? As far as I can tell there is no reason why DHCP requests need a relay with this topology. And furthermore, I can't tell why the upstream switch is dropping legitimate DHCP requests for untrusted relay information when the relay agent in question (on Switch B) isn't modifying the option 82 attributes anyway.

    Adding no dhcp-snooping option 82 on Switch A allows the DHCP traffic from Switch B to be approved by Switch A, by virtue of just turning off that feature. What are the repercussions of not validating option 82 modified DHCP traffic? If I disable option 82 on all my "upstream" switches, will they pass DHCP traffic from any downstream switch regardless of that traffic's legitimacy?

    This behavior is client operating system agnostic. I see it with both Windows and Linux clients. Our DHCP servers are either Windows Server 2003 or Windows Server 2008 R2 machines. I see this behavior regardless of the DHCP server's operating system.

    Can anyone shed some light on what's happening here and give me some recommendations on how I should proceed with configuring the option 82 setting? I feel like I just haven't completely grokked dhcp-relaying and option 82 attributes.
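
    Two things that may be worth trying, offered as assumptions based on the behaviour described rather than verified fixes for this firmware. On Switch A, the port facing Switch B can be marked as a trusted DHCP-snooping port, the same way the server port already is (the port number below is illustrative):

        interface 26
           dhcp-snooping trust
           exit

    Alternatively, on Switch B, option 82 insertion can be turned off so its snooping agent stops tagging client requests on their way upstream, which is the same command the question already applies on Switch A:

        no dhcp-snooping option 82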

    Read the article

  • How to tell if a freebsd jail is up to date?

    - by Martin Torhage
    I've set up a "Service Jail" in FreeBSD 8.0 according to the FreeBSD Handbook (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html). After upgrading the host to the latest patch level and then performed a jail-upgrade, freebsd-fetch still reports that there are files in need of an update in the jail. Is this expected? Then how do I know if a jail is up to date? This is what I've done in more detail: After the initial setup of the jail freebsd-update fetch reported that there were no updates available neither in the host system nor in the jail. This was expected. A while later freebsd-update fetch reported that the following files where in need of an update both in the host and in the jail. /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a I updated the host and followed the upgrade guide for the jail (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html#JAILS-SERVICE-JAILS-UPGRADING). freebsd-update fetch now reports that there are no updates available in the host but the following is the output from freebsd-update fetch in the jail: [root@bb /]# freebsd-update fetch Looking up update.FreeBSD.org mirrors... 3 mirrors found. Fetching metadata signature for 8.0-RELEASE from update5.FreeBSD.org... done. Fetching metadata index... done. Inspecting system... done. Preparing to download files... done. The following files are affected by updates, but no changes have been downloaded because the files have been modified locally: /var/db/mergemaster.mtree The following files will be updated as part of updating to 8.0-RELEASE-p2: /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a Shouldn't freebsd-update know that the jail is up to date or have I failed upgrading it? How am I supposed to know if a jail is up to date if freebsd-update can't tell? I'm sure I ran make cleandir twice before make buildworld. TIA

    Read the article

  • Why can't I play DVDs on Windows 8 Pro with Media Center Pack?

    - by ligos
    I have a laptop with Windows 8 Pro with Media Center (64-bit), but neither Media Player nor Media Center can play DVDs. Have I done something wrong? Did the Feature Pack not install correctly? Should this work? Can I somehow uninstall and reinstall the Media Pack?
    Details: So I upgraded my Windows 7 Home Premium laptop to Windows 8 Pro based on Microsoft's low pricing. I also grabbed my free upgrade to the Media Pack and followed the instructions on that page to add my feature pack. Alas! I still cannot play DVDs via either Media Center or Media Player.
    Various Context: Thinking I might need to re-install the pack, I found that I could no longer add any more feature packs (searching for "add features" settings only shows Turn Windows Features On and Off). Media Centre and Media Player are both enabled in Windows Features. I cannot see any way to remove or downgrade from the Media Pack, nor to add any more feature packs. I installed a codec pack (32-bit) from Shark007, which has not allowed me to play DVDs (although it did allow me to play various other media files). Media Player can play DTV recorded on another Windows 7 box, but Media Center cannot. VLC plays DVDs OK, but I'd prefer to figure out what the root cause of this problem is. There were no errors or other indications that the Media Pack failed to install; the installation itself was quite smooth, although I have not checked my event log in detail. Before upgrading to Windows 8, I could play DVDs OK.
    Screenshots: System Information, showing I have Windows 8 Pro with Media Center. When playing a DVD, Media Player gives an error: The selected file has an extension that is not recognised by windows... When you click Yes, it fails saying: Windows Media Player cannot find the file... Media Center says: The file type is not recognised and cannot be played, along with some codec-related stuff. I can browse the files OK via My Computer on any video DVD.

    Read the article

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a cgi from apache2. Ruby cgi's that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these cgi scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help.
    Details: Mac OS X Server 10.5.8, Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB), Apache 2.2.13, ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0], ruby-odbc 0.9997, dbd-odbc (0.2.5), dbi (0.4.3), mod_ruby 1.3.0, Perl 5.8.8, DBI 1.609, DBD::ODBC 1.23, odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib), odbc datasource: FileMaker Server 10 (v10.0.2.206)
    1) A minimal version of a script (anonymized) that will crash in apache but run successfully from a shell:
    #!/usr/bin/ruby
    require 'cgi'
    require 'odbc'
    cgi = CGI.new("html3")
    aConnection = ODBC::connect('DBFile', "username", 'password')
    aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
    aQuery.execute
    aRecord = aQuery.fetch_hash.inspect
    aQuery.drop
    aConnection.disconnect
    # aRecord = '{"zzz_kP_ID"=>81044.0}'
    cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } }
    2) Example of running this from a shell:
    gamma% ./minimal.rb
    (offline mode: enter name=value pairs on standard input)
    Content-Type: text/html
    Content-Length: 134
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>%
    gamma%
    3) Typical crash log lines:
    Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
    Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
    Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
    Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
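    One hedged angle worth ruling out, given that the same scripts behave from a login shell: Apache starts CGIs with an almost empty environment, so any ODBC-related variables set in the shell never reach the script. A sketch of exporting them to CGIs via mod_env -- the paths are assumptions based on the driver location quoted above, and the variable names assume an iODBC/unixODBC-style driver manager:
    # In the relevant <VirtualHost> or server config, then restart Apache:
    SetEnv ODBCINI /Library/ODBC/odbc.ini
    SetEnv ODBCSYSINI /Library/ODBC
    SetEnv DYLD_LIBRARY_PATH /Library/ODBC/SequeLink.bundle/Contents/MacOS
    Comparing the output of env in the shell against a trivial CGI that prints ENV.inspect would show quickly whether the two environments actually differ.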

    Read the article

  • How to connect SAN from CentOS through two iSCSI Targets

    - by garconcn
    I asked a similar question before. This time I want to use a separate subnet for each of the two iSCSI targets, hence the new question. I have an old Promise VTrak M500i SAN server. It comes with 2 iSCSI ports. I want to connect to two LUNs on the SAN server through two separate targets from a CentOS 5.7 64-bit server. My network setup is as follows:
    CentOS server: Management network - 192.168.1.1, Storage network 1 - 192.168.5.2, Storage network 2 - 192.168.6.2
    Promise SAN server: Management network - 192.168.1.2, iSCSI Port 1 - 192.168.5.1, iSCSI Port 2 - 192.168.6.1
    I have two Logical Drives on this SAN and they are mapped as follows:
    Index  Initiator Name        LUN Mapping
    0      iqn.2011-11:backup    (LD0,0)
    1      iqn.2011-11:template  (LD1,1)
    Basically, I want the traffic to iqn.2011-11:backup LUN 0 to go through the 192.168.5.1 network and the traffic to iqn.2011-11:template LUN 1 to go through the 192.168.6.1 network. I don't use MPIO; I just want to separate the traffic to avoid congestion. How do I achieve this? I am new to SAN stuff, so please explain in as much detail as you can. Thank you.
    The following is what I am doing now. After mapping the LUNs to my pre-defined initiators, the CentOS server can discover both targets.
    [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.5.1
    192.168.5.1:3260,1 iscsi-1
    192.168.6.1:3260,2 iscsi-1
    [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.6.1
    192.168.6.1:3260,2 iscsi-1
    192.168.5.1:3260,1 iscsi-1
    [root@centos ~]# /etc/init.d/iscsi start
    iscsid is stopped
    Starting iSCSI daemon: [ OK ] [ OK ]
    Setting up iSCSI targets: Logging in to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260]
    Logging in to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260]
    Login to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260] successful.
    Login to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260] successful. [ OK ]
    [root@centos ~]# iscsiadm -m session
    tcp: [1] 192.168.6.1:3260,2 iscsi-1
    tcp: [2] 192.168.5.1:3260,1 iscsi-1
    When I check the LUN mapping on the SAN server for the two Logical Drives, both LUNs are connected through Port0-192.168.5.2 with the initiator defined in CentOS.
    Assigned Initiator List:
    Initiator Name      Alias                IP Address          LUN
    iqn.2011-11.centos  centos.mydomain.com  Port0-192.168.5.2   0
    Initiator Name      Alias                IP Address          LUN
    iqn.2011-11.centos  centos.mydomain.com  Port1-192.168.5.2   1
    I assume the following is what I want:
    Initiator Name        Alias                IP Address          LUN
    iqn.2011-11.backup    centos.mydomain.com  Port0-192.168.5.2   0
    Initiator Name        Alias                IP Address          LUN
    iqn.2011-11.template  centos.mydomain.com  Port0-192.168.6.2   1
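    A hedged sketch of one way to pin each path using open-iscsi interface bindings (the NIC names eth1/eth2 are assumptions; the initiator IQNs follow the mapping table above). Each iface gets its own NIC and its own initiator name, discovery is run per portal, and each login is restricted to the matching portal, so the array's per-initiator LUN masking can then keep the two data streams apart:
    iscsiadm -m iface -I iface5 --op=new
    iscsiadm -m iface -I iface5 --op=update -n iface.net_ifacename -v eth1
    iscsiadm -m iface -I iface5 --op=update -n iface.initiatorname -v iqn.2011-11:backup
    iscsiadm -m iface -I iface6 --op=new
    iscsiadm -m iface -I iface6 --op=update -n iface.net_ifacename -v eth2
    iscsiadm -m iface -I iface6 --op=update -n iface.initiatorname -v iqn.2011-11:template
    iscsiadm -m discovery -t sendtargets -p 192.168.5.1 -I iface5
    iscsiadm -m discovery -t sendtargets -p 192.168.6.1 -I iface6
    iscsiadm -m node -T iscsi-1 -p 192.168.5.1:3260 -I iface5 --login
    iscsiadm -m node -T iscsi-1 -p 192.168.6.1:3260 -I iface6 --login
    Whether the M500i keys its LUN masking on the per-iface initiator name is an assumption worth verifying against the Promise documentation.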

    Read the article

  • freebsd-update reports an upgraded jail as not upgraded

    - by Martin Torhage
    I've set up a "Service Jail" in FreeBSD 8.0 according to the FreeBSD Handbook (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html). After upgrading the host to the latest patch level and then performed a jail-upgrade, freebsd-fetch still reports that there are files in need of an update in the jail. Is this expected? Then how do I know if a jail is up to date? This is what I've done in more detail: After the initial setup of the jail freebsd-update fetch reported that there were no updates available neither in the host system nor in the jail. This was expected. A while later freebsd-update fetch reported that the following files where in need of an update both in the host and in the jail. /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a I updated the host and followed the upgrade guide for the jail (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html#JAILS-SERVICE-JAILS-UPGRADING). freebsd-update fetch now reports that there are no updates available in the host but the following is the output from freebsd-update fetch in the jail: [root@bb /]# freebsd-update fetch Looking up update.FreeBSD.org mirrors... 3 mirrors found. Fetching metadata signature for 8.0-RELEASE from update5.FreeBSD.org... done. Fetching metadata index... done. Inspecting system... done. Preparing to download files... done. The following files are affected by updates, but no changes have been downloaded because the files have been modified locally: /var/db/mergemaster.mtree The following files will be updated as part of updating to 8.0-RELEASE-p2: /usr/lib/libssl.a /usr/lib/libssl_p.a /usr/lib/libzpool.a /usr/lib32/libssl.a /usr/lib32/libssl_p.a /usr/lib32/libzpool.a Shouldn't freebsd-update know that the jail is up to date or have I failed upgrading it? How am I supposed to know if a jail is up to date if freebsd-update can't tell? I'm sure I ran make cleandir twice before make buildworld. TIA

    Read the article

  • missing files after reassembly of RAID-5

    - by Kris_R
    I had to open my file server's housing on Sunday to replace a faulty fan. What I didn't see was that one of the SATA cables was not properly connected. The first thing I did after a reboot was a check of the RAID status, and it showed immediately that one drive was missing. Up to this moment the device had not been used (however, it was mounted, so I'm not 100% sure that the system did nothing). I stopped md0 and re-plugged the cable:
    mdadm --stop /dev/md0
    poweroff
    After another reboot I checked the removed drive:
    mdadm --examine /dev/sdd1
    ...
    Checksum : 3276bc1d - correct
    Events : 315782
    Layout : left-symmetric
    Chunk Size : 32K
    Number Major Minor RaidDevice State
    this  0  8  49  0  active sync /dev/sdd1
    0     0  8  49  0  active sync /dev/sdd1
    1     1  8  65  1  active sync /dev/sde1
    2     2  8  33  2  active sync /dev/sdc1
    3     3  8  17  3  active sync /dev/sdb1
    I was a bit surprised that it was shown as active (even though mdadm had earlier said that this device was removed from the array) and that its checksum was OK. I reassembled the RAID with:
    mdadm --assemble /dev/md0 --scan
    The command mdadm --detail /dev/md0 showed that all drives were running and the system was in the "clean" state. I mounted the device md0 and then came the hiccup. I wanted to work on one of the last files that I had been using before all this happened, and it was not there. In another place, I'm actually missing all of the files from the directory where I was working. As far as I can see, most of the files that are older than a few days are intact, but some newer ones are missing.
    Now the big question: what would be your advice? Is there a way to get these data back? I thought about removing the drive that mdadm had earlier flagged and rebuilding the array with another empty HDD. I've found that after the re-assembly the "broken" drive has another label (changed from sdd to sdb). Can this have an influence on the rebuild process? If yes, how do I reassemble the array properly? I'm sure the SATA cables are still connected in the same order to the controller.
    p.s. Please, no advice like "restore from backup". I do my backups on Sunday nights and this happened in the late afternoon, so a backup is not really an option for me.
    p.p.s. I asked this question on Unix & Linux but no answer came up during the last two days. I'm getting quite anxious. Sorry for duplicating, if any of you read the other forum.
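    A hedged sketch of the usual first diagnostic before touching anything (device names follow the post; double-check against your own output): compare the event counters and array UUID recorded in each member's superblock. md identifies members by that UUID rather than by the /dev/sdX letter, so the sdd-to-sdb rename is not in itself a problem -- but a member whose event count lags the others was out of the array while writes continued, which would explain files from that window going missing.
    # Read-only: dump the relevant superblock fields from every member
    mdadm --examine /dev/sd[bcde]1 | egrep 'dev/sd|UUID|Events|Update Time'
    If one member is clearly behind, the usual next step is assembling from the up-to-date members only (or mdadm --assemble --force if the array will not start degraded), ideally after imaging the disks first, since rebuild decisions at this point are hard to undo.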

    Read the article

  • Windows Vista: Networking can only connect "local only"

    - by Damien
    I am attempting to debug a problem on a Windows Vista laptop - not mine! Until just recently (the last week or so), it had been operating normally for about 4 years :) The problem is that I am having issues connecting to the local network (a basic wireless home router; more on that later) and the internet (via ADSL). This happens both for wired [Broadcom chipset] and wireless [Intel chipset]. I will elaborate further below.
    To detail the network: I have three other clients (HTC phone, Ubuntu 12.04 desktop [wired] and Ubuntu 10.04 laptop [wireless]), all of which are able to connect to the network and internet normally. A Windows 7 virtual machine running on said desktop connects normally. I have tried two different wireless routers - a Netgear DG834G and a Netgear DGN3500. The same error mode is common to both. Updating the firmware to the latest on both routers does not help. Overall, it seems safe to say it's localised to the laptop in question. I do not have another Vista client to test with.
    The specific symptoms are as follows: When "connected", it says "Local Only", and says it cannot connect to the internet. This is true for both wired and wireless. It can get an IP address (192.168.0.5), and the router (192.168.0.1) reports that it can see the device. When I try to ping, I get the following results:
    ping 192.168.0.1 - (router) all packets lost
    ping 192.168.0.5 - (laptop's address) OK
    ping 192.168.0.4 - (desktop) all packets lost
    Pinging from the desktop to the problematic laptop results in "From 192.168.0.4 icmp_seq=1 Destination Host Unreachable".
    The most promising "fix" from trawling forums is KB928233, which does not work for me. The problem persists across restarts (both full shutdown and hibernate), so it appears not to be sleep related. I am not a regular Vista user, though I can fumble my way about a bit. Are there any other suggestions as to what I should do? Is there any further information I can provide?

    Read the article

  • How to configure nginx so it works with Express?

    - by Michal Stefanow
    I'm trying to configure nginx so that it proxy_passes requests to my node apps. A question on StackOverflow got many upvotes: http://stackoverflow.com/questions/5009324/node-js-nginx-and-now and I'm using the config from there (but since this question is about server configuration, it belongs on Server Fault). Here is the nginx configuration:
    server {
        listen 80;
        listen [::]:80;
        root /var/www/services.stefanow.net/public_html;
        index index.html index.htm;
        server_name services.stefanow.net;
        location / {
            try_files $uri $uri/ =404;
        }
        location /test-express {
            proxy_pass http://127.0.0.1:3002;
        }
        location /test-http {
            proxy_pass http://127.0.0.1:3003;
        }
    }
    Using plain node:
    var http = require('http');
    http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('Hello World\n');
    }).listen(3003, '127.0.0.1');
    console.log('Server running at http://127.0.0.1:3003/');
    It works! Check: http://services.stefanow.net/test-http
    Using express:
    var express = require('express');
    var app = express();
    // app.get('/', function(req, res) { res.redirect('/index.html'); });
    app.get('/index.html', function(req, res) { res.send("blah blah index.html"); });
    app.listen(3002, "127.0.0.1");
    console.log('Server running at http://127.0.0.1:3002/');
    It doesn't work :( See: http://services.stefanow.net/test-express
    I know that something is going on: a) test-express is NOT running, or b) test-express is running (and I can confirm it is running via the command line while SSH'd into the server):
    root@stefanow:~# service nginx restart
     * Restarting nginx nginx [ OK ]
    root@stefanow:~# curl localhost:3002
    Moved Temporarily. Redirecting to /index.html
    root@stefanow:~# curl localhost:3002/index.html
    blah blah index.html
    I tried setting headers as described here: http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/ (still doesn't work):
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    I also tried replacing '127.0.0.1' with 'localhost' and vice versa. Please advise. I'm pretty sure I'm missing some obvious detail and I would like to learn more. Thank you.
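    One hedged observation that may explain the difference: with no URI part on proxy_pass, nginx forwards the original path, so the Express app receives GET /test-express -- a route it never defines -- while the plain-node server ignores the path entirely and answers anything. A sketch of the prefix-stripping variant (adjust to taste; the alternative is to define /test-express routes in the Express app itself):
    location /test-express/ {
        # trailing slashes make nginx rewrite /test-express/foo -> /foo upstream
        proxy_pass http://127.0.0.1:3002/;
    }
    With that in place, http://services.stefanow.net/test-express/index.html should reach the app's /index.html route.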

    Read the article

  • How is incoming SMTP mail being delivered despite blocked port

    - by Josh
    I set up an MX mail server, and everything works despite port 25 being blocked. I'm stumped as to why I am able to receive email with this setup, and what the consequences might be if I leave it this way. Here are the details:
    Connections to SMTP over port 25 and 587 both reliably connect over my local network. Connections to SMTP over port 25 are blocked from external IPs (the ISP is blocking the port). Connections to Submission SMTP over port 587 from external IPs are reliable. Emails sent from gmail, yahoo, and a few other addresses are all being delivered. I haven't found an email provider that fails to deliver mail to my MX.
    So, with port 25 blocked, I am assuming other MTA servers fall back to port 587; otherwise I can't imagine how the mail is received. I know port 25 shouldn't be blocked, but so far it works. Are there mail servers that this will not work with? Where can I find more about how this is working?
    -- edit
    More technical detail, to validate that I'm not missing something silly. Obviously, in the transcript below I've replaced my actual domain with example.com.
    # DNS MX record points to the A record.
    $ dig example.com MX +short
    1 example.com
    $ dig example.com A +short
    <Public IP address>
    # From a public server (not my ISP hosting the mail server)
    # We see port 25 is blocked, but port 587 is open
    $ telnet example.com 25
    Trying <public ip>...
    telnet: Unable to connect to remote host: Connection refused
    # Let's try openssl
    $ openssl s_client -starttls smtp -crlf -connect example.com:25
    connect: Connection refused
    connect:errno=111
    # Again from a public server, we see port 587 is open
    $ telnet example.com 587
    Trying <public ip>...
    Connected to example.com.
    Escape character is '^]'.
    220 example.com ESMTP Postfix
    ehlo example.com
    250-example.com
    250-PIPELINING
    250-SIZE 10485760
    250-VRFY
    250-ETRN
    250-STARTTLS
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250-DSN
    250-BINARYMIME
    250 CHUNKING
    quit
    221 2.0.0 Bye
    Connection closed by foreign host.
    Here is a portion from the mail log when receiving a message from gmail:
    postfix/postscreen[93152]: CONNECT from [209.85.128.49]:48953 to [192.168.0.10]:25
    postfix/postscreen[93152]: PASS NEW [209.85.128.49]:48953
    postfix/smtpd[93160]: connect from mail-qe0-f49.google.com[209.85.128.49]
    postfix/smtpd[93160]: 7A8C31C1AA99: client=mail-qe0-f49.google.com[209.85.128.49]
    The log shows that a connection was made to the local IP on port 25 (I'm not doing any port mapping, so it is port 25 on the public IP too). Seeing this leads me to hypothesize that the ISP block on port 25 only occurs when a connection is made from an IP address that is not known to be a mail server. Any other theories?

    Read the article

  • SQL server 2008 R2 installation error

    - by Sonia
    I have a windows 7,32 bit laptop. I am the administrator with all permissions. when I click on the SQL server 2008R2 set up file,it says : "SQL server set up has encountered the following error:Failed to retreive data for this request" click on OK. I have uninstalled all the components of SQL from control panel. I used Windows installer clean up to remove the files(which I must have not done ),but still no go. The summary.txt log says: Overall summary: Final result: Failed: see details below Exit code (Decimal): 847168662 Exit facility code: 638 Exit error code: 50326 Exit message: Failed to retrieve data for this request. Start time: 2012-05-25 14:59:15 End time: 2012-05-25 15:00:09 Requested action: RunRules Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120525_145905\Detail.txt Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.0.5500.0%26EvtType%3d0xEF814B06%400x92D13C14 Machine Properties: Machine name: EWAN-PC Machine processor count: 4 OS version: Windows Vista OS service pack: Service Pack 1 OS region: Australia OS language: English (United States) OS architecture: x86 Process architecture: 32 Bit OS clustered: No Package properties: Description: SQL Server Database Services 2008 SQLProductFamilyCode: {628F8F38-600E-493D-9946-F4178F20A8A9} ProductName: SQL2008 Type: RTM Version: 10 SPLevel: 0 Installation location: c:\385030d65c6ff61fb9\x86\setup\ Installation edition: EXPRESS User Input Settings: ACTION: RunRules CONFIGURATIONFILE: FEATURES: HELP: False INDICATEPROGRESS: False INSTANCENAME: QUIET: False QUIETSIMPLE: False RULES: GLOBALRULES,SqlUnsupportedProductBlocker,PerfMonCounterNotCorruptedCheck,Bids2005InstalledCheck,BlockInstallSxS,AclPermissionsFacet,FacetDomainControllerCheck,SSMS_IsInternetConnected,FacetWOW64PlatformCheck,FacetPowerShellCheck X86: False Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120525_145905\ConfigurationFile.ini Detailed results: Rules with failures: Global rules: There are no scenario-specific rules. Rules report file: The rule result report file is not available. Exception summary: The following is an exception stack listing the exceptions in outermost to innermost order Inner exceptions are being indented Exception type: Microsoft.SqlServer.Management.Sdk.Sfc.EnumeratorException Message: Failed to retrieve data for this request. Data: HelpLink.ProdName = Microsoft SQL Server HelpLink.BaseHelpUrl = http://go.microsoft.com/fwlink HelpLink.LinkId = 20476 DisableWatson = true Stack: at Microsoft.SqlServer.Setup.Chainer.Workflow.PendingActions.InvokeActions(WorkflowObject metaDb, TextWriter loggingStream) at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionEngine.RunActionQueue() at Microsoft.SqlServer.Setup.Chainer.Workflow.Workflow.RunWorkflow(HandleInternalException exceptionHandler) at Microsoft.SqlServer.Chainer.Setup.Setup.RunRequestedWorkflow() at Microsoft.SqlServer.Chainer.Setup.Setup.Run() at Microsoft.SqlServer.Chainer.Setup.Setup.Start() at Microsoft.SqlServer.Chainer.Setup.Setup.Main() Inner exception type: Microsoft.SqlServer.Configuration.Sco.ScoException Message: Attempted to perform an unauthorized operation. 
Data: WatsonData = HKEY_LOCAL_MACHINE@SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Microsoft SQL Server 10 Stack: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess) at Microsoft.SqlServer.Configuration.Sco.SqlRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess) at Microsoft.SqlServer.Discovery.RegistryKeyExistsPropertyValueProvider.GetPropertyValue(Object[] context) at Microsoft.SqlServer.Discovery.DiscoveryEnumObject.GetPropertyValueFromProvider(IPropertyValueProvider propertyValueProvider, String machineName, Object[] context) at Microsoft.SqlServer.Discovery.ObjectInstanceSettings.IsObjectFound(String machineName, String idFilter) at Microsoft.SqlServer.Discovery.Product.FilterObjectSet(ArrayList objects, String idFilter) at Microsoft.SqlServer.Discovery.Product.GetData(EnumResult erParent) at Microsoft.SqlServer.Management.Sdk.Sfc.Environment.GetData() at Microsoft.SqlServer.Management.Sdk.Sfc.Environment.GetData(Request req, Object ci) at Microsoft.SqlServer.Management.Sdk.Sfc.Enumerator.GetData(Object connectionInfo, Request request) at Microsoft.SqlServer.Management.Sdk.Sfc.Enumerator.Process(Object connectionInfo, Request request) Inner exception type: System.UnauthorizedAccessException Message: Attempted to perform an unauthorized operation. Stack: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess) I need to install SQL Server 2008 R2 for one of the company's software packages to work. Any immediate help will be greatly appreciated. Thanks, Sonia
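    The inner UnauthorizedAccessException and the WatsonData line both point at an access-denied error while setup enumerates the HKLM Uninstall key. A hedged first check (run from an elevated PowerShell prompt; if the ACL turns out to be broken, restoring Administrators/SYSTEM access on that key through regedit's Permissions dialog is the usual remedy):
    Get-Acl 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall' | Format-List Owner, AccessToString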

    Read the article

  • Forms authentication failed between web server and sql server

    - by Matt Bear
    I've actually found the solution, but I'm trying to understand why it failed and why my solution fixed the problem. We have an application that uses forms authentication between a web server and a SQL server. The web server runs Server 2008; the SQL server runs Windows Server 2008 R2 with SQL Server 2008. In August the SQL server was patched with .NET 3.5.1, the web server was untouched, and the forms authentication continued to work. One week ago we virtualized the web server onto our vSphere server because of failing hardware. Afterwards, the forms authentication failed with event code 4005, detail code 50201, "The ticket supplied was invalid" (on the SQL server). In fact, the SQL server started generating Schannel errors and began blue screening 3-4 times a day.
    At this point I touched the SQL server for the first time (ever). The errors were non-specific; any reference to them I could find had to do with either ZoneAlarm (which we don't run) or memory errors. So I applied Service Pack 1, which stopped the blue screening but did not fix the forms authentication. At that point we had a workaround, so we put it on the back burner while we completed another project, and I was able to get back on it last night. The first thing was to adjust some code in the web.config file on the SQL server - nothing. Next was to regenerate and change out the machine key - still no change. Update the DNS servers - no change. Finally I went through and installed all Windows updates, with two reboots (over RDP I installed a network card driver which failed, and I did not have my server room key; that was fun). After that, forms authentication was working again. And the SQL server stopped generating as many errors; I've gotten two Schannel errors since then.
    In short: forms authentication began failing when the web server was cloned onto a virtual machine, which caused the SQL server to blue screen and forms authentication to fail. And it could only be fixed by applying patches to the SQL server? (I'm wishing I had patched the servers one at a time so I could know for sure which patch on which server fixed it.) My question is why did it fail, and why did patching fix it? I hate fixing something without fully understanding the why and how.
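    For readers hitting the same 4005/50201 symptom, a hedged sketch of the usual way to take the guesswork out of forms-auth tickets shared across machines: pin an identical machineKey in web.config on every server that issues or validates tickets, rather than relying on auto-generated keys that can change when a box is cloned, patched, or rebuilt. The key values below are placeholders, not working keys:
    <system.web>
      <machineKey validationKey="[128 hex characters]"
                  decryptionKey="[64 hex characters]"
                  validation="SHA1"
                  decryption="AES" />
    </system.web>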

    Read the article

  • Video memory bus width vs video memory bandwidth

    - by Mixxiphoid
    My current video card (9600GT) is dying and I'm searching for a new video card. Between acquiring my current one and now, I have gained a lot more knowledge about hardware and I want to use that to pick my new card. So I decided not to just buy some popular card blindly, but to search for a card able to handle my hardware requirements. I searched the specs at the NVidia site for the GT640 and was confused by the memory section, which raised some questions.
    My current card's memory bus width is 256-bit and it has 1GB of memory. I checked Google about the importance of bus width, and all the links basically said the same thing: 'The higher the number, the more traffic can potentially be transferred simultaneously.' This was already clear to me, yet there are currently a lot of new cards which are considered better than my current one despite having a lower bus width. To go into more detail about my question, I copied the memory info from the NVidia site:
    GT 640 (DDR3) vs GT640 GDDR5 memory specs:
    Memory Clock: 1.8 Gbps vs 5.0 Gbps
    Standard Memory Config: 2048 MB vs 1024 MB
    Memory Interface: DDR3 vs GDDR5
    Memory Interface Width: 128-bit vs 64-bit
    Memory Bandwidth (GB/sec): 28.5 vs 40.0
    What puzzled me is that the memory bandwidth seems to me the most important part, yet the lower bus width has the higher 'performance'. Is this because the memory interface is GDDR5 and is therefore able to run at a higher memory clock speed (5 Gbps)? If I am to buy a new video card, should I check the bus width? Memory clock? Bandwidth? Amount of memory? My current card has 1GB of memory, so I was searching for a 2GB card, but now I'm not so sure any more whether that is really 'better'.
    My main question: To me it seems that memory performance is made up of the combination of bus width and frequency. Is this true? If yes, why are there so many sites telling me I need to get a card with a high bus width? If no, then what IS important when it comes to memory performance on a video card?
    NOTE: The memory bandwidth is (almost) never displayed on vendor sites. How can I determine which card is better without knowing the bandwidth?
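    A rough back-of-the-envelope check of the relationship the spec table implies (assuming the listed memory clock is the effective per-pin data rate):
    bandwidth ≈ effective memory clock (Gbps per pin) × bus width (bits) / 8
    DDR3 model:  1.8 Gbps × 128 bits / 8 = 28.8 GB/s  (listed: 28.5)
    GDDR5 model: 5.0 Gbps ×  64 bits / 8 = 40.0 GB/s  (listed: 40.0)
    On that reading, neither bus width nor clock alone is the figure of merit; their product is, which is why a narrow-but-fast GDDR5 card can outrun a wider DDR3 one.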

    Read the article

  • Configured MySQL for SSL, but SSL is still not in use!

    - by Sunrays
    I configured SSL for MySQL using the following script:
    #!/bin/bash
    #
    mkdir -p /root/abc/ssl_certs
    cd /root/abc/ssl_certs
    #
    echo "--> 1. Create CA cert, private key"
    openssl genrsa 2048 > ca-key.pem
    echo "--> 2. Create CA cert, certificate"
    openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem
    echo "--> 3. Create Server certificate, key"
    openssl req -newkey rsa:2048 -days 1000 -nodes -keyout server-key.pem > server-req.pem
    echo "--> 4. Create Server certificate, cert"
    openssl x509 -req -in server-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > server-cert.pem
    echo ""
    echo
    echo ""
    echo "--> 5. Create client certificate, key. Use DIFFERENT common name then server!!!!"
    echo ""
    openssl req -newkey rsa:2048 -days 1000 -nodes -keyout client-key.pem > client-req.pem
    echo "6. Create client certificate, cert"
    openssl x509 -req -in client-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > client-cert.pem
    exit 0
    The following files were created: ca-key.pem, ca-cert.pem, server-req.pem, server-key.pem, server-cert.pem, client-req.pem, client-key.pem, client-cert.pem.
    Then I combined server-cert.pem and client-cert.pem into ca.pem (I read in a post to do so).
    I created an SSL user in MySQL:
    GRANT ALL ON *.* to sslsuer@hostname IDENTIFIED BY 'pwd' REQUIRE SSL;
    Next I added the following to my.cnf:
    [mysqld]
    ssl-ca = /root/abc/ssl_certs/ca.pem
    ssl-cert = /root/abc/ssl_certs/server-cert.pem
    ssl-key = /root/abc/ssl_certs/server-key.pem
    After restarting the server, I connected to mysql, but SSL was still not in use :(
    mysql -u ssluser -p
    SSL: Not in use
    Even the have_ssl parameter was still showing disabled :(
    mysql> show variables like '%ssl%';
    +---------------+---------------------------------------------+
    | Variable_name | Value                                       |
    +---------------+---------------------------------------------+
    | have_openssl  | DISABLED                                    |
    | have_ssl      | DISABLED                                    |
    | ssl_ca        | /root/abc/ssl_certs/ca.pem                  |
    | ssl_capath    |                                             |
    | ssl_cert      | /root/abc/ssl_certs/server-cert.pem         |
    | ssl_cipher    |                                             |
    | ssl_key       | /root/abc/ssl_certs/server-key.pem          |
    +---------------+---------------------------------------------+
    Have I missed any step, or what's wrong? Answers detailing any missed steps will be highly appreciated.
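    A hedged sketch of the two things worth checking first (paths reused from the post; the error-log path is an assumption and varies by distro): ssl-ca normally points at the CA certificate itself rather than at a concatenation of the server and client certs, and a have_ssl value of DISABLED (as opposed to NO) usually means the server understood the ssl options but could not load the files -- the reason is written to the error log at startup.
    [mysqld]
    ssl-ca   = /root/abc/ssl_certs/ca-cert.pem
    ssl-cert = /root/abc/ssl_certs/server-cert.pem
    ssl-key  = /root/abc/ssl_certs/server-key.pem
    # after restarting mysqld, look for the reason SSL stayed disabled:
    # tail -n 50 /var/log/mysqld.log   (path is an assumption; check log_error in my.cnf)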

    Read the article

  • Windows DHCP Server - get notification when a non-AD joined device gets an IP address

    - by TheCleaner
    SCENARIO: To simplify this down to its easiest example: I have a Windows 2008 R2 Standard DC with the DHCP server role. It hands out IPs via various IPv4 scopes, no problem there.
    WHAT I'D LIKE: I would like a way to create a notification/eventlog entry/similar whenever a device gets a DHCP address lease and that device IS NOT a domain-joined computer in Active Directory. It doesn't matter to me whether it is custom PowerShell, etc. Bottom line: I'd like a way to know when non-domain devices are on the network without using 802.1X at the moment. I know this won't account for static IP devices. I do have monitoring software that will scan the network and find devices, but it isn't quite this granular in detail.
    RESEARCH DONE/OPTIONS CONSIDERED: I don't see any such possibilities with the built-in logging. Yes, I'm aware of 802.1X and have the ability to implement it long-term at this location, but we are some time away from a project like that, and while that would solve network authentication issues, this is still helpful to me outside of 802.1X goals. I've looked around for some script bits, etc. that might prove useful, but the things I'm finding lead me to believe that my google-fu is failing me at the moment. I believe the logic below is sound (assuming there isn't some existing solution):
    1. Device receives a DHCP address
    2. Event log entry is recorded (event ID 10 in the DHCP audit log should work, since a new lease is what I'd be most interested in, not renewals: http://technet.microsoft.com/en-us/library/dd759178.aspx)
    At this point a script of some kind would probably have to take over for the remaining "STEPS" below.
    3. Somehow query this DHCP log for these event ID 10's (I would love push, but I'm guessing pull is the only recourse here)
    4. Parse the query for the name of the device being assigned the new lease
    5. Query AD for the device's name
    6. If not found in AD, send a notification email
    If anyone has any ideas on how to properly do this, I'd really appreciate it. I'm not looking for a "gimme the codez" but would love to know if there are alternatives to the above list or if I'm not thinking clearly and another method exists for gathering this information. If you have code snippets/PS commands you'd like to share to help accomplish this, all the better.
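    A hedged sketch of the loop described above, run as a scheduled task on the DHCP server (the log path is the Server 2008 R2 default, the mail addresses and SMTP host are placeholders, and the Get-ADComputer call assumes the RSAT Active Directory module is installed):
    Import-Module ActiveDirectory
    # Today's DHCP audit log, e.g. DhcpSrvLog-Mon.log
    $day = (Get-Date).DayOfWeek.ToString().Substring(0,3)
    $log = "$env:windir\System32\dhcp\DhcpSrvLog-$day.log"
    Get-Content $log |
      Where-Object { $_ -match '^10,' } |            # event ID 10 = a new lease
      ForEach-Object {
        $fields = $_ -split ','
        $name = ($fields[5] -split '\.')[0]          # Host Name column, DNS suffix stripped
        if ($name -and -not (Get-ADComputer -Filter "Name -eq '$name'")) {
          Send-MailMessage -To 'admins@example.com' -From 'dhcp@example.com' `
            -SmtpServer 'mail.example.com' `
            -Subject "Non-domain DHCP lease: $name" -Body $_
        }
      }
    Run from Task Scheduler every few minutes this approximates push; tracking which lines have already been processed (or keying off renewals as well) is left as a refinement.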

    Read the article

  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda. I suspect — correct me if I'm wrong — that if I reboot the server, it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc because I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me). Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting? This is on Ubuntu 10.04 LTS. It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it):
    # mdadm --detail /dev/md0
    /dev/md0:
    Version : 00.90.03
    Creation Time : Sun Oct 11 21:07:54 2009
    Raid Level : raid1
    Array Size : 97536 (95.27 MiB 99.88 MB)
    Used Dev Size : 97536 (95.27 MiB 99.88 MB)
    Raid Devices : 2
    Total Devices : 1
    Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Thu Jun 30 09:31:16 2011
    State : clean, degraded
    Active Devices : 1
    Working Devices : 1
    Failed Devices : 0
    Spare Devices : 0
    UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4
    Events : 0.112
    Number Major Minor RaidDevice State
    0  8  17  0  active sync /dev/sdb1
    1  0   0  1  removed
    Thanks.
    Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why (from the md documentation):
    Boot time autodetection of RAID arrays. When md is compiled into the kernel (not as module), partitions of type 0xfd are scanned and automatically assembled into RAID arrays. This autodetection may be suppressed with the kernel parameter "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0 superblock can be autodetected and run at boot time. The kernel parameter "raid=partitionable" (or "raid=part") means that all auto-detected arrays are assembled as partitionable.
    I do have md compiled into the kernel, so I'm rebuilding the array now and will do the reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep noise down.
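    A hedged note and sketch on the on-the-fly part: mdadm identifies array members by the UUID in their superblock, not by the /dev/sdX letter, so the name itself should not need "changing" -- and a stable handle already exists under /dev/disk/by-id/ if you would rather not reference the shifting letter at all (the model/serial part of the path below is illustrative):
    # find the persistent name of the new disk
    ls -l /dev/disk/by-id/ | grep -w sdc
    # after partitioning it to match the old member, add it by that stable path
    mdadm --manage /dev/md0 --add /dev/disk/by-id/scsi-SATA_DRIVEMODEL_SERIAL-part1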

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >