Search Results

Search found 2201 results on 89 pages for 'anti spam'.

Page 85/89 | < Previous Page | 81 82 83 84 85 86 87 88 89  | Next Page >

  • ASP.NET Controls – CommunityServer Captcha ControlAdapter, a practical case

    - by nmgomes
    The ControlAdapter has been available since .NET Framework 2.0 and its main goal is to adapt and customize a control's rendering in order to achieve a specific behavior or layout. This customization is done without changing the base control. A ControlAdapter is commonly used to customize rendering for specific platforms such as mobile. In this particular case the ControlAdapter was used to add a specific behavior to a control. In this post I will use one adapter to add a Captcha to all WeblogPostCommentForm controls within the pontonetpt.com CommunityServer instance.

    The Challenge: a ControlAdapter's complexity is usually tied to the complexity/structure of its base control. This case is precisely one of those, since the base control dynamically loads its content (controls) through several ITemplate instances. Those of you who have already played with ITemplate know that, while it is an excellent option for control composition, it also brings a big issue to the table: “Controls defined within a template are not available for manipulation until they are instantiated inside another control.” While analyzing the WeblogPostCommentForm control I found that it uses the ITemplate technique to compose its layout and, unfortunately, I also found that the template content varies from theme to theme. This could have been a problem, but luckily the WeblogPostCommentForm control's template content always contains a submit button with a well-known ID (at least I can assume that there is a well-known set of IDs). Using this submit button as an anchor it is possible to add the Captcha controls in the correct place. Another important finding was that the WeblogPostCommentForm control inherits from the WrappedFormBase control, which is the base control for all CommunityServer input forms. Knowing this inheritance link, the main goal became the creation of a base ControlAdapter that could be extended and customized to allow adding a Captcha to the post comments form, the contact form and the user creation form. With this mindset, I decided to use the following ControlAdapter base class signature:

    public abstract class WrappedFormBaseCaptchaAdapter<T> : ControlAdapter where T : WrappedFormBase { }

    Great, but there is still much to do…

    Captcha: the Captcha will be assembled from a dynamically generated image with a set of random numbers, a TextBox control where the image's numbers will be entered, and a Validator control to check whether the TextBox numbers match the image numbers. This is a common Captcha implementation; it is not rocket science and doesn't bring any additional problems. The main problem, as noted before, is to find the correct anchor control to ensure correct Captcha control injection.
    The anchor control can vary by target control and by theme.

    Implementation: to support this dynamic scenario I chose the following implementation:

    private List<string> _validAnchorIds = null; protected virtual List<string> ValidAnchorIds { get { if (this._validAnchorIds == null) { this._validAnchorIds = new List<string>(); this._validAnchorIds.Add("btnSubmit"); } return this._validAnchorIds; } } private Control GetAnchorControl(T wrapper) { if (this.ValidAnchorIds == null || this.ValidAnchorIds.Count == 0) { throw new ArgumentException("Cannot be null or empty", "validAnchorNames"); } var q = from anchorId in this.ValidAnchorIds let anchorControl = CSControlUtility.Instance().FindControl(wrapper, anchorId) where anchorControl != null select anchorControl; return q.FirstOrDefault(); }

    Using the ValidAnchorIds property I can now configure a set of valid anchor control IDs. The GetAnchorControl method searches for a valid anchor control within that set of IDs. Some of you may question the use of a LINQ to Objects expression here, but the important thing to notice is the use of the CSControlUtility.Instance().FindControl CommunityServer method: I want to build on top of CommunityServer, not reinvent the wheel. Assuming an anchor control was found, it is now possible to inject the Captcha in the correct place. This is nothing new; we do it all the time when creating server controls or adding dynamic controls:

    protected sealed override void CreateChildControls() { base.CreateChildControls(); if (this.IsCaptchaRequired) { T wrapper = base.Control as T; if (wrapper != null) { Control anchorControl = GetAnchorControl(wrapper); if (anchorControl != null) { Panel phCaptcha = new Panel {CssClass = "CommonFormField", ID = "Captcha"}; int index = anchorControl.Parent.Controls.IndexOf(anchorControl); anchorControl.Parent.Controls.AddAt(index, phCaptcha); CaptchaConfiguration.DefaultProvider.AddCaptchaControls( phCaptcha, GetValidationGroup(wrapper, anchorControl)); } } } }

    Here you can see a new entity in action: a provider. This is a CaptchaProvider class instance and its only goal is to create the Captcha itself and do everything else needed to ensure its correct operation.

    public abstract class CaptchaProvider : ProviderBase { public abstract void AddCaptchaControls(Panel captchaPanel, string validationGroup); }

    You can create your own specific CaptchaProvider class to use different Captcha strategies, including existing Captcha services such as ReCaptcha (an illustrative sketch of such a provider appears at the end of this post). Once the generic ControlAdapter was created, it became extremely easy to create a specific one. Here is the specific ControlAdapter for the WeblogPostCommentForm control:

    public class WeblogPostCommentFormCaptchaAdapter : WrappedFormBaseCaptchaAdapter<WrappedFormBase> { #region Overriden Methods protected override List<string> ValidAnchorIds { get { List<string> validAnchorNames = base.ValidAnchorIds; validAnchorNames.Add("CommentSubmit"); return validAnchorNames; } } protected override string DefaultValidationGroup { get { return "CreateCommentForm"; } } #endregion Overriden Methods }

    Configuration: this is the magic step. Without changing the original pages, and keeping the application's original assemblies untouched, we are going to add a new behavior to the CommunityServer application.
    To glue everything together you must follow these steps. First, add the following configuration to the default.browser file:

    <?xml version='1.0' encoding='utf-8'?> <browsers> <browser refID="Default"> <controlAdapters> <!-- Adapter for the WeblogPostCommentForm control in order to add the Captcha and prevent SPAM comments --> <adapter controlType="CommunityServer.Blogs.Controls.WeblogPostCommentForm" adapterType="NunoGomes.CommunityServer.Components.WeblogPostCommentFormCaptchaAdapter, NunoGomes.CommunityServer" /> </controlAdapters> </browser> </browsers>

    Then add the following configuration to the web.config file:

    <configuration> <configSections> <!-- New section for Captcha providers configuration --> <section name="communityServer.Captcha" type="NunoGomes.CommunityServer.Captcha.Configuration.CaptchaSection" /> </configSections> <!-- Configuring a simple Captcha provider --> <communityServer.Captcha defaultProvider="simpleCaptcha"> <providers> <add name="simpleCaptcha" type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProvider, NunoGomes.CommunityServer" imageUrl="~/captcha.ashx" enabled="true" passPhrase="_YourPassPhrase_" saltValue="_YourSaltValue_" hashAlgorithm="SHA1" passwordIterations="3" keySize="256" initVector="_YourInitVectorWithExactly_16_Bytes_" /> </providers> </communityServer.Captcha> <system.web> <httpHandlers> <!-- The Captcha Image handler used by the simple Captcha provider --> <add verb="GET" path="captcha.ashx" type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProviderImageHandler, NunoGomes.CommunityServer" /> </httpHandlers> </system.web> <system.webServer> <handlers accessPolicy="Read, Write, Script, Execute"> <!-- The Captcha Image handler used by the simple Captcha provider --> <add verb="GET" name="captcha" path="captcha.ashx" type="NunoGomes.CommunityServer.Captcha.Providers.SimpleCaptchaProviderImageHandler, NunoGomes.CommunityServer" /> </handlers> </system.webServer> </configuration>

    Conclusion: building a ControlAdapter can be complex, but the reward is its ability to let us modify an application's rendering and/or behavior through configuration changes alone. You can see this ControlAdapter in action here and here (anonymous required). A complete solution is available in the "CommunityServer Extensions" CodePlex project.
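    For illustration only, here is one way a custom CaptchaProvider could look. This sketch is not the SimpleCaptchaProvider referenced in the configuration above; the control IDs, the session key and the validation wiring are assumptions made for the example:

    public class NaiveImageCaptchaProvider : CaptchaProvider
    {
        // Illustrative sketch only: IDs, session key and handler URL are assumptions.
        public override void AddCaptchaControls(Panel captchaPanel, string validationGroup)
        {
            // Challenge image rendered by a handler (the configuration above maps ~/captcha.ashx).
            captchaPanel.Controls.Add(new System.Web.UI.WebControls.Image
            {
                ID = "CaptchaImage",
                ImageUrl = "~/captcha.ashx"
            });

            // TextBox where the user types the numbers shown in the image.
            TextBox answerBox = new TextBox { ID = "CaptchaAnswer", ValidationGroup = validationGroup };
            captchaPanel.Controls.Add(answerBox);

            // Validator comparing the typed value with the expected challenge value.
            CustomValidator validator = new CustomValidator
            {
                ID = "CaptchaValidator",
                ControlToValidate = answerBox.ID,
                ValidationGroup = validationGroup,
                ErrorMessage = "The numbers you entered do not match the image."
            };
            validator.ServerValidate += delegate(object sender, ServerValidateEventArgs args)
            {
                // The expected value is assumed to be stored in session by the image handler.
                string expected = HttpContext.Current.Session["CaptchaExpected"] as string;
                args.IsValid = !string.IsNullOrEmpty(expected) && args.Value == expected;
            };
            captchaPanel.Controls.Add(validator);
        }
    }

    A provider targeting an external service such as ReCaptcha would follow the same shape: emit the service's markup into captchaPanel and verify the user's response inside ServerValidate.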

    Read the article

  • The hidden cost of interrupting knowledge workers

    - by Piet
    The November issue of pragpub has an interesting article on interruptions. The article is written by Brian Tarbox, who also mentions it on his blog. I like the subtitle: 'Simple Strategies for Avoiding Dumping Your Mental Stack'. Brian talks about the effective cost of interrupting a 'knowledge worker', often with trivial questions or distractions. In the eyes of the interruptor, the interruption only costs the time the interrupted had to listen to the question and give an answer. However, depending on what the interrupted was doing at the time, getting fully immersed in their task again might take up to 15-20 minutes. Enough interruptions might even cause a knowledge worker to mentally call it a day. According to this article, interruptions can consume about 28% of a knowledge worker's time, translating into a $588 billion loss for US companies each year. Looking for a new developer to join your team? Ever thought about optimizing your team's environment and the way they work instead?

    Making non-knowledge workers aware: You can't. Well, I haven't succeeded yet. And believe me: I've tried. When you've got a simple way to really increase your productivity ('give me 2 hours of uninterrupted time a day') it wouldn't be right not to tell your boss or team leader about it. The problem is: only productive knowledge workers seem to understand this. People who don't fall into this category just seem to think you're joking, being arrogant or anti-social when you tell them the interruptions can really have an impact on your productivity. Also, knowledge workers often work in a very concentrated mental state which is described here as: "It is the same mindfulness as ecstatic lovemaking, the merging of two into a fluidly harmonious one. The hallmark of flow is a feeling of spontaneous joy, even rapture, while performing a task." Yes, coding can be addictive, and if you're interrupting a programmer at the wrong moment, you're effectively bringing down a junkie from his high in just a few seconds. This can result in seemingly arrogant, almost aggressive reactions. How to make people aware of the production cost they're inflicting: I've often pondered that question myself. The article suggests that solutions based on that question never seem to work. To be honest: I've never even been able to find a half-decent solution for this question. People who are not in this situation just don't understand the issue, no matter how you try to explain it. Fun (?) thing I've noticed: programmers or IT people in general who don't get this are often the kind of people who just don't get anything done.

    Interrupt handling (interruption management?) IRL

    Have non-urgent questions handled in a non-interruptive way: It helps a bit to educate people into using non-interruptive ways to ask questions: "duh, I have no idea, but I'm a bit busy here now, could you put it in an email so I don't forget?". Eventually, a considerable number of people will skip interrupting you and just send an email right away. Some stubborn-headed people however will continue to just interrupt you, saying "you're 10 meters from my desk, why can't we just talk?". Just remember to disable your email notifications; it can be hard to resist opening your email client when you know a new email just arrived.

    Use Do Not Disturb signals: When working in a group of programmers, often the unofficial sign that you can only be interrupted for something important is to put on headphones. And when the environment is quiet enough, often people aren't even listening to music. Otherwise music can help to block the indirect distractions (someone else talking on the phone or tapping their feet). You might get a "they're all just surfing and listening to music" reaction from outsiders though. Peopleware talks about a team where the no-interruption sign was placing a shawl on the desk, if I remember correctly (I am unable to locate my copy of this really excellent must-read book). If you have all standardized on the same IM tool, maybe that tool has a 'do not disturb' setting. Also some phone systems have a 'DND' (do not disturb) setting.

    Hide: Brian offers a number of good suggestions, some obvious, like: hide away somewhere they can't find you. Not sure how long it'll be till someone thinks you're just taking a nap somewhere though. Also, this often isn't possible or your boss might not understand it. And if you really get caught taking a nap, make sure to explain that you were powernapping.

    Counter-act interruptions: Another suggestion he offers is, when you're being interrupted, to just hold up your hand, blocking the interruption, and at least giving you time to finish your sentence or your block/line of code. This last suggestion works more as a way to make it obvious to the interruptor that they really are interrupting your work, and to offload some of the cost onto the interruptor. In practice, this can also help you cool down a bit so you don't start saying nasty things to the interruptor. Unfortunately I've sometimes been confronted with people who just ignore this signal and keep talking, as if they're sure that whatever they've got to say is really worth listening to and without a doubt more important than anything you might be doing. This behaviour usually leaves me speechless (not good when someone just asked a question). I've noticed that these people are usually also the first to complain when being interrupted themselves. They're generally not very well liked as colleagues, so try not to imitate their behaviour.

    TDD as a way to minimize recovery time: I don't like Test Driven Development, mainly for one reason: it interrupts flow. At least, that's what it does for me, but maybe I'm just not used to TDD yet. BUT it has a positive effect on me when I have to work in an interruptive environment and can't really get into the 'flow' (also supposedly called 'the zone' by software developers, although I've never heard that first-hand): TDD helps me to concentrate on the tasks at hand and helps me to get back to work after an interruption. I feel that when using TDD, I can get by without the need to be totally 'in' the project and I can be reasonably productive without obtaining 'flow'. Do you have a suggestion on how to make people aware of the concept of 'flow' and the cost of interruptions (without looking like an arrogant ass or a weirdo)?

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It's no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residency in your data store? Some might argue that it's the responsibility of the person sending you the data. While that may be true, in practice that will rarely hold up. It doesn't matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data. What constitutes bad data? There are quite a few valid answers, for example: invalid date values, inappropriate characters, wrong data, or values that exceed a pre-set threshold. While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain. Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for leveraging expressor to get your data into tip-top shape.

    Tip 1: Build reusable data objects with embedded cleansing rules. One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor's concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I've defined a constraint on zip code. I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type, for example. The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied.

    Tip 2: Unlock the power of regular expressions in Semantic Types. Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint. A regular expression is used to identify patterns within data. The patterns could be something as simple as a date format or something much more complex such as a street address. For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, and separated by a period. So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be. How can I account for this using regular expressions? A very simple regular expression that would look for any four groups of one to three digits separated by periods would be: ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ Alternatively, the following would be the exact check for truly valid IP addresses as defined above: ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$ (a small snippet exercising this pattern appears at the end of this post). In expressor, we would enter this regular expression as a constraint like this: Here we select the corrective action to be 'Escalate', meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.

    Tip 3: Email pattern expressions that might come in handy. In the example schema that I am using, there's a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam.
While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses: This one is short and sweet:  \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/) This one is more specific about which characters are allowed:  ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26 ) Tip 4:     Reject “dirty data” for analysis or further processing Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations.  To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator.  Then attach a Write File operator to the reject port of the Read File operator as such: Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File.  The key difference would be that the schema needs to be derived from the upstream operator as shown below: Once configured, expressor will output rejected records to the file you specified.  In addition to the rejected records, expressor also captures some diagnostic information that will be helpful towards identifying why the record was rejected.  This makes diagnosing errors much easier! Tip 5:    Use a Filter or Transform after the initial cleansing to finish the job Sometimes you may want to predicate the data cleansing on a more complex set of conditions.  For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes.  Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user defined algorithm or transformation.  It also supports the use of conditional logic and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach – expressor operators can be combined in many different ways to achieve the desired results.  For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table. I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here.  Their Studio ETL tool is absolutely free and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
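    As a quick way to sanity-check the stricter pattern from Tip 2 outside of expressor, the following small C# snippet (not part of expressor Studio; it simply exercises the same regular expression with .NET's Regex class) reproduces the two examples from the article:

    using System;
    using System.Text.RegularExpressions;

    class IpPatternCheck
    {
        static void Main()
        {
            // The strict IP-address pattern from Tip 2: each octet limited to 0-255.
            const string ipPattern =
                @"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$";

            Console.WriteLine(Regex.IsMatch("192.168.23.123", ipPattern)); // True  - every octet is within 0-255
            Console.WriteLine(Regex.IsMatch("888.777.0.123", ipPattern));  // False - 888 and 777 are out of range
        }
    }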

    Read the article

  • Kernel Mode Rootkit

    - by Pajarito
    On the other 3 computers in my family, I believe that we have a kernel-mode rootkit for Windows. It appears that the same rootkit is on all of them. We think. We changed all the important passwords from my computer, running Linux right now. On all of the infected computers is Symantec Endpoint Protection, because it's free from the university where my mom and dad work. In my opinion Symantec is a piece of crap, seeing as it didn't even manage to delete the tracking cookies it found when I tried it on my own computer.

    The computers and their set-ups:
    Computer A: Vista Business; Symantec antivirus. Runs as admin, no password. IE8. No other security software other than what comes with Windows. IE8 security settings at the default.
    Computer B: XP Home Premium; Symantec antivirus. Runs as normal user, no password, admin account with weak password, Spybot, uses IE8 with default settings, sometimes Firefox.
    Computer C: XP Home Premium; Symantec antivirus. Runs as normal user, no password, admin account with weak password, uses IE8 with default settings, no other security programs except what came with Windows.

    This is what's happening, cut and pasted from my dad's forum post:

    -- When I scanned my laptop (Dell XPS M1330 with Windows Vista Small Business), Symantec Endpoint Protection hangs for a while, perhaps 10 seconds or so, on some of the following files: 9129837.exe, hide_evr2.sys, VirusRemoval.vbs, NewVirusRemoval.vbs, dll.dll, alsmt.ext, and _epnt.sys. It does this if I run a scan that I set up to run on a new thumbnail drive and it does this even if the thumbnail is not plugged in. It doesn't seem to do this if I scan only the C: drive. I've checked for problems with Symantec Endpoint Protection and also with Microsoft Security Essentials and Malwarebytes Anti-Malware. They found nothing and I can't find anything by searching for hidden files. Next I tried Microsoft's RootkitRevealer. It (RootkitRevealer) finds 279660 (or so) discrepancies and the interface is so glitchy after that I can't really figure out what is going on. The screen is squirrely. RootkitRevealer pulls up many files in the folder \programdata\applicationdata and there are numerous appended \applicationdata on the end of that as well. --

    As you can see, what we did was install MSE and MBAM and scan with both of them. Nothing but a tracking cookie. Then I took over and ran rootkitrevealer.exe from Microsoft from a flash drive. It found a bunch of discrepancies, but only about 20 or so were security related, the rest being files that you just couldn't see from Windows Explorer. I couldn't see whether or not the files listed above, the ones that the scan was hanging on, were in the list. The other thing is, I have no idea what to do about the things the scan comes up with. Then we checked the other computers and they do the same thing when you scan with Symantec. The people at the university seem to think that dad might not have a virus, but 2 of the computers slowed down noticeably AND IE8 started acting all funny. None of my family is very computer oriented, and 2 of the possible causes for the rootkit are:
    - My dad bought a new flash drive, which shipped with a data security executable on it
    - My dad has to download lots of articles for his work
    Those are the only things that stand out, but it could have been anything. We are currently backing up our data, and I'll post again after trying IceSword 1.22. I just looked at my dad's forum topic, and someone recommended GMER. I'll try that too.

    Read the article

  • Time Tracking on an Agile Team

    - by Stephen.Walther
    What's the best way to handle time-tracking on an Agile team? Your gut reaction to this question might be to resist any type of time-tracking at all. After all, one of the principles of the Agile Manifesto is "Individuals and interactions over processes and tools". Forcing the developers on your team to track the amount of time that they devote to completing stories or tasks might seem like useless bureaucratic red tape: an impediment to getting real work done. I completely understand this reaction. I've been required to use time-tracking software in the past to account for each hour of my workday. It made me feel like Fred Flintstone punching in at the quarry mine and not like a professional.

    Why You Really Do Need Time-Tracking
    There are, however, legitimate reasons to track time spent on stories even when you are a member of an Agile team. First, if you are working with an outside client, you might need to track the number of hours spent on different stories for the purposes of billing. There might be no way to avoid time-tracking if you want to get paid. Second, the Product Owner needs to know when the work on a story has gone over the original time estimated for the story. The Product Owner is concerned with Return On Investment. If the team has gone massively overtime on a story, then the Product Owner has a legitimate reason to halt work on the story and reconsider the story's business value. Finally, you might want to track how much time your team spends on different types of stories or tasks. For example, if your team is spending 75% of their time doing testing then you might need to bring in more testers. Or, if 10% of your team's time is expended performing a software build at the end of each iteration then it is time to consider better ways of automating the build process.

    Time-Tracking in SonicAgile
    For these reasons, we added time-tracking as a feature to SonicAgile, which is our free Agile Project Management tool. We were heavily influenced by Jeff Sutherland (one of the founders of Scrum) in the way that we implemented time-tracking (see his article http://scrum.jeffsutherland.com/2007/03/time-tracking-is-anti-scrum-what-do-you.html). In SonicAgile, time-tracking is disabled by default. If you want to use this feature then the project owner must enable time-tracking in Project Settings. You can choose to estimate using either days or hours. If you are estimating at the level of stories then it makes more sense to choose days. Otherwise, if you are estimating at the level of tasks then it makes more sense to use hours. After you enable time-tracking you can assign three estimates to a story:
    Original Estimate – This is the estimate that you enter when you first create a story. You don't change this estimate.
    Time Spent – This is the amount of time that you have already devoted to the story. You update the time spent on each story during your daily standup meeting.
    Time Left – This is the amount of time remaining to complete the story. Again, you update the time left during your daily standup meeting.
    So when you first create a story, you enter an original estimate that becomes the time left. During each daily standup meeting, you update the time spent and time left for each story on the Kanban. If you had perfect predictive power, then the original estimate would always be the same as the sum of the time spent and the time left. For example, if you predict that a story will take 5 days to complete, then on day 3 the story should have 3 days spent and 2 days left.
    Unfortunately, never in the history of mankind has anyone accurately predicted the exact amount of time that it takes to complete a story. For this reason, SonicAgile does not update the time spent and time left automatically. Each day, during the daily standup, your team should update the time spent and time left for each story. For example, the following table shows the history of the time estimates for a story that was originally estimated to take 3 days but, eventually, takes 5 days to complete:

    Day     Original Estimate   Time Spent   Time Left
    Day 1   3 days              0 days       3 days
    Day 2   3 days              1 day        2 days
    Day 3   3 days              2 days       2 days
    Day 4   3 days              3 days       2 days
    Day 5   3 days              4 days       0 days

    In the table above, everything goes as predicted until you reach day 3. On day 3, the team realizes that the work will require an additional two days. The situation does not improve on day 4. All of a sudden, on day 5, all of the remaining work gets done. Real work often follows this pattern. There are long periods when nothing gets done, punctuated by occasional and unpredictable bursts of progress. We designed SonicAgile to make it as easy as possible to track the time spent and time left on a story.

    Detecting when a Story Goes Over the Original Estimate
    Sometimes, stories take much longer than originally estimated. There's a surprise. For example, you discover that a new software component is incompatible with existing software components. Or, you discover that you have to go through a month-long certification process to finish a story. In those cases, the Product Owner has a legitimate reason to halt work on a story and re-evaluate the business value of the story. For example, if the Product Owner discovers that a story will require weeks to implement instead of days, then the story might not be worth the expense. SonicAgile displays a warning on both the Backlog and the Kanban when the time spent on a story goes over the original estimate. An icon of a clock is displayed.

    Time-Tracking and Tasks
    Another optional feature of SonicAgile is tasks. If you enable Tasks in Project Settings then you can break stories into one or more tasks. You can perform time-tracking at the level of a story or at the level of a task. If you don't break a story into tasks then you can enter the time left and time spent for the story. As soon as you break a story into tasks, you can no longer enter the time left and time spent at the level of the story. Instead, the time left and time spent for a story is rolled up from its tasks (a small illustrative sketch of this rule follows at the end of this post). On the Kanban, you can see how the time left and time spent for each task gets rolled up into each story. The progress bar for the story is rolled up from the progress bars for each task. The original estimate is never rolled up – even when you break a story into tasks. A story's original estimate is entered separately from the original estimates of each of the story's tasks.

    Summary
    Not every Agile team can avoid time-tracking. You might be forced to track time to get paid, to detect when you are spending too much time on a particular story, or to track the amount of time that you are devoting to different types of tasks. We designed time-tracking in SonicAgile to require the least amount of work to track the information that you need. Time-tracking is an optional feature. If you enable time-tracking then you can track the original estimate, time left, and time spent for each story and task. You can use time-tracking with SonicAgile for free. Register at http://SonicAgile.com.
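    As a rough illustration of the roll-up rule described under "Time-Tracking and Tasks" (this is not SonicAgile's code; the class and property names are made up for the example), the behavior could be modeled like this:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Made-up types mirroring the rule: time spent and time left are summed from
    // tasks, while the original estimate is never rolled up.
    public class WorkTask
    {
        public double TimeSpent { get; set; }
        public double TimeLeft { get; set; }
    }

    public class Story
    {
        public double OriginalEstimate { get; set; }            // entered once, never rolled up
        public List<WorkTask> Tasks { get; } = new List<WorkTask>();

        private double storyTimeSpent;
        private double storyTimeLeft;

        // With no tasks, the story's own values are used; with tasks, the values roll up.
        public double TimeSpent
        {
            get { return Tasks.Count == 0 ? storyTimeSpent : Tasks.Sum(t => t.TimeSpent); }
        }

        public double TimeLeft
        {
            get { return Tasks.Count == 0 ? storyTimeLeft : Tasks.Sum(t => t.TimeLeft); }
        }

        // Daily standup update, only valid while the story has no tasks.
        public void Update(double spent, double left)
        {
            if (Tasks.Count > 0)
                throw new InvalidOperationException("Once a story has tasks, update the tasks instead.");
            storyTimeSpent = spent;
            storyTimeLeft = left;
        }
    }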

    Read the article

  • How To Remove People and Objects From Photographs In Photoshop

    - by Eric Z Goodnight
    You might think that it's a complicated process to remove objects from photographs. But really Photoshop makes it quite simple, even when removing all traces of a person from digital photographs. Read on to see just how easy it is. Photoshop was originally created to be an image editing program, and it excels at it. With hardly any Photoshop experience, any beginner can begin removing objects or people from their photos. Have some friends that photobombed an otherwise great pic? Tell them to say their farewells, because here's how to get rid of them with Photoshop!

    Tools for Removing Objects
    Removing an object is not really "magical" work. Your goal is basically to cover up the information you don't want in an image with information you do want. In this sample image, we want to remove the cigar-smoking man and leave the geisha. Here are a couple of the tools that can be useful when attempting this kind of task.
    Clone Stamp and Pattern Stamp Tool: Samples parts of your image from your background, and allows you to paint into your image with your mouse or stylus.
    Eraser and Brush Tools: Paint flat colors and shapes, and erase cloned layers of image information. Basic, down-and-dirty photo editing tools.
    Pen, Quick Selection, Lasso, and Crop tools: Select, isolate, and remove parts of your image with these selection tools. All useful in their own way. Some, like the Pen tool, are nightmarishly tough on beginners.

    Remove a Person with the Clone Stamp Tool (Video)
    The video above uses the Clone Stamp tool to sample and paint with the background texture. It's a simple tool to use, although it can be confusing, possibly counter-intuitive. Here are some pointers, in addition to the video above. Press the shortcut key (S) to choose the Clone Stamp tool from the Tools Panel. Always create a copy of your background layer before doing heavy edits by right-clicking on the background in your Layers Panel and selecting "Duplicate." Hold Alt with the Clone Stamp tool selected, and click anywhere in your image to sample that area. When you're sampling an area, your cursor is "Aligned" with your sample area. When you paint, your sample area moves. You can turn the "Aligned" setting off by clicking the checkbox in the Options Panel at the top of your screen if you want. Change your brush size and hardness as shown in the video by right-clicking in your image. Use your lasso to copy and paste pieces of your image in order to cover up any parts that seem appropriate.

    Photoshop Magic with the "Content-Aware Fill"
    One of the hallmark features of CS5 is the "Content-Aware Fill." Content-Aware Fill can be an excellent shortcut to removing objects and even people in Photoshop, but it is somewhat limited, and can get confused. Here's a basic rundown on how it works. Select an object using your Lasso tool, shortcut key (L). The Lasso works fine as this selection can be rough. Navigate to Edit > Fill, and select "Content-Aware," as illustrated above, from the pull-down menu. It's surprisingly simple. After some processing, Photoshop has done the work of removing the object for you. It takes a few moments, and it is not perfect, so be prepared to touch it up with some copy-paste, or some Clone Stamp action.

    Content-Aware Fill Has Its Limits
    Keep in mind that the Content-Aware Fill is meant to be used with other techniques in mind. It doesn't always perform perfectly, but can give you a great starting point. Take this image for instance.
    It is actually plausible to hide this figure and make this image look like he was never there at all. With a selection made with the Lasso tool, navigate to Edit > Fill and select "Content-Aware" again. The result is surprisingly good, but as you can see, worthy of some touch-up. With a result like this one, you'll have to get your hands dirty with copy-paste to create believable lines in the background. With many photographs, Content-Aware Fill will simply get confused and give you results you won't be happy with.

    Additional Touch-Up for Bad Background Textures with the Pattern Stamp Tool
    For the perfectionist, cleaning up the lumpy-looking textures that the Clone Stamp can leave is fairly simple using the Pattern Stamp Tool. Sample a piece of your image with your Marquee Tool, shortcut key (M). Navigate to Edit > Define Pattern to create a new pattern from your selection. Click OK to continue. Click and hold down on the Clone Stamp tool in your Tools Panel until you can select the Pattern Stamp Tool. Pick your new pattern from the Options at the top of your screen, in the Options Panel. Then simply right-click in your image in order to pick as soft a brush as possible to paint with. Paint into your image until your background is as smooth as you want it to be, making your painted-out object more and more invisible. If you get lines from your repeated texture, experiment with turning the "Aligned" option on and off and paint over them. In addition to this, simple use of the Crop Tool, shortcut (C), can recompose an image, making it look as if it never had another object in it at all. Combine these techniques to find a method that works best for your images. Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article. Image Credits: Geisha Kyoto Gion by Todd Laracuenta via Wikipedia, used under Creative Commons. Moai Rano raraku by Aurbina, in Public Domain. Chris Young visits Wrigley by TonyTheTiger, via Wikipedia, used under Creative Commons.

    Read the article

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD Sites, housed on a single server. We need to move to a redundant, fault-tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an edge system since we have a Barracuda box already that will handle in/out spam/virus filtering. On the back end I'm planning on 2 mailbox servers which will be Dell 2950s with 16GB RAM, 2 either dual-core or quad-core CPUs and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows:

    Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR?

    I'm trying to figure out disk layouts and I'm unsure of whether to use all local disk or some local and some SAN, via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM.
    Option 1: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - Logs; 2 drives, RAID 1 - Mail stores.
    Option 2: 2 drives, RAID 1 - OS and logs; 4 drives, RAID 5 - Mail stores and scratch space for eseutil.
    Option 3: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - Logs; 2 drives, RAID 0 - scratch space; ~300GB iSCSI volume for mail stores.
    Option 4: 2 drives, RAID 1 - OS; 4 drives, RAID 5 - scratch space; ~300GB iSCSI volume for mail stores; ~300GB iSCSI volume for logs.

    I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache and, I'm thinking, an older architecture. Am I better off with more cores and cache while sacrificing clock speed?

    I am planning on adding the new E2K7 cluster to the E2K3 server, then moving each mailbox over, all at once, then removing the old server. This seems more complicated than simply getting rid of the 2003 server and then adding the 2007 cluster and restoring the mailboxes using PowerControls or exmerge. The migration option lets me do this on my time, where a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server, or can't I? I think the answer is no and the migration is my only real option if I want to prebuild.

    I also need to migrate about 30GB of Public Folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server to host just the PFs?

    Lastly, if I have a mix of Outlook 2000, 2003 and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover, we'll be at like 90% 2007, but we will have some older stuff around. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients; does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully meshed, stable WAN. Thank you all very much for the assistance in advance and I look forward to discussion of these points! Regards...Michael

    Read the article

  • Windows 7 BSOD - ntoskrnl?

    - by Ken Mason
    2 new HP Pavilion notebooks with 7 Home Premium pre-loaded with Norton. My first act was to use the Norton Removal Tool and load ZoneAlarm free and AVG Free. Frequent random BSOD's ever since...I found my way into Debug and have had various reports regarding ntoskrnl, depending on the status of symbols. It's been many years since I played with (DOS 3.x) debug, so this has been a considerable fumble. Excerpts follow and any insights would be greatly appreciated, as I am not a developer: ADDITIONAL_DEBUG_TEXT: Use '!findthebuild' command to search for the target build information. If the build information is available, run '!findthebuild -s ; .reload' to set symbol path and load symbols. MODULE_NAME: nt FAULTING_MODULE: fffff8000305d000 nt DEBUG_FLR_IMAGE_TIMESTAMP: 4b88cfeb BUGCHECK_STR: 0x7f_8 CUSTOMER_CRASH_COUNT: 1 DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT CURRENT_IRQL: 0 LAST_CONTROL_TRANSFER: from fffff800030ccb69 to fffff800030cd600 STACK_TEXT: fffff80004d6fd28 fffff800030ccb69 : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt+0x70600 fffff80004d6fd30 000000000000007f : 0000000000000008 0000000080050033 00000000000006f8 fffff80003095e58 : nt+0x6fb69 fffff80004d6fd38 0000000000000008 : 0000000080050033 00000000000006f8 fffff80003095e58 0000000000000000 : 0x7f fffff80004d6fd40 0000000080050033 : 00000000000006f8 fffff80003095e58 0000000000000000 0000000000000000 : 0x8 fffff80004d6fd48 00000000000006f8 : fffff80003095e58 0000000000000000 0000000000000000 0000000000000000 : 0x80050033 fffff80004d6fd50 fffff80003095e58 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : 0x6f8 fffff80004d6fd58 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt+0x38e58 STACK_COMMAND: kb FOLLOWUP_IP: nt+70600 fffff800`030cd600 48894c2408 mov qword ptr [rsp+8],rcx SYMBOL_STACK_INDEX: 0 SYMBOL_NAME: nt+70600 FOLLOWUP_NAME: MachineOwner IMAGE_NAME: ntoskrnl.exe BUCKET_ID: WRONG_SYMBOLS Followup: MachineOwner ...................................................................... 0: kd !lmi nt Loaded Module Info: [nt] Module: ntkrnlmp Base Address: fffff8000305d000 Image Name: ntkrnlmp.exe Machine Type: 34404 (X64) Time Stamp: 4b88cfeb Sat Feb 27 00:55:23 2010 Size: 5dc000 CheckSum: 545094 Characteristics: 22 perf Debug Data Dirs: Type Size VA Pointer CODEVIEW 25, 19c65c, 19bc5c RSDS - GUID: {7E9A3CAB-6268-45DE-8E10-816E3080A3B7} Age: 2, Pdb: ntkrnlmp.pdb CLSID 4, 19c658, 19bc58 [Data not mapped] Image Type: FILE - Image read successfully from debugger. ntkrnlmp.exe Symbol Type: PDB - Symbols loaded successfully from symbol server. d:\debugsymbols\ntkrnlmp.pdb\7E9A3CAB626845DE8E10816E3080A3B72\ntkrnlmp.pdb Load Report: public symbols , not source indexed d:\debugsymbols\ntkrnlmp.pdb\7E9A3CAB626845DE8E10816E3080A3B72\ntkrnlmp.pdb 0: kd !analyze -v * Bugcheck Analysis * * UNEXPECTED_KERNEL_MODE_TRAP (7f) This means a trap occurred in kernel mode, and it's a trap of a kind that the kernel isn't allowed to have/catch (bound trap) or that is always instant death (double fault). The first number in the bugcheck params is the number of the trap (8 = double fault, etc) Consult an Intel x86 family manual to learn more about what these traps are. Here is a portion of those codes: If kv shows a taskGate use .tss on the part before the colon, then kv. 
Else if kv shows a trapframe use .trap on that value Else .trap on the appropriate frame will show where the trap was taken (on x86, this will be the ebp that goes with the procedure KiTrap) Endif kb will then show the corrected stack. Arguments: Arg1: 0000000000000008, EXCEPTION_DOUBLE_FAULT Arg2: 0000000080050033 Arg3: 00000000000006f8 Arg4: fffff80003095e58 Debugging Details: BUGCHECK_STR: 0x7f_8 CUSTOMER_CRASH_COUNT: 1 DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT PROCESS_NAME: System CURRENT_IRQL: 2 LAST_CONTROL_TRANSFER: from fffff800030ccb69 to fffff800030cd600 STACK_TEXT: fffff80004d6fd28 fffff800030ccb69 : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt!KeBugCheckEx fffff80004d6fd30 fffff800030cb032 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiBugCheckDispatch+0x69 fffff80004d6fe70 fffff80003095e58 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDoubleFaultAbort+0xb2 fffff880089efc60 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!SeAccessCheckFromState+0x58 STACK_COMMAND: kb FOLLOWUP_IP: nt!KiDoubleFaultAbort+b2 fffff800`030cb032 90 nop SYMBOL_STACK_INDEX: 2 SYMBOL_NAME: nt!KiDoubleFaultAbort+b2 FOLLOWUP_NAME: MachineOwner MODULE_NAME: nt IMAGE_NAME: ntkrnlmp.exe DEBUG_FLR_IMAGE_TIMESTAMP: 4b88cfeb FAILURE_BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b2 BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b2 Followup: MachineOwner I tried running Rootkit Revealer but I don't think it works on x64 systems. Similarly Blacklight seems to have aged off. I'm running Sophos Anti-Rootkit now. So far so good...

    Read the article

  • Use BGInfo to Build a Database of System Information of Your Network Computers

    - by Sysadmin Geek
    One of the more popular tools of the Sysinternals suite among system administrators is BGInfo, which tacks real-time system information to your desktop wallpaper when you first log in. For obvious reasons, having information such as system memory, available hard drive space and system up time (among others) right in front of you is very convenient when you are managing several systems. A little-known feature of this handy utility is the ability to have system information automatically saved to a SQL database or some other data file. With a few minutes of setup work you can easily configure BGInfo to record system information for all your network computers in a centralized storage location. You can then use this data to monitor or report on these systems however you see fit (a small query sketch appears at the end of this post).

    BGInfo Setup
    If you are familiar with BGInfo, you can skip this section. However, if you have never used this tool, it takes just a few minutes to set up in order to capture the data you are looking for. When you first open BGInfo, a timer will be counting down in the upper right corner. Click the countdown button to keep the interface up so we can edit the settings. Now edit the information you want to capture from the available fields on the right. Since all the output will be redirected to a central location, don't worry about configuring the layout or formatting.

    Configuring the Storage Database
    BGInfo supports the ability to store information in several database formats: SQL Server Database, Access Database, Excel and Text File. To configure this option, open File > Database.

    Using a Text File
    The simplest, and perhaps most practical, option is to store the BGInfo data in a comma-separated text file. This format allows the file to be opened in Excel or imported into a database. To use a text file or any other file system type (Excel or MS Access), simply provide the UNC path to the respective file. The account running the task to write to this file will need read/write access to both the share and NTFS file permissions. When using a text file, the only option is to have BGInfo create a new entry each time the capture process is run, which will add a new line to the respective CSV text file.

    Using a SQL Database
    If you prefer to have the data dropped straight into a SQL Server database, BGInfo supports this as well. This requires a bit of additional configuration, but overall it is very easy. The first step is to create a database where the information will be stored. Additionally, you will want to create a user account to fill data into this table (and this table only). For your convenience, this script creates a new database and user account (run this as Administrator on your SQL Server machine):

    @SET Server=%ComputerName%.
    @SET Database=BGInfo
    @SET UserName=BGInfo
    @SET Password=password
    SQLCMD -S "%Server%" -E -Q "Create Database [%Database%]"
    SQLCMD -S "%Server%" -E -Q "Create Login [%UserName%] With Password=N'%Password%', DEFAULT_DATABASE=[%Database%], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF"
    SQLCMD -S "%Server%" -E -d "%Database%" -Q "Create User [%UserName%] For Login [%UserName%]"
    SQLCMD -S "%Server%" -E -d "%Database%" -Q "EXEC sp_addrolemember N'db_owner', N'%UserName%'"

    Note the SQL user account must have 'db_owner' permissions on the database in order for BGInfo to work correctly. This is why you should have a SQL user account specifically for this database. Next, configure BGInfo to connect to this database by clicking on the SQL button. Fill out the connection properties according to your database settings.
    Select whether to keep only one entry per computer or to keep a history of each system. The data will then be dropped directly into a table named "BGInfoTable" in the respective database.

    Configure User Desktop Options
    While the primary function of BGInfo is to alter the user's desktop by adding system info as part of the wallpaper, for our use here we want to leave the user's wallpaper alone so this process runs without altering any of the user's settings. Click the Desktops button. Configure the Wallpaper modifications to not alter anything.

    Preparing the Deployment
    Now we are all set for deploying the configuration to the individual machines so we can start capturing the system data. If you have not done so already, click the Apply button to create the first entry in your data repository. If all is configured correctly, you should be able to open your data file or database and see the entry for the respective machine. Now click the File > Save As menu option and save the configuration as "BGInfoCapture.bgi".

    Deploying to Client Machines
    Deployment to the respective client machines is pretty straightforward. No installation is required as you just need to copy BGInfo.exe and BGInfoCapture.bgi to each machine and place them in the same directory. Once in place, just run the command:

    BGInfo.exe BGInfoCapture.bgi /Timer:0 /Silent /NoLicPrompt

    Of course, you probably want the capture process to run on a schedule. This command creates a Scheduled Task to run the capture process at 8 AM every morning and assumes you copied the required files to the root of your C drive:

    SCHTASKS /Create /SC DAILY /ST 08:00 /TN "System Info" /TR "C:\BGInfo.exe C:\BGInfoCapture.bgi /Timer:0 /Silent /NoLicPrompt"

    Adjust as needed, but the end result is the scheduled task command should look something like this:

    Download BGInfo from Sysinternals
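    For the SQL Server option, the captured rows can be pulled back out of BGInfoTable with ordinary data access code. The snippet below is only a rough sketch: the column set depends on which fields you selected in BGInfo, so it simply dumps every column, and the server name and credentials are placeholders matching the sample script above.

    using System;
    using System.Data.SqlClient;

    class BGInfoReport
    {
        static void Main()
        {
            // Placeholder connection string -- adjust server, database and credentials as needed.
            const string connectionString =
                "Server=.;Database=BGInfo;User Id=BGInfo;Password=password;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT * FROM BGInfoTable", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // Dump every captured row; the columns mirror whatever fields you configured in BGInfo.
                    while (reader.Read())
                    {
                        for (int i = 0; i < reader.FieldCount; i++)
                            Console.Write(reader.GetName(i) + "=" + reader.GetValue(i) + "  ");
                        Console.WriteLine();
                    }
                }
            }
        }
    }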

    Read the article

  • What do these messages in the Qmail maillog indicate?

    - by Griffo
    There seems to be an endless supply of messages in the Qmail maillog for a single address. Can anyone shed some light on why this might be and whether it is a problem? To me it looks like either spam or some sort of unhandled problem. It strikes me as unusual that the 'from=' field is blank. This is on a VPS using Plesk in case that's important. Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23593]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: [email protected] EDIT Here's a sample of one of the emails: Received: (qmail 5603 invoked for bounce); 29 Jun 2011 07:46:31 +0100 Date: 29 Jun 2011 07:46:31 +0100 From: [email protected] To: [email protected] Subject: failure notice Hi. This is the qmail-send program at vps-1001108-595.cp.blacknight.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 200.147.36.13 does not like recipient. Remote host said: 450 4.7.1 Client host rejected: cannot find your hostname, [78.153.208.195] Giving up on 200.147.36.13. I'm not going to try again; this message has been in the queue too long. --- Below this line is a copy of the message. Return-Path: <[email protected]> Received: (qmail 15585 invoked by uid 48); 22 Jun 2011 07:38:26 +0100 Date: 22 Jun 2011 07:38:26 +0100 Message-ID: <[email protected]> To: [email protected] Subject: Cadastre-se e Concorra ? um Carro! MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 From: Cielo Fidelidade <[email protected]> <!DOCTYPE HTML> <html> ... <body text removed> <html> If I understand this correctly, this is saying that an email sent by my server, from address [email protected], could not be delivered. However, [email protected] is not a valid email address on my server, so how can email be sent from this address on my server? I have tested whether my server is acting as an open relay, and it isn't. So how else could this be happening? I am getting thousands of these every day. What can I do to prevent it?

    Read the article

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation Earlier this evening I gave an answer to a question on StackOverflow. The question: Editing of an existing object should be done in repository layer or in service? For example if I have a User that has debt. I want to change his debt. Should I do it in UserRepository or in service for example BuyingService by getting an object, editing it and saving it ? My answer: You should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation: class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } A comment I received: Business logic should really be in a service. Not in a model. What does the internet say? So, this got me searching since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit Of Work pattern but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model: There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it. To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is more considered as a façade that delegates work to the underlying model, than an actual work-intensive layer. Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program. Which is reinforced here: Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers. And here: The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. 
It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client. This is a serious contrast to other resources that talk about the Service Layer: The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. Or the second answer to a question I've already linked: At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer. "Solution"? Following the guidelines in this answer, I came up with the following approach that uses a Service Layer: class UserController : Controller { private UserService _userService; public UserController(UserService userService){ _userService = userService; } public ActionResult MakeHimPay(string username, int amount) { _userService.MakeHimPay(username, amount); return RedirectToAction("ShowUserOverview"); } public ActionResult ShowUserOverview() { return View(); } } class UserService { private IUserRepository _userRepository; public UserService(IUserRepository userRepository) { _userRepository = userRepository; } public void MakeHimPay(username, amount) { _userRepository.GetUserByName(username).makePayment(amount); } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } Conclusion All together not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However this doesn't look like it had anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least: following the original comment Business logic should really be in a service. Not in a model. Is this correct? How would I introduce my business logic in a service instead of the model?

    Read the article

  • Refactoring Part 1 : Intuitive Investments

    - by Wes McClure
    Fear, it’s what turns maintaining applications into a nightmare.  Technology moves on, teams move on, someone is left to operate the application, what was green is now perceived brown.  Eventually the business will evolve and changes will need to be made.  The approach to those changes often dictates the long term viability of the application.  Fear of change, lack of passion and a lack of interest in understanding the domain often leads to a paranoia to do anything that doesn’t involve duct tape and bailing twine.  Don’t get me wrong, those have a place in the short term viability of a project but they don’t have a place in the long term.  Add to it “us versus them” in regards to the original team and those that maintain it, internal politics and other factors and you have a recipe for disaster.  This results in code that quickly becomes unmanageable.  Even the most clever of designs will eventually become sub optimal and debt will amount that exponentially makes changes difficult.  This is where refactoring comes in, and it’s something I’m very passionate about.  Refactoring is about improving the process whereby we make change, it’s an exponential investment in the process of change. Without it we will incur exponential complexity that halts productivity. Investments, especially in the long term, require intuition and reflection.  How can we tackle new development effectively via evolving the original design and paying off debt that has been incurred? The longer we wait to ask and answer this question, the more it will cost us.  Small requests don’t warrant big changes, but realizing when changes now will pay off in the long term, and especially in the short term, is valuable. I have done my fair share of maintaining applications and continuously refactoring as needed, but recently I’ve begun work on a project that hasn’t had much debt, if any, paid down in years.  This is the first in a series of blog posts to try to capture the process which is largely driven by intuition of smaller refactorings from other projects. Signs that refactoring could help: Testability How can decreasing test time not pay dividends? One of the first things I found was that a very important piece often takes 30+ minutes to test.  I can only imagine how much time this has cost historically, but more importantly the time it might cost in the coming weeks: I estimate at least 10-20 hours per person!  This is simply unacceptable for almost any situation.  As it turns out, about 6 hours of working with this part of the application and I was able to cut the time down to under 30 seconds!  In less than the lost time of one week, I was able to fix the problem for all future weeks! If we can’t test fast then we can’t change fast, nor with confidence. Code is used by end users and it’s also used by developers, consider your own needs in terms of the code base.  Adding logic to enable/disable features during testing can help decouple parts of an application and lead to massive improvements.  What exactly is so wrong about test code in real code?  Often, these become features for operators and sometimes end users.  If you cannot run an integration test within a test runner in your IDE, it’s time to refactor. Readability Are variables named meaningfully via a ubiquitous language? Is the code segmented functionally or behaviorally so as to minimize the complexity of any one area? 
Are aspects properly segmented to avoid confusion (security, logging, transactions, translations, dependency management etc) Is the code declarative (what) or imperative (how)?  What matters, not how.  LINQ is a great abstraction of the what, not how, of collection manipulation.  The Reactive framework is a great example of the what, not how, of managing streams of data. Are constants abstracted and named, or are they just inline? Do people constantly bitch about the code/design? If the code is hard to understand, it will be hard to change with confidence.  It’s a large undertaking if the original designers didn’t pay much attention to readability and as such will never be done to “completion.”  Make sure not to go over board, instead use this as you change an application, not in lieu of changes (like with testability). Complexity Simplicity will never be achieved, it’s highly subjective.  That said, a lot of code can be significantly simplified, tidy it up as you go.  Refactoring will often converge upon a simplification step after enough time, keep an eye out for this. Understandability In the process of changing code, one often gains a better understanding of it.  Refactoring code is a good way to learn how it works.  However, it’s usually best in combination with other reasons, in effect killing two birds with one stone.  Often this is done when readability is poor, in which case understandability is usually poor as well.  In the large undertaking we are making with this legacy application, we will be replacing it.  Therefore, understanding all of its features is important and this refactoring technique will come in very handy. Unused code How can deleting things not help? This is a freebie in refactoring, it’s very easy to detect with modern tools, especially in statically typed languages.  We have VCS for a reason, if in doubt, delete it out (ok that was cheesy)! If you don’t know where to start when refactoring, this is an excellent starting point! Duplication Do not pray and sacrifice to the anti-duplication gods, there are excellent examples where consolidated code is a horrible idea, usually with divergent domains.  That said, mediocre developers live by copy/paste.  Other times features converge and aren’t combined.  Tools for finding similar code are great in the example of copy/paste problems.  Knowledge of the domain helps identify convergent concepts that often lead to convergent solutions and will give intuition for where to look for conceptual repetition. 80/20 and the Boy Scouts It’s often said that 80% of the time 20% of the application is used most.  These tend to be the parts that are changed.  There are also parts of the code where 80% of the time is spent changing 20% (probably for all the refactoring smells above).  I focus on these areas any time I make a change and follow the philosophy of the Boy Scout in cleaning up more than I messed up.  If I spend 2 hours changing an application, in the 20%, I’ll always spend at least 15 minutes cleaning it or nearby areas. This gives a huge productivity edge on developers that don’t. Ironically after a short period of time the 20% shrinks enough that we don’t have to spend 80% of our time there and can move on to other areas.   Refactoring is highly subjective, never attempt to refactor to completion!  Learn to be comfortable with leaving one part of the application in a better state than others.  It’s an evolution, not a revolution.  
These are some simple areas to look into when making changes and can help get one started in the process.  I’ve often found that refactoring is a convergent process towards simplicity that sometimes spans a few hours but often can lead to massive simplifications over the timespan of weeks and months of regular development.
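    As a tiny illustration of the "declarative (what) versus imperative (how)" point above — the same idea LINQ and the Reactive framework express in .NET — here is a hedged sketch in Python, used purely because it keeps the example short; the data and function names are invented for illustration.

        # Imperative: spells out *how* to build the result step by step.
        def overdue_invoice_ids_imperative(invoices):
            result = []
            for invoice in invoices:
                if invoice["days_overdue"] > 30:
                    result.append(invoice["id"])
            result.sort()
            return result

        # Declarative: states *what* the result is; the mechanics are left
        # to the comprehension and the built-in sorted().
        def overdue_invoice_ids_declarative(invoices):
            return sorted(inv["id"] for inv in invoices if inv["days_overdue"] > 30)

        invoices = [
            {"id": 7, "days_overdue": 45},
            {"id": 3, "days_overdue": 10},
            {"id": 5, "days_overdue": 90},
        ]
        assert overdue_invoice_ids_imperative(invoices) \
            == overdue_invoice_ids_declarative(invoices) == [5, 7]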

    Read the article

  • Probation is Over: PASS Board Year 1, Q2

    - by Denise McInerney
    Though it's not always official every job begins with a probation period. You start out with lots of questions and every day you find out how much more you have to learn. Usually after a few months you discover that you can actually answer some questions and have at least an idea of what you are supposed to be doing. Now at the end of my second quarter on the "job" of serving on the PASS Board I have reached that point. My probation period is over. The last three months were busy for the entire Board with the budget process, an in-person meeting and moving forward with PASS Global Growth plans. I had also set a specific goal for myself for my 2nd quarter: to see the Board to adopt a Code of Conduct for the PASS Summit. Code of Conduct When I ran for the Board I included my desire to see PASS establish a code of conduct in my campaign platform.  I was motivated to do this for a few reasons. Other technical conferences have had incidents of harassment. Most of these did not have a policy in place prior to having a problem, though several conference organizers have since adopted anti-harassment policies or codes of conduct. I felt it would be in PASS' interest to establish a policy so we would be prepared should there be an incident.   "This is Community" Adopting a code of conduct would reinforce our community orientation and send a message about the positive character of the Summit. PASS is a leader among technical organizations for its promotion and support of women. Adopting a code of conduct would further demonstrate our leadership in this area. After researching similar polices from other organizations I published a first draft in April. I solicited feedback from the Board, HQ staff and some PASS members. Incorporating that feedback I presented version 4 at the May Board meeting, where we had a good discussion. You can read the meeting minutes for details. I incorporated points from  the Board discussion as well as feedback from a legal review to produce a final version which has been submitted to the Board. It will be discussed at the Board meeting July 12. You can read the full text at the end of this post. Virtual Chapters In the first quarter we started ramping up marketing support for the Virtual Chapters. Since then each edition of the Connector has highlighted a different VC to help get out the message about the variety of eductional opporutnities that are offered. These VC profiles will continue in the coming months. I was very pleased to welcome the new DBA Fundamentals VC which is geared toward new DBAs, people who are considering entering the field and those transitioning from a different IT role. Thanks to the contributions of Erin Stellato, Michelle Nalliah and Karla Landrum we published a "Virtual Chapter Guidebook". This document includes great advice on how to build and promote a VC. It's also a reference for how things work, from budgets to webinar hosting. I think this document will be extremely valuable to all our VC leaders and am grateful to those who put it together. Board Meeting/SQL Rally The Board met in May in Dallas. Among the items discussed were Global Growth, the budget, future events and the upcoming elections. We covered a lot of ground in two days and I will again refer you to the meeting minutes for details. The meeting schedule allowed us to participate in the SQL Rally networking events and one full day of the conference. I enjoyed having the opportunity to meet and talk with many PASS members. 
And my hat is off to the SQL Rally organizers who put on an outstanding event. Global Growth PASS has undertaken a major intitiative to reach and engage SQL Server professionals around the world. This Global Growth plan is ambitious and will have a significant impact on the strategic direction of the organization. We have been reaching out to the community for feedback, including hosting Twitter chats and live Town Hall meetings. I co-hosted two of these events and appreciated hearing the different perspectives of the people who participated If you have not done so I encourage you to read about the Global Growth vision and proposed governance changes  and submit your feedback. FY13 Budget July 1 is the beginning of PASS' fiscal year, which makes the end of June the deadline for approving a budget. Each director submits a budget for his or her portfolio. For the Virtual Chapter portfolio I focused on how we can allocate resources to grow the VCs. Budgeting is a give-and-take process, and while I didn't get everything I asked for I'm pleased the FY13 budget includes a significant increase in financial support for the Virtual Chapters. Many people put a lot of work into the budget, but no two people deserve credit more than VP of Finance Douglas McDowell and Accounting Manager Sandy Cherry. Thanks to both of them for getting us across the goal line on time. SQL Saturday I attended SQL Saturdays in Orange Co. CA and Phoenix. It's always inspiring to see the enthusiasm in the community for learning and networking. These events are successful due to the hard work of many volunteers. Thanks to the organizers in both cities for all your efforts. Next Up This quarter we'll be gearing up plans for the VCs at the Summit and exploring ways the VCs can best support PASS' Global Growth work. I'll also be wrapping up work on the Code of Conduct and attending a Board meeting in September. And I will be at SQL Saturday #144 in Sacramento later this month. Here is the language of the Code of Conduct I have submitted to the Board for consideration: PASS Code of Conduct The PASS Summit provides database professionals from a variety of backgrounds with an opportunity to connect, share and learn.  We value the strong sense of community that characterizes this event and we seek to foster an inclusive, professional atmosphere. We are dedicated to providing a harassment-free conference experience for everyone, regardless of gender, race, sexual orientation, disability, physical appearance, religion or any other protected classification.  Everyone at the Summit is expected to follow the Code of Conduct. This includes but is not limited to: PASS Staff, Exhibitors, Speakers, Attendees and anyone affiliated with the event. Participants are expected to follow the Code of Conduct at all Summit events, including PASS-sponsored social events. Participant behavior Harassment includes, but is not limited to, offensive verbal comments related to gender, race, sexual orientation, disability, physical appearance, religion, or any other protected classification.  Intimidation, threats, stalking, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact and unwelcome attention will also be considered harassment. Similarly, sexual, racist, derogatory, threatening or other inappropriate language and imagery are not appropriate for any conference venue, including sessions.  
Recourse If a participant engages in any conduct that is prohibited under this Code of Conduct, the conference organizers may take any action they deem appropriate, including warning the offender or expelling the offender from the conference. No refunds will be granted to attendees expelled from the Summit due to violations of the Code of Conduct. If you are being harassed, witness harassment, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified by their “Headquarters/Staff” shirts and are trained to handle the situation appropriately. A Code of Conduct Committee (CCC) made up of the Executive Manager and three members of the Board of Directors designated by the President will be authorized to take action in response to an incident or behavior that violates the Code of Conduct.

    Read the article

  • LDAP object class violation: attribute ou not allowed in suffix?

    - by Paramaeleon
    I am about to set up an LDAP directory. It is used as a tool to communicate user permissions from a web application to WebDav file system access, e.g. adding a user to the web platform shall allow login to the file system with the same credentials. There are no other usages intended. Following this German tutorial, which encourages the use of the attributes c, o, ou etc. over dc, I configured the following suffix and root:

        suffix "ou=webtool,o=myOrg,c=de"
        rootdn "cn=ldapadmin,ou=webtool,o=myOrg,c=de"

    The server starts and I can connect to it with LDAP Admin, which reports "LDAP error: Object lacks". Well, there aren't any objects yet. I now want to create the root and admin elements from the shell. I created an init.ldif file:

        dn: ou=webtool,o=myOrg,c=de
        objectclass: dcObject
        objectclass: organization
        dc: webtool
        o: webtool

        dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
        objectclass: organizationalRole
        cn: ldapadmin

    Trying to load the file runs into an error, telling me that ou is not allowed:

        server:~ # ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif
        Enter LDAP Password:
        adding new entry "ou=webtool,o=myOrg,c=de"
        ldap_add: Object class violation (65)
                additional info: attribute 'ou' not allowed

    I am not using ou anywhere except in the suffix, so the question: isn't it allowed here? What is allowed here? Here is my answer. I am not allowed to post it as an answer for 8 hours, so don't mind that it is part of the question by now. I will move it outside some day, if I don't forget to do so. There are numerous dependencies for the creation of elements, and error messages are rather confusing if you don't know the concept. The objectclass isn't necessarily dcObject for the database's root node, as one might guess from reading several tutorials. Instead, it must correspond to the object's type: here, for a name starting with ou=, it must be organizationalUnit. I found this piece of information in these tables [Link removed due to restriction: Oops! Your edit couldn't be submitted because: We're sorry, but as a spam prevention mechanism, new users can only post a maximum of two hyperlinks. Earn more than 10 reputation to post more hyperlinks. Link is below]. Further on, the object class dictates which properties must and can be added in the record. Here, organizationalUnit must have an ou: entry and must have neither a dc: nor an o: entry. The healthy init.ldif file looks like this:

        dn: ou=webtool,o=myOrg,c=de
        objectclass: organizationalUnit
        ou: LDAP server for my webtool

        dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
        objectclass: organizationalRole
        cn: ldapadmin

    Note: The page also states: "While many objectClasses show no MUST attributes you must (ouch) follow any hierarchy [...] to determine if this is the really case." I thought that would mean my root record would have to provide the must fields for c= and o= (c: and o:, respectively), but this isn't the case. Link in answer is (1): http://www.zytrax.com/books/ldap/ape/ "Appendix E: LDAP - Object Classes and Attributes"
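    For completeness, the same two entries can also be created programmatically instead of via ldapadd. The sketch below uses the third-party ldap3 Python library purely as an illustration; the server URI, bind password and attribute values are placeholders, and it assumes the corrected organizationalUnit object class discussed above.

        # Illustrative sketch using the ldap3 library (pip install ldap3).
        # Server address, credentials and DNs mirror the question and are
        # assumptions -- adjust them for your own directory.
        from ldap3 import Server, Connection

        server = Server("ldap://localhost:389")
        conn = Connection(server,
                          user="cn=ldapadmin,ou=webtool,o=myOrg,c=de",
                          password="secret",          # placeholder rootpw
                          auto_bind=True)

        # Root node: an organizationalUnit, because the RDN starts with ou=
        conn.add("ou=webtool,o=myOrg,c=de",
                 "organizationalUnit",
                 {"ou": "webtool"})
        print(conn.result["description"])

        # Admin role below it
        conn.add("cn=ldapadmin,ou=webtool,o=myOrg,c=de",
                 "organizationalRole",
                 {"cn": "ldapadmin"})
        print(conn.result["description"])

        conn.unbind()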

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspxA couple of months ago I made the decision to build myself a new computer. The intended use is gaming and for using the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven’t done before – build a PC from the ground up. I’ve been using PCs for more than two decades. I’ve replaced a component hear and there, but for the last 10 years or so I’ve only used laptops. Therefore, this article will be written from the perspective of someone familiar with PCs, but completely new at building. I’m not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself. Component List Research There was a lot of research necessary, because building a PC is completely new to me, and I haven’t kept up with what’s out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and personal choice. Don’t neglect the second one, because if you’re doing this for fun you want to get what you want. In my case, I focused on three things: performance, longevity, and aesthetics. The performance aspect is important for gaming and Photoshop. This will drive what components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don’t want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my next consideration. I could have just built a box, but it wouldn’t have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets and that played into at least one purchase for this build. I used PC Part Picker to put together my component list. I found it invaluable during the process and I’d recommend it to everyone. One caveat is that I wouldn’t trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research. The rest of it isn’t really sexy. I started out with what appealed to me and then I made changes and additions as I dived deep into researching each component and interaction I could find. The resources I used are innumerable. I used reviews, product descriptions, forum posts (praises and problems), et al. to assist me. I also asked friends into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress the value of extra sets of eyeballs and first hand experiences. Some of the resources I used: PC Part Picker Tom’s Hardware bit-tech Reddit Purchase PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I’d want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using in alphabetical order. Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn’t have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I’ve found items over $100 ship quickly). 
Ultimately though, price wasn’t always the best and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a “shipping soon” state. I purchased the mouse locally. Best Buy is not my favorite place to do business. There’s a lot of history with poor, uninterested sales representatives and they used to have a lot of bad anti-consumer policies. That’s a lot better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them including mouse (locally) and headphones. NCIX is a company that I’ve never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn’t do a lot of research on the company, because their policy on you buying insurance for your orders turned me off. That policy makes it clear to me that the company finds me responsible for the shipment once it leaves their dock. That’s not right, and may run afoul of state laws. Regardless they shipped my CPU cooler quickly and I didn’t have a problem. NewEgg.com is a well known company. I had never done business with them, but I’m glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange only return policies on components. To their credit those policies are listed in the cart underneath each item. The visibility tells me that they’re not playing any shenanigans and made me comfortable dealing with that risk. The vast majority of what I ordered came from them. Coming Next In the next part I’ll tackle my build experience.

    Read the article

  • Email forwarding from my domain to gmail - FAIL

    - by pitosalas
    [There are numerous similar questions on ServerFault but I couldn't find one that was exactly on point] Background: I use Gmail for my email client. My email is [email protected]. However the email that people communicate to me with is [email protected]. I run the server that hosts www.example.com and other domains, at ServerBeach. Up to yesterday, I had SENDMAIL painlessly just forward emails to [email protected] to [email protected] and everything was fine, for several years in fact. Suddenly my email stopped working - that is, my gmail account stopped receiving emails via the forward from my server. Looking into it I found a bunch of emails sitting on my server with content like this: ... while talking to gmail-smtp-in.l.google.com.: RCPT To: <<< 450-4.2.1 The user you are trying to contact is receiving mail at a rate that <<< 450-4.2.1 prevents additional messages from being delivered. Please resend your <<< 450-4.2.1 message at a later time. If the user is able to receive mail at that <<< 450-4.2.1 time, your message will be delivered. For more information, please <<< 450 4.2.1 visit xxxxxx://mail.google.com/support/bin/answer.py?answer=6592 u15si37138086qco.76 [email protected]... Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that DATA <<< 550-5.7.1 [64.34.168.137 1] Our system has detected an unusual rate of <<< 550-5.7.1 unsolicited mail originating from your IP address. To protect our <<< 550-5.7.1 users from spam, mail sent from your IP address has been blocked. <<< 550-5.7.1 Please visit xxxxx://www.google.com/mail/help/bulk_mail.html to review <<< 550 5.7.1 our Bulk Email Senders Guidelines. u15si37138086qco.76 554 5.0.0 Service unavailable ... while talking to alt1.gmail-smtp-in.l.google.com.: From what I've been researching, I think somehow someone has/is hijacking my domain name or something and this somehow has caused gmail's servers to notice and cut me off. But I don't know really what's going on nor do I see whatever emails might be involved. I've read stuff on zoneedit.com that sounds like they might have a solution in their service for what I am trying to do. I also read a lot about admining DNS and SENDMAIL and tried various things, but nothing works. Can you tell from my description what is going on that caused GMail's server to stop accepting email from my server and is there a way to stop it? What is the 'correct' way to configure things so that emails to [email protected] behave as if they were sent to [email protected]? Thanks so much!
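    One contributing factor worth checking: plain forwarding keeps the original senders' envelope addresses, so Gmail evaluates your server's IP against SPF policies published for domains that never authorized it — on top of any spam that simply gets relayed along with legitimate mail. A quick way to see this effect is to look up a sender domain's SPF record and check whether your forwarding IP could pass it. The sketch below uses the dnspython package and is illustrative only; the domain and IP are placeholders, not values from this question.

        # Illustrative sketch (pip install dnspython): fetch a domain's SPF TXT
        # record and report whether a given forwarding IP is literally listed.
        from typing import Optional
        import dns.resolver

        def spf_record(domain: str) -> Optional[str]:
            try:
                answers = dns.resolver.resolve(domain, "TXT")
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return None
            for rdata in answers:
                txt = b"".join(rdata.strings).decode("ascii", "replace")
                if txt.lower().startswith("v=spf1"):
                    return txt
            return None

        if __name__ == "__main__":
            sender_domain = "example.org"    # a domain whose mail you forward
            forwarding_ip = "203.0.113.10"   # your server's public IP
            record = spf_record(sender_domain)
            print(record or "no SPF record published")
            if record and forwarding_ip not in record:
                print("Forwarding IP is not listed; forwarded mail from this domain "
                      "may fail SPF (include:/redirect= would still need expanding).")

    Schemes such as SRS (Sender Rewriting Scheme) exist precisely to rewrite the envelope sender on forwards, but they do not help if the forwarded stream itself is mostly spam, which is what Gmail's 550 response is complaining about.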

    Read the article

  • VPS 512 MB RAM with WordPressMU comes to consumes lots of memory

    - by CAPitalZ
    I have googled for days and gathered all optimization suggestions and tried. My sites are not getting any high hits. May be like 100 hits per day [all my sites combined]. Here are my specs I have 512 MB RAM VPS with burstable 1024 MB. Centos 5 32-bit & cPanel/WHM Apache 2.2 MySQL 5.0 PHP 5.3.2 Here is my Configs I have 2 WordPressMU production sites, and 1 test site my.cnf # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/lib/mysql/mysql.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking skip-bdb skip-innodb key_buffer = 16M max_allowed_packet = 1M table_cache = 64 sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M #CAPitalZ thread_cache_size=8 thread_concurrency=4 #query_cache_type=1 #query_cache_limit=1M query_cache_size=16M concurrent_insert=2 low_priority_updates=1 max_connections=50 tmp_table_size=16M max_heap_table_size=16M join_buffer_size=1M interactive_timeout=25 wait_timeout=1000 #connect_timout=10 not able to restart mysql max_connect_errors=10 # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (via the "enable-named-pipe" option) will render mysqld useless! # skip-networking # Disable Federated by default skip-federated # Replication Master Server (default) # binary logging is required for replication log-bin=mysql-bin # required unique id between 1 and 2^32 - 1 # defaults to 1 if master-host is not set # but will not function as a master if omitted server-id = 1 [mysqld_safe] open_files_limit=8192 [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [myisamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [mysqlhotcopy] interactive-timeout httpd.conf I have unselected many modules and recompiled using EasyApache in WHM. Only have the following modules built Deflate Expires Fileprotect Imagemap MPM Prefork Version [default] EAccelerator for PHP Bcmath Calendar CurlSSL [I'm using Curl. But I don't have any https sites] Expat GD [for image cropping] Gettext Imap Mbregex [default] Mbstring [need both Mbregex and Mbstring for utf-8] Mysql of the system MySQL "Improved" extension. Sockets TTF (FreeType) [I'm using custom font] Zlib Under Global Configuration I only have FollowSymLinks enabled I Have TraceEnable, ServerSignature, FileETag OFF ServerTokens ProductOnly DirectoryIndex Priority has index.php as the first one I have removed Clamd [Clam Anti-virus] SpamAssasin is Off Under Tweak Settings Default catch-all/default address behavior for new accounts. 
This is set to "fail" All stats programs turned off I have eAccelerator installed and checked in phpinfo and its working [Pre VirtualHost Include under WHM] Timeout 20 KeepAlive On MaxKeepAliveRequests 200 KeepAliveTimeout 3 MinSpareServers 1 MaxSpareServers 3 StartServers 1 ServerLimit 50 MaxClients 50 MaxRequestsPerChild 4000 ExtendedStatus Off #ServerType standalone this throws error HostnameLookups Off <Directory "/"> AllowOverride None </Directory> My sites will take ages to load and WHM/CPanel will not even load. adadaa.com/ http://adadaa.net/ kadais.ca/ My average memory consumption is like 1000 MB! [yes always bursting] The process that consumes most CPU and also most memory is mysql But I also get like 15 httpd processes [when its bursting] I already got warning from cpuwatchcheck saying "While processing, the cpu has been maxed out for more than a 6 hour period. The current load/uptime line on the server at the time of this email is 07:00:37 up 11:30, 0 users, load average: 14.64, 16.79, 20.07" I don't know, I have tried switching these config values many different times, but nothing seems to work. Please show some light... Thanks
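    Before tuning further, it helps to know exactly where the resident memory is going: with MaxClients 50 and a typical WordPress/mod_php httpd child in the tens of megabytes, Apache alone can overrun a 512 MB guarantee long before MySQL does. A small script that groups resident set size by process name makes the split obvious. This is a generic diagnostic sketch that reads /proc directly (so it reports RSS as the kernel sees it inside the container) and is not specific to cPanel.

        # Generic memory breakdown by process name, summing VmRSS from /proc.
        # Run as root so every process's status file is readable.
        import os
        from collections import Counter

        rss_by_name = Counter()

        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/status") as f:
                    name, rss_kb = None, 0
                    for line in f:
                        if line.startswith("Name:"):
                            name = line.split()[1]
                        elif line.startswith("VmRSS:"):
                            rss_kb = int(line.split()[1])
                if name:
                    rss_by_name[name] += rss_kb
            except (FileNotFoundError, ProcessLookupError, PermissionError):
                continue  # process exited or is not readable; skip it

        for name, rss_kb in rss_by_name.most_common(10):
            print(f"{rss_kb / 1024:8.1f} MB  {name}")

    Keep in mind that summing RSS double-counts pages shared between Apache children, so the httpd total is an upper bound; even so, it is usually enough to show whether MaxClients or the MySQL buffers need to come down first.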

    Read the article

  • iptables 1.4 and passive FTP on custom port

    - by Cracky
    after the upgrade from debian squeeze to wheezy I've got a problem with passive FTP connection. I could narrow it to be iptables related, as I could connect via FTP w/o problems after adding my IP to the iptables ACCEPT rule. Before the upgrade I was able just to do modprobe nf_conntract_ftp ports=21332 and adding iptables -A THRU -p tcp --dport 21332 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT now..it doesn't help anymore. The INPUT rule is being triggered as I can see in the counter, but the directory listing is the last thing it does. Setting up a passive-port range is the last thing I want to do, I dislike open ports. I also tried the trick with helper mod by adding following rule before the actual rule for 21332 iptables -A THRU -p tcp -i eth0 --dport 21332 -m state --state NEW -m helper --helper ftp-21332 -j ACCEPT but it doesn't help and is even not being triggered according to counter. The rule in the next line (w/o helper) is being triggered.. here some info: # iptables --version iptables v1.4.14 # lsmod |grep nf_ nf_nat_ftp 12460 0 nf_nat 18242 1 nf_nat_ftp nf_conntrack_ftp 12605 1 nf_nat_ftp nf_conntrack_ipv4 14078 32 nf_nat nf_defrag_ipv4 12483 1 nf_conntrack_ipv4 nf_conntrack 52720 7 xt_state,nf_conntrack_ipv4,xt_conntrack,nf_conntrack_ftp,nf_nat,nf_nat_ftp,xt_helper # uname -a Linux loki 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux # iptables-save # Generated by iptables-save v1.4.14 on Sun Jun 30 03:54:28 2013 *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :BLACKLIST - [0:0] :LOGDROP - [0:0] :SPAM - [0:0] :THRU - [0:0] :WEB - [0:0] :fail2ban-dovecot-pop3imap - [0:0] :fail2ban-pureftpd - [0:0] :fail2ban-ssh - [0:0] -A INPUT -p tcp -m multiport --dports 110,995,143,993 -j fail2ban-dovecot-pop3imap -A INPUT -p tcp -m multiport --dports 21,21332 -j fail2ban-pureftpd -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh -A INPUT -p tcp -m multiport --dports 110,995,143,993 -j fail2ban-dovecot-pop3imap -A INPUT -i lo -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN FIN,SYN -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,RST FIN,RST -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,ACK FIN -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags ACK,URG URG -j DROP -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A INPUT -j BLACKLIST -A INPUT -j THRU -A INPUT -j LOGDROP -A OUTPUT -j ACCEPT -A OUTPUT -s 93.223.38.223/32 -j ACCEPT -A BLACKLIST -s 38.113.165.0/24 -j LOGDROP -A BLACKLIST -s 202.177.216.0/24 -j LOGDROP -A BLACKLIST -s 130.117.190.0/24 -j LOGDROP -A BLACKLIST -s 117.79.92.0/24 -j LOGDROP -A BLACKLIST -s 72.47.228.0/24 -j LOGDROP -A BLACKLIST -s 195.200.70.0/24 -j LOGDROP -A BLACKLIST -s 195.200.71.0/24 -j LOGDROP -A LOGDROP -m limit --limit 5/sec -j LOG --log-prefix drop_packet_ --log-level 7 -A LOGDROP -p tcp -m tcp --dport 25 -m limit --limit 2/sec -j LOG --log-prefix spam_blacklist --log-level 7 -A LOGDROP -p tcp -m tcp --dport 80 -m limit --limit 2/sec -j LOG --log-prefix web_blacklist --log-level 7 -A LOGDROP -p tcp -m tcp --dport 22 -m limit --limit 2/sec -j LOG --log-prefix ssh_blacklist --log-level 7 -A LOGDROP -j REJECT --reject-with icmp-host-prohibited -A THRU -p icmp -m limit --limit 1/sec -m icmp --icmp-type 8 -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 25 -j ACCEPT -A THRU -i eth0 -p udp -m udp --dport 53 -j ACCEPT -A THRU -i eth0 
-p tcp -m tcp --dport 80 -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 110 -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 143 -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 465 -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 585 -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 993 -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 995 -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 2008 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 10011 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 21332 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A THRU -i eth0 -p tcp -m tcp --dport 30033 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A fail2ban-dovecot-pop3imap -j RETURN -A fail2ban-dovecot-pop3imap -j RETURN -A fail2ban-pureftpd -j RETURN -A fail2ban-pureftpd -j RETURN -A fail2ban-ssh -j RETURN -A fail2ban-ssh -j RETURN COMMIT # Completed on Sun Jun 30 03:54:28 2013 So, as I said, I have no problems with connecting when adding my IP to go through..but that's not a solution as noone except me can connect anymore~ If someone got an idea what the problem is, please help me! Thanks Cracky

    Read the article

  • Hoster not fulfilling contract: how to get money back?

    - by plua
    For several years, we have as a small webdesign company rented a dedicated server at a large hosting provider. They had several support levels. When we signed up for this, we had very limited in-house knowledge about server maintenance, and were very worried about the security of our server. We therefore took one of the more expensive support packages. An important aspect in this were these claims: [PROVIDER] verifies the availability of the latest security updates and sends you a notification to see if you are interested to have them installed [PROVIDER] verifies the availability of the latest supported software updates and sends you a notification to see if you are interested to have them installed These items were clearly stated on their website as being part of the advantage of this package.; With not enough knowledge about installing and updating such software on a Linux server, we decided to go for this package. We paid a premium of $50 per month over the maintenance package that is next in line ($100 vs $50). Over the years, we have paid several thousand dollars for this service. Then came the moment that I learned more and more about server management. And I found out step by step that our server was horrendously outdated! We had an OS that was hardly updated, our anti-virus was not working because it needed certain more recent packages on the OS, and in general there were a whole bunch of security vulnerabilities and fixes that were lacking. Shocked, I wrote the provider. Turns out, they decided unilaterally that they would not send out any notifications to clients because clients would get too many e-mails. This is a quote from their explanation: [...] We have decided not to spam its clients with OS and security updates and only install them whenever asked by the client I was shocked! They had never mentioned that they would drop this service, and in fact the claims about updating their clients through e-mail was still on their website, after they apparently stopped doing this years ago! Upon finding this out, I requested they refund all that we have paid as a premium over the other package, and make it available as future credit with their own company. I thought this was a very reasonable request. However, they said they would only go back one year and provide credit for this one year. Mails went back and forth, but they were not willing to give credit for the whole period, which I felt I was entitled to. So ultimately I left the hosting company, and filed a complaint with the BBB a while ago. Now, I am not the kind of person who runs to a lawyer for any minor thing, but in this case I am really considering taking action. I have been paying for years for a service I did not receive (the premium package had a few other pluses, but we took it primarily for these two points, and I can prove that we did not use the other benefits). For our small company the hosting costs were a very large part of our budget, and I feel it is very unfair how this large provider just does not care about not fulfilling its obligations. So my question is: what action should I take? Is a lawyer the only next step, or are there other suggestions? And am I right here to claim this money, or are they right that there is some sort of statue of limitations on such claims? Any feedback is appreciated.

    Read the article

  • OpenVPN Error : TLS Error: local/remote TLS keys are out of sync: [AF_INET]

    - by Lucidity
    Fist off thanks for reading this, I appreciate any and all suggestions. I am having some serious problems reconnecting to my OpenVPN client using Riseup.net's VPN. I have spent a few days banging my head against the wall in attempts to set this up on my iOS devices....but that is a whole other issue. I was however able to set it up on my Mac OS X specifically on my Windows Vista 32 bit BootCamp VM with relatively little trouble. To originally connect I only had to modify the recommended Config file very slightly (Config file included at the end of this post): - I had to enter the code directly into my config file - And change "dev tap" to "dev tun" So I was connected. (Note - I did test to ensure the VPN was actually working after I originally connected, it was. Also verified the .pem file (inserted as the coding in my config file) for authenticity). I left the VPN running. My computer went to sleep. Today I went to use the internet expecting (possibly incorrectly - I am now unsure if I was wrong to leave it running) to still be connected to the VPN. However I saw immediately I was not. I went to reconnect. And was (am) unable to. My logs after attempting to connect (and getting a connection failed dialog box) show everything working as it should (as far as I can tell) until the end where I get the following lines: Mon Sep 23 21:07:49 2013 us=276809 Initialization Sequence Completed Mon Sep 23 21:07:49 2013 us=276809 MANAGEMENT: >STATE:1379995669,CONNECTED,SUCCESS, OMITTED Mon Sep 23 21:22:50 2013 us=390350 Authenticate/Decrypt packet error: packet HMAC authentication failed Mon Sep 23 21:23:39 2013 us=862180 TLS Error: local/remote TLS keys are out of sync: [AF_INET] VPN IP OMITTED [2] Mon Sep 23 21:23:57 2013 us=395183 Authenticate/Decrypt packet error: packet HMAC authentication failed Mon Sep 23 22:07:41 2013 us=296898 TLS: soft reset sec=0 bytes=513834601/0 pkts=708032/0 Mon Sep 23 22:07:41 2013 us=671299 VERIFY OK: depth=1, C=US, O=Riseup Networks, L=Seattle, ST=WA, CN=Riseup Networks, [email protected] Mon Sep 23 22:07:41 2013 us=671299 VERIFY OK: depth=0, C=US, O=Riseup Networks, L=Seattle, ST=WA, CN=vpn.riseup.net Mon Sep 23 22:07:46 2013 us=772508 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Mon Sep 23 22:07:46 2013 us=772508 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Mon Sep 23 22:07:46 2013 us=772508 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Mon Sep 23 22:07:46 2013 us=772508 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Mon Sep 23 22:07:46 2013 us=772508 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA So I have searched for a solution online and I have included what I have attempted below, however I fear (know) I am not knowledgeable enough in this area to fix this myself. I apologize in advance for my ignorance. I do tech support for a living, but not this kind of tech support unfortunately. Other notes and troubleshooting done - - Windows Firewall is disabled completely, as well as other Anti-virus programs - Tor is disabled completely - No Proxies running - Time is correct in all locations - Router Firmware is up to date - Able to connect to the internet and as far as I can tell all necessary ports are open. - No settings have been altered since I was able to connect successfully. - Ethernet as well as wifi connections attempted, resulted in same error. 
Also tried adding the following lines to my config file (without success or change in error): persist-key persist-tun proto tcp (after reading that this error generally occurs on UDP connections, and is extremely rare on TCP) resolv-retry infinite (thinking the connection may have timed out since the issues occurred after leaving VPN connected during about 10 hrs of computer in sleep mode) All attempts resulted in exact same error code included at the top of this post. The original suggestions I found online stated - (regarding the TLS Error) - This error should resolve itself within 60 seconds, or if not quit wait 120 seconds and try again. (Which isnt the case here...) (regarding the Out of Sync" error) - If you continue to get "out of sync" errors and the link does not come up, then it means that something is probably wrong with your config file. You must use either ping and ping-restart on both sides of the connection, or keepalive on the server side of a client/server connection, in order to gracefully recover from "local/remote TLS keys are out of sync" errors. I wouldn't be surprised if my config file is lacking, or not correct. However I can confirm I followed the instructions to a tee. And was able to connect originally (and have not modified my settings or config file since I was able to connect to when the error began occurring). I have a very simple config file: client dev tun tun-mtu 1500 remote vpn.riseup.net auth-user-pass ca RiseupCA.pem redirect-gateway verb 4 <ca> -----BEGIN CERTIFICATE----- [OMITTED] -----END CERTIFICATE----- </ca> I would really appreciate any help or suggestions. I am at a total loss here, I know I'm asking a lot here. Though I am a new user on this site I help others on many forums including Microsoft's support community and especially Apple's support communities, so I will definitely pass on anything I learn here to help others. Thanks so so so much in advance for reading this.

    Read the article

  • "power limit notification" clobbering on 12G Dell servers with RHEL6

    - by Andrew B
    Server: Poweredge r620 OS: RHEL 6.4 Kernel: 2.6.32-358.18.1.el6.x86_64 I'm experiencing application alarms in my production environment. Critical CPU hungry processes are being starved of resources and causing a processing backlog. The problem is happening on all the 12th Generation Dell servers (r620s) in a recently deployed cluster. As near as I can tell, instances of this happening are matching up to peak CPU utilization, accompanied by massive amounts of "power limit notification" spam in dmesg. An excerpt of one of these events: Nov 7 10:15:15 someserver [.crit] CPU12: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU6: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU14: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Package power limit notification (total events = 11) Nov 7 10:15:15 someserver [.crit] CPU6: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU14: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU20: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Package power limit notification (total events = 12) Nov 7 10:15:15 someserver [.crit] CPU10: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU20: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU10: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU15: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU3: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU1: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU5: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU15: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU3: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU1: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU5: Package power limit 
notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU19: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Package power limit notification (total events = 377) Nov 7 10:15:15 someserver [.crit] CPU9: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU21: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU23: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU11: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU19: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU9: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU21: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU23: Package power limit notification (total events = 374) A little Google Fu reveals that this is typically associated with the CPU running hot, or voltage regulation kicking in. I don't think that's what is happening though. Temperature sensors for all servers in the cluster are running fine, Power Cap Policy is disabled in the iDRAC, and my System Profile is set to "Performance" on all of these servers: # omreport chassis biossetup | grep -A10 'System Profile' System Profile Settings ------------------------------------------ System Profile : Performance CPU Power Management : Maximum Performance Memory Frequency : Maximum Performance Turbo Boost : Enabled C1E : Disabled C States : Disabled Monitor/Mwait : Enabled Memory Patrol Scrub : Standard Memory Refresh Rate : 1x Memory Operating Voltage : Auto Collaborative CPU Performance Control : Disabled A Dell mailing list post describes the symptoms almost perfectly. Dell suggested that the author try using the Performance profile, but that didn't help. He ended up applying some settings in Dell's guide for configuring a server for low latency environments and one of those settings (or a combination thereof) seems to have fixed the problem. Kernel.org bug #36182 notes that power-limit interrupt debugging was enabled by default, which is causing performance degradation in scenarios where CPU voltage regulation is kicking in. A RHN KB article (RHN login required) mentions a problem impacting PE r620 and r720 servers not running the Performance profile, and recommends an update to a kernel released two weeks ago. ...Except we are running the Performance profile... Everything I can find online is running me in circles here. What's the heck is going on?

    Read the article

  • How to set up spf records to send mail from google hosted apps to gmail addresses

    - by Chris Adams
    Hi there, I'm trying to work out why email I send from one domain I own is rejected by another that I own, and while I think it may be related to how I've setup spf records, I'm not sure what steps I need to take to fix it. Here's the error message I receive: Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550-Verification failed for <[email protected]> 550-No Such User Here 550 Sender verify failed (state 14). Here's the response from [email protected] Delivered-To: [email protected] Received: by 10.86.92.9 with SMTP id p9cs85371fgb; Wed, 2 Sep 2009 22:33:32 -0700 (PDT) Received: by 10.90.205.4 with SMTP id c4mr2406190agg.29.1251956007562; Wed, 02 Sep 2009 22:33:27 -0700 (PDT) Return-Path: <[email protected]> Received: from verifier.port25.com (207-36-201-235.ptr.primarydns.com [207.36.201.235]) by mx.google.com with ESMTP id 26si831174aga.24.2009.09.02.22.33.25; Wed, 02 Sep 2009 22:33:26 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 207.36.201.235 as permitted sender) client-ip=207.36.201.235; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 207.36.201.235 as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; s=auth; d=port25.com; h=Date:From:To:Subject:Message-Id:In-Reply-To; [email protected]; bh=GRMrcnoucTl4upzqJYTG5sOZMLU=; b=uk6TjADEyZVRkceQGjH94ZzfVeRTsiZPzbXuhlqDt1m+kh1zmdUEoiTOzd89ryCHMbVcnG1JajBj 5vOMKYtA3g== DomainKey-Signature: a=rsa-sha1; c=nofws; q=dns; s=auth; d=port25.com; b=NqKCPK00Xt49lbeO009xy4ZRgMGpghvcgfhjNy7+qI89XKTzi6IUW0hYqCQyHkd2p5a1Zjez2ZMC l0u9CpZD3Q==; Received: from verifier.port25.com (127.0.0.1) by verifier.port25.com (PowerMTA(TM) v3.6a1) id hjt9pq0hse8u for <[email protected]>; Thu, 3 Sep 2009 01:26:52 -0400 (envelope-from <[email protected]>) Date: Thu, 3 Sep 2009 01:26:52 -0400 From: [email protected] To: [email protected] Subject: Authentication Report Message-Id: <[email protected]> Precedence: junk (auto_reply) In-Reply-To: <[email protected]> This message is an automatic response from Port25's authentication verifier service at verifier.port25.com. The service allows email senders to perform a simple check of various sender authentication mechanisms. It is provided free of charge, in the hope that it is useful to the email community. While it is not officially supported, we welcome any feedback you may have at <[email protected]>. Thank you for using the verifier, The Port25 Solutions, Inc. team ========================================================== Summary of Results ========================================================== SPF check: pass DomainKeys check: neutral DKIM check: neutral Sender-ID check: pass SpamAssassin check: ham ========================================================== Details: ========================================================== HELO hostname: fg-out-1718.google.com Source IP: 72.14.220.158 mail-from: [email protected] ---------------------------------------------------------- SPF check details: ---------------------------------------------------------- Result: pass ID(s) verified: [email protected] DNS record(s): stemcel.co.uk. 14400 IN TXT "v=spf1 include:aspmx.googlemail.com ~all" aspmx.googlemail.com. 
7200 IN TXT "v=spf1 redirect=_spf.google.com" _spf.google.com. 300 IN TXT "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ?all" ---------------------------------------------------------- DomainKeys check details: ---------------------------------------------------------- Result: neutral (message not signed) ID(s) verified: [email protected] DNS record(s): ---------------------------------------------------------- DKIM check details: ---------------------------------------------------------- Result: neutral (message not signed) ID(s) verified: NOTE: DKIM checking has been performed based on the latest DKIM specs (RFC 4871 or draft-ietf-dkim-base-10) and verification may fail for older versions. If you are using Port25's PowerMTA, you need to use version 3.2r11 or later to get a compatible version of DKIM. ---------------------------------------------------------- Sender-ID check details: ---------------------------------------------------------- Result: pass ID(s) verified: [email protected] DNS record(s): stemcel.co.uk. 14400 IN TXT "v=spf1 include:aspmx.googlemail.com ~all" aspmx.googlemail.com. 7200 IN TXT "v=spf1 redirect=_spf.google.com" _spf.google.com. 300 IN TXT "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ?all" ---------------------------------------------------------- SpamAssassin check details: ---------------------------------------------------------- SpamAssassin v3.2.5 (2008-06-10) Result: ham (-2.6 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SPF_PASS SPF: sender matches SPF record -2.6 BAYES_00 BODY: Bayesian spam probability is 0 to 1% [score: 0.0000] 0.0 HTML_MESSAGE BODY: HTML included in message I've registered the spf records for my domain, as advised here Both domains pass validate according to Kitterman's spf record testing tools, so I'm somewhat confused about this. I also have the catchall address set up on the stemcel.co.uk domain here, but I don't have one setup for chrisadams.me.uk. Instead, we have the following forwarders setup [email protected] to [email protected] [email protected] to [email protected] [email protected] to [email protected] [email protected] to [email protected] Any ideas how to get this working? I'm not sure what I should be looking for here.

    Read the article

  • Delay of mail delivery - Hosted exchange provider

    - by alex
    Hi, I recently signed up to a new hosted email provider. When I send mail (from OWA, OR Outlook) there is a delay of up to 3 minutes from when i send the message, to when it's received (in my gmail account for example) I've listed the headers below. Is there anything I can advise my new email host to do? My previous email host delivers within 5 seconds!! New email provider: Delivered-To: ****.*****@******.co.uk.test-google-a.com Received: by 10.223.120.148 with SMTP id d20cs333125far; Mon, 30 Nov 2009 08:49:43 -0800 (PST) Received: by 10.213.106.202 with SMTP id y10mr4864870ebo.35.1259599782838; Mon, 30 Nov 2009 08:49:42 -0800 (PST) Return-Path: Received: from relay005.apm-internet.net (relay005.apm-internet.net [85.119.248.8]) by mx.google.com with SMTP id 26si13016480ewy.43.2009.11.30.08.49.42; Mon, 30 Nov 2009 08:49:42 -0800 (PST) Received-SPF: neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*******.com) client-ip=85.119.248.8; Authentication-Results: mx.google.com; spf=neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*******.com) smtp.mail=****@*******.com Received: (qmail 63915 invoked from network); 30 Nov 2009 16:49:41 -0000 Received: from unknown (HELO mx-out-manc2.simplymailsolutions.com) (88.151.129.22) by relay005.apm-internet.net with SMTP; 30 Nov 2009 16:49:42 -0000 X-APM-IP: 88.151.129.22 X-APM-Score: 4 Received-SPF: none (relay005.apm-internet.net: domain at alexjamesbrown.com does not designate permitted sender hosts) Received: from [10.1.20.1] (helo=win-s-manc1.shared.ifeltd.com) by mx-out-manc2.simplymailsolutions.com with esmtp (Exim 4.63) (envelope-from ) id 1NF9QZ-0005By-Hw for ****.*****@******.co.uk; Mon, 30 Nov 2009 16:48:46 +0000 Received: from sha-exch8.shared.ifeltd.com ([10.1.20.8]) by win-s-manc1.shared.ifeltd.com with Microsoft SMTPSVC(6.0.3790.3959); Mon, 30 Nov 2009 16:48:34 +0000 Received: from sha-exch9.shared.ifeltd.com ([10.1.20.9]) by sha-exch8.shared.ifeltd.com with Microsoft SMTPSVC(6.0.3790.3959); Mon, 30 Nov 2009 16:48:34 +0000 Received: from SHA-EXCH13.shared.ifeltd.com (10.1.20.13) by sha-exch9.shared.ifeltd.com (10.1.20.9) with Microsoft SMTP Server (TLS) id 8.1.393.1; Mon, 30 Nov 2009 16:48:25 +0000 Received: from SHA-EXCH12.shared.ifeltd.com ([fe80::ecba:36d0:eec5:c928]) by SHA-EXCH13.shared.ifeltd.com ([fe80::212b:916c:70c7:a4e5%11]) with mapi; Mon, 30 Nov 2009 16:48:05 +0000 From: Alex Brown To: "****.*****@*****.co.uk" Date: Mon, 30 Nov 2009 16:48:04 +0000 Subject: testing Thread-Topic: testing Thread-Index: AQHKcdzZg4oiDsOYIEio/7k6bCk8BQ== Message-ID: Accept-Language: en-US, en-GB Content-Language: en-GB X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US, en-GB Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-OriginalArrivalTime: 30 Nov 2009 16:48:34.0235 (UTC) FILETIME=[F48178B0:01CA71DC] Here are the headers using my previous exchange host: Delivered-To: ****.*****@******.co.uk.test-google-a.com Received: by 10.223.120.148 with SMTP id d20cs333076far; Mon, 30 Nov 2009 08:48:35 -0800 (PST) Received: by 10.213.2.70 with SMTP id 6mr4797985ebi.25.1259599715739; Mon, 30 Nov 2009 08:48:35 -0800 (PST) Return-Path: Received: from relay005.apm-internet.net (relay005.apm-internet.net [85.119.248.8]) by mx.google.com with SMTP id 26si13030993ewy.23.2009.11.30.08.48.35; Mon, 30 Nov 2009 08:48:35 -0800 (PST) Received-SPF: neutral (google.com: 85.119.248.8 is neither 
permitted nor denied by best guess record for domain of ****@*********.com) client-ip=85.119.248.8; Authentication-Results: mx.google.com; spf=neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*********.com) smtp.mail=****@*********.com Received: (qmail 60920 invoked from network); 30 Nov 2009 16:48:34 -0000 Received: from unknown (HELO MTAb.MsExchange2007.com) (89.31.236.50) by relay005.apm-internet.net with SMTP; 30 Nov 2009 16:48:35 -0000 X-APM-IP: 89.31.236.50 X-APM-Score: 1 Received-SPF: none (relay005.apm-internet.net: domain at alexjamesbrown.com does not designate permitted sender hosts) Received: from EXHUB02.SL.local (no.ptr.hostlogic.biz [89.31.236.28]) by MTAb.MsExchange2007.com (Spam Firewall) with ESMTP id B677A34FE0F for ; Mon, 30 Nov 2009 16:48:33 +0000 (GMT) Received: from EXHUB02.SL.local (no.ptr.hostlogic.biz [89.31.236.28]) by MTAb.MsExchange2007.com with ESMTP id 8X5B8V4tExVzoNyU for ; Mon, 30 Nov 2009 16:48:34 +0000 (GMT) Received: from EXCCR03STORE.SL.local ([10.0.0.2]) by EXHUB02.SL.local ([192.168.92.64]) with mapi; Mon, 30 Nov 2009 16:48:31 +0000 From: Alex James Brown To: "****.*****@******.co.uk" Date: Mon, 30 Nov 2009 16:48:30 +0000 Subject: testing from o Thread-Topic: testing from o Thread-Index: AQHKcdzyY1iBFWiol0ykG6xPQUZiTg== Message-ID: Accept-Language: en-US, en-GB Content-Language: en-GB X-MS-Has-Attach: X-MS-TNEF-Correlator: acceptlanguage: en-US, en-GB Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0
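    Since every hop stamps its own Received header, you can usually pin down where those minutes go by diffing the timestamps hop by hop. This is only a rough sketch: it assumes you have saved the raw message (e.g. via Gmail's "Show original") to a local file, whose name here is made up.

    # Walk the Received headers of a saved raw message and print the delay
    # introduced at each hop, oldest first.
    import email
    from email.utils import parsedate_tz, mktime_tz

    with open('slow-message.eml') as f:          # hypothetical filename
        msg = email.message_from_file(f)

    hops = []
    for hdr in reversed(msg.get_all('Received', [])):    # oldest hop first
        # The date is conventionally the part after the last semicolon.
        stamp = parsedate_tz(hdr.rsplit(';', 1)[-1].strip())
        if stamp:
            hops.append((mktime_tz(stamp), ' '.join(hdr.split()[:4])))

    for (prev_time, _), (cur_time, receiver) in zip(hops, hops[1:]):
        print('%5d seconds before: %s' % (cur_time - prev_time, receiver))

    In the first set of headers above, for instance, most of the wait appears to sit between the provider's outbound server (mx-out-manc2.simplymailsolutions.com) and relay005.apm-internet.net, which would be the concrete thing to show the new host.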

    Read the article

  • Email sent from server with rDNS & SPF being blocked by Hotmail

    - by Canadaka
    I have been unable to send email to users on Hotmail or other Microsoft email servers for some time. It's been a major headache trying to find out why and how to fix the issue. The emails being blocked are sent from my domain canadaka.net. I use Google Apps to host my regular email service for my @canadaka.net addresses. I can send email from my desktop or Gmail to a Hotmail address without any problem, but any email sent from my server on behalf of canadaka.net is blocked, not even arriving in the junk folder.

    The IP that the emails are sent from is the same IP my site is hosted on: 66.199.162.177. This IP has been mine since August 2010; I had a different IP for the previous 3-4 years. This IP is not on any credible spam lists (http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177). The one list my IP is on, spamcannibal.org, seems to be out of my control; it says "no reverse DNS, MX host should have rDNS - RFC1912 2.1", but since I use Google for my email hosting, I don't have control over setting up rDNS for all the MX records. I do have reverse DNS set up for my IP, though; it resolves to "mail.canadaka.net". I have signed up for SNDS and was approved; my IP says "All of the specified IPs have normal status." Sender Score: 100 (https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14). My McAfee threat level seems fine.

    I have a TXT SPF record set up. I am currently using xname.org as my DNS, and they don't have a field for SPF, but their FAQ says to add the SPF info as a TXT entry: v=spf1 a include:_spf.google.com ~all. Some "SPF checking" tools I've used detect that my domain has a valid SPF record, but others don't, like Microsoft's SPF wizard; I think this is because it looks specifically for an SPF record type rather than the TXT record ("No SPF Record Found. A and MX Records Available"). From my home I can run "nslookup -type=TXT canadaka.net" and it returns:

    Server: google-public-dns-a.google.com
    Address: 8.8.8.8
    Non-authoritative answer:
    canadaka.net text = "v=spf1 a include:_spf.google.com ~all"

    One strange thing I found is that I'm unable to ping hotmail.com or msn.com or do a "telnet mail.hotmail.com 25". I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and did an ipconfig /flushdns, but that had no effect. I am, however, able to connect with telnet to mx1.hotmail.com.

    This is what the email headers look like when I send to a Google email server and the email arrives with no trouble. You can see that SPF is passing.
    Delivered-To: [email protected]
    Received: by 10.146.168.12 with SMTP id q12cs91243yae; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
    Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
    Return-Path:
    Received: from canadaka.net ([66.199.162.177]) by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45; Sun, 27 Feb 2011 18:01:48 -0800 (PST)
    Received-SPF: pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177;
    Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) [email protected]
    Message-Id: <[email protected]
    Received: from coruscant ([127.0.0.1]:12907) by canadaka.net with [XMail 1.27 ESMTP Server] id for from ; Sun, 27 Feb 2011 18:01:29 -0800
    Date: Sun, 27 Feb 2011 18:01:29 -0800
    Subject: Test
    To: [email protected]
    From: XXXX
    Reply-To: [email protected]
    X-Mailer: PHP/5.2.13

    I can send to Gmail and other email services fine. I don't know what I'm doing wrong!

    UPDATE 1: I have been removed from Hotmail's IP block and am now able to send emails to Hotmail, but they are all going directly to the junk folder.

    UPDATE 2: I used telnet to send a test message to port25.com, and it seems my SPF is not being detected:

    Result: neutral (SPF-Result: None)
    canadaka.net. SPF (no records)
    canadaka.net. TXT (no records)

    I do have a TXT record; it's been there for years, though I did change it a week ago. Other sites that allow you to check your SPF detect it, but some others, like Microsoft's wizard, don't. This is what my SPF record in my xname.org DNS file looks like:

    canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all"

    I did have a nameserver as my 4th option that doesn't have the TXT records, since it doesn't support them. So I removed it from the list and instead added wtfdns.com as my 4th and 5th nameservers, which does support TXT.
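    Since both the spamcannibal.org listing and Hotmail's filtering care about reverse DNS, one quick sanity check is to confirm forward-confirmed reverse DNS (FCrDNS) for the sending IP, i.e. that the PTR name resolves back to the same address. A minimal sketch, using the IP quoted above:

    # Check forward-confirmed reverse DNS for the sending IP.
    import socket

    ip = '66.199.162.177'
    try:
        ptr = socket.gethostbyaddr(ip)[0]              # reverse (PTR) lookup
        forward_ips = socket.gethostbyname_ex(ptr)[2]  # resolve the PTR name forward again
        print('PTR for %s is %s, which resolves to %s' % (ip, ptr, forward_ips))
        print('FCrDNS OK' if ip in forward_ips else 'PTR name does not resolve back to the IP')
    except socket.herror as e:
        print('No reverse DNS for %s: %s' % (ip, e))

    Note that this only checks the web server's own reverse DNS; it says nothing about the Google Apps MX hosts, which, as noted above, you cannot control anyway.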

    Read the article

  • Java 7 update 6 installation fails on Windows 7 when Chrome is default browser

    - by ali1234
    I am configuring a brand new Lenovo U410 system with Windows 7 Home Premium for a user. I received the system direct from the shop. As part of the configuration I installed Java using the online installer, and this worked correctly. Later, due to a mistake I made, I needed to restore the system to factory default. The factory restore FORMATS C:\ and puts back (supposedly) the exact factory configuration. However, after doing this, I was no longer able to install Java successfully using the same method I used before.

    Now, whenever I attempt to use the online Java installer, the following happens. First of all, a window always appears: "Welcome to Java", "Downloading Java Installer...". After a short time this window disappears and then one of three things happens:

    1. The very first time I do this after a factory reset, I get a Windows error report, which contains this information:

    Application Name: JavaSetup7u5.exe
    Application Version: 7.0.50.6
    Application Timestamp: 4feacd84
    Fault Module Name: JavaIC.dll
    Fault Module Version: 9.9.9.9
    Fault Module Timestamp: 4f2343d6
    Exception Offset: 000052cb
    Exception Code: c0000417
    Exception Data: 00000000
    OS Version: 6.1.7600.2.0.0.768.3
    Locale ID: 1033
    Additional Information 1: 773c
    Additional Information 2: 773cd78cf06816f8246f359fa270f3bb
    Additional Information 3: f51a
    Additional Information 4: f51aaea7d22f36fa9e3a626b5a5cd1c3

    2. Subsequent runs produce this error message: "Error: Java(TM) installer - Downloaded file C:\Users\\AppData\Local\Temp\fx-runtime.exe is corrupt."

    3. Or nothing happens at all.

    I believe the "corrupt file" error is a red herring. Running the installer again produces a different error because the files were downloaded and the installer crashed before it could clean up. This isn't the actual problem: when it happens the installer deletes the downloaded files, and when you run it for the third time it downloads everything again and crashes in JavaIC.dll. I suspect the downloader is appending to the existing files or something, causing the corruption.

    I have tried all of the above as Administrator and as a normal user. I have tried resetting the system to factory defaults several times. I have tried downloading with Chrome and Internet Explorer 9. I have tried uninstalling all anti-virus software and disabling the Windows firewall entirely. The only thing which makes a difference is running the installer in Windows XP compatibility mode, which allows the installation to complete. I know I can work around this error by using the offline installer, so please don't post that as an answer; I am looking for an explanation of the root cause. Additionally, if I use the offline installer, the updater does not work, and it also does not work if I install in XP mode. The updater fails because it works by simply downloading the newest online setup and running it. Also remember that the installers are digitally signed. The signatures verify correctly, so there is no way in hell that this is caused by corrupted downloads.

    Some theories I have:

    - The Java setup files on java.com actually changed in between the first successful install and my later attempts. Seems unlikely, as none of the version numbers have changed. However, I have seen a couple of reports of this error which showed up in the past 24 hours, so this looks like the most likely explanation right now: http://www.oracle.com/us/corporate/press/1735645 - Oracle released 7 update 6 two days ago. Careful inspection of the installers reveals that they are in fact attempting to download .6, not .5 as the download page claims. (Not actually correct: only the update tool tries to install 7u6; the online installer still tries 7u5. However, 7u6 being released two days ago is too much of a coincidence to ignore.) Update: the 7u6 online installer is available from the Oracle Technology Network, and it crashes in exactly the same way.

    - The factory reset software uses GMT-8 and I am on GMT-1. As a result, after a factory reset, any software which cares to check would think that the system was restored 7 hours in the future, due to Windows' awful policy of storing local time in the system clock. This could be confusing a certificate check or similar. Update: I discovered that this does cause Windows Update to fail, but the workaround, setting the clock back before starting the factory reset, does not enable Java to install correctly.

    - The factory reset image isn't really the same as what is installed in the main partition when you buy the system. Naughty Lenovo.

    - The installer appears to crash while installing or displaying something to do with the Ask.com toolbar. That seems to be what JavaIC.dll does.

    - Microsoft's Patch Tuesday was the 14th, and some update in that could be causing this. However, I'm factory resetting the machine every time, so unless the patches get slipstreamed into the recovery image, or there is some mechanism by which they get silently installed even if updates are disabled, I don't see how this can be the cause.

    Major breakthrough: the default browser on Lenovo systems is Google Chrome. I noticed that the JavaIC.dll "sponsor check" actually checks your default browser in order to decide which sponsor ad to display. Normally that would get you the Ask toolbar on IE9, but that toolbar doesn't work on Chrome, so the installer tries to display a different ad, and the different ad is what causes the crash. Changing the default browser to IE9 allows the installer to run correctly. So this looks like a genuine bug in the sponsor ad code in the installer, caused by the combination of Google Chrome being the default browser and not being in the US. (The installer also checks your location using an IP geolocation service and displays different ads based on that.)
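    For anyone who wants to confirm what the installer's sponsor check will see before running it, the per-user default http handler on Windows 7 is recorded in the registry. This is a minimal sketch under the assumption that the standard UserChoice key is present and that Chrome registers the "ChromeHTML" ProgId, which is the usual case on Windows 7:

    # Print the ProgId of the default http handler for the current user.
    try:
        import winreg                    # Python 3
    except ImportError:
        import _winreg as winreg         # Python 2

    key_path = r'Software\Microsoft\Windows\Shell\Associations\UrlAssociations\http\UserChoice'
    # OpenKey raises WindowsError if no per-user choice has been recorded yet.
    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path)
    prog_id, _ = winreg.QueryValueEx(key, 'ProgId')
    winreg.CloseKey(key)

    print('Default http handler: %s' % prog_id)
    if prog_id.startswith('ChromeHTML'):
        print('Chrome is the default browser on this system.')

    Switching the default to IE9 before running the online installer (and back afterwards) matches the workaround described above.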

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89  | Next Page >