Search Results

Search found 12584 results on 504 pages for 'sharepoint2010 tools'.


  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

      Morning Coffee

      When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs that backed up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask? Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. So to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often.

      Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar third-party products, that would track all these things for you. But at that time, we had no recourse but to write our own PowerShell scripts to do it.

      Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from disaster, and only properly safeguarded backups can offer you peace of mind here.

      "But, we have a cluster... we don't need backups"

      Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

      Backup, fine. How often do I take a backup?

      The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO.
      Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number, and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you; the objective is that you feel better about yourself and the steps you need to follow when an emergency strikes.

      A backup is nothing more than an untested restore

      Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of them? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 or above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

      Back to the "How Often" question for a second. If you have the disk, the network, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

      Where to back up to? Network share? Locally? SAN volume?

      This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a scheme together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
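      To make the "interrogate each server" idea from the Morning Coffee section concrete, here is a minimal sketch of such a check run against msdb. The 24-hour and 15-minute thresholds are placeholders, not recommendations; derive them from your own RPO:

          -- Flag databases with no recent full backup, and FULL-recovery databases
          -- with no recent log backup. msdb.dbo.backupset holds one row per
          -- completed backup (type: D = full, I = differential, L = log).
          SELECT d.name,
                 d.recovery_model_desc,
                 MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full,
                 MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log
          FROM sys.databases AS d
          LEFT JOIN msdb.dbo.backupset AS b
                 ON b.database_name = d.name
          WHERE d.name <> 'tempdb'
          GROUP BY d.name, d.recovery_model_desc
          HAVING ISNULL(MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END), '19000101') < DATEADD(HOUR, -24, GETDATE())
              OR (d.recovery_model_desc = 'FULL'
                  AND ISNULL(MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END), '19000101') < DATEADD(MINUTE, -15, GETDATE()));

      Run from a central server against every instance (the PowerShell wrapper around it is left out here), a check like this catches both jobs that failed and jobs that never ran at all.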

    Read the article

  • Execution plan warnings–The final chapter

    - by Dave Ballantyne
      In my previous posts (here and here), I showed examples of some of the execution plan warnings that have been added to SQL Server 2012. There is one other warning that is of interest to me: "Unmatched Indexes".

      Firstly, how do I know this is the final one? The plan is an XML document, right? So that means it can have an accompanying XSD. As an XSD is a schema definition, we can poke around inside it to find interesting things that *could* be in the final XML file. The showplan schema is stored in the folder Microsoft SQL Server\110\Tools\Binn\schemas\sqlserver\2004\07\showplan, and by comparing schemas over releases you can get a really good idea of any new functionality that has been added. Here is the section of the SQL Server 2012 showplan schema that has been interesting me so far:

          <xsd:complexType name="AffectingConvertWarningType">
            <xsd:annotation>
              <xsd:documentation>Warning information for plan-affecting type conversion</xsd:documentation>
            </xsd:annotation>
            <xsd:sequence>
              <!-- Additional information may go here when available -->
            </xsd:sequence>
            <xsd:attribute name="ConvertIssue" use="required">
              <xsd:simpleType>
                <xsd:restriction base="xsd:string">
                  <xsd:enumeration value="Cardinality Estimate" />
                  <xsd:enumeration value="Seek Plan" />
                  <!-- to be extended here -->
                </xsd:restriction>
              </xsd:simpleType>
            </xsd:attribute>
            <xsd:attribute name="Expression" type="xsd:string" use="required" />
          </xsd:complexType>

          <xsd:complexType name="WarningsType">
            <xsd:annotation>
              <xsd:documentation>List of all possible iterator or query specific warnings (e.g. hash spilling, no join predicate)</xsd:documentation>
            </xsd:annotation>
            <xsd:choice minOccurs="1" maxOccurs="unbounded">
              <xsd:element name="ColumnsWithNoStatistics" type="shp:ColumnReferenceListType" minOccurs="0" maxOccurs="1" />
              <xsd:element name="SpillToTempDb" type="shp:SpillToTempDbType" minOccurs="0" maxOccurs="unbounded" />
              <xsd:element name="Wait" type="shp:WaitWarningType" minOccurs="0" maxOccurs="unbounded" />
              <xsd:element name="PlanAffectingConvert" type="shp:AffectingConvertWarningType" minOccurs="0" maxOccurs="unbounded" />
            </xsd:choice>
            <xsd:attribute name="NoJoinPredicate" type="xsd:boolean" use="optional" />
            <xsd:attribute name="SpatialGuess" type="xsd:boolean" use="optional" />
            <xsd:attribute name="UnmatchedIndexes" type="xsd:boolean" use="optional" />
            <xsd:attribute name="FullUpdateForOnlineIndexBuild" type="xsd:boolean" use="optional" />
          </xsd:complexType>

      I especially like the "to be extended here" comment; high hopes that we will see more of these in the future.

      So "Unmatched Indexes" was a warning that I couldn't get, and many thanks must go to Fabiano Amorim (b|t) for showing me the way.

      Filtered indexes were introduced in SQL Server 2008 and are really useful if you need to index only a portion of the data within a table. However, if your SQL code uses a variable as a predicate on the filtered column, then the filtered index cannot be used, even if the variable's value matches the filter condition, because, naturally, the value in the variable may (and probably will) change, and the query would then need to read data outside the index. As an aside, you could use OPTION (RECOMPILE) here, in which case the optimizer will build a plan specific to the current variable values and use the filtered index, but that can bring about other problems.
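      Incidentally, because the warning surfaces in the plan XML itself, you can hunt for plans that already carry it across the plan cache. A minimal sketch, assuming SQL Server 2012 (where the UnmatchedIndexes attribute exists) and noting that XQuery over the whole plan cache can be expensive on a busy server:

          -- Find cached plans whose showplan XML carries the UnmatchedIndexes warning.
          WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
          SELECT TOP (20) st.text, qp.query_plan
          FROM sys.dm_exec_cached_plans AS cp
          CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
          CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
          WHERE qp.query_plan.exist('//Warnings[@UnmatchedIndexes="true" or @UnmatchedIndexes="1"]') = 1;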
      To demonstrate this warning, we need to generate some test data:

          DROP TABLE #TestTab1
          GO
          CREATE TABLE #TestTab1 (Col1 Int not null, Col2 Char(7500) not null, Quantity Int not null)
          GO
          INSERT INTO #TestTab1 VALUES (1,1,1),(1,2,5),(1,2,10),(1,3,20),
                                       (2,1,101),(2,2,105),(2,2,110),(2,3,120)
          GO

      and then add a filtered index:

          CREATE INDEX ixFilter ON #TestTab1 (Col1)
          WHERE Quantity = 122

      Now if we execute

          SELECT COUNT(*) FROM #TestTab1 WHERE Quantity = 122

      we will see the filtered index being scanned. But if we parameterize the query:

          DECLARE @i INT = 122
          SELECT COUNT(*) FROM #TestTab1 WHERE Quantity = @i

      the plan is very different - a table scan - as the value of the variable used in the predicate can change at run time, and we also see the familiar warning triangle. If we now look at the properties pane, we will see two pieces of information, "Warnings" and "UnmatchedIndexes". So, handily, we are being told which filtered index is not being used due to parameterization.
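      To round off the OPTION (RECOMPILE) aside from earlier, here is what that looks like against the same test table - a sketch only, with the usual caveat that you pay a fresh compilation on every execution:

          DECLARE @i INT = 122
          SELECT COUNT(*)
          FROM   #TestTab1
          WHERE  Quantity = @i
          OPTION (RECOMPILE)
          -- With RECOMPILE the optimizer embeds the current value of @i into the
          -- plan, so it can match the filtered index ixFilter again instead of
          -- falling back to a table scan.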

    Read the article

  • I Clobbered a Leopard with a Window Last Night

    - by D'Arcy Lussier
      I've had my 15" MacBook Pro for a little over a year now, and it's hands-down the best laptop I've ever owned... hardware-wise. And I tried, I really really tried, to like OSX. I even bought Parallels so I could run Windows 7 and all my development tools while still trying to live in an OSX world. But in the end, I missed Windows too much.

      There were just too many shortcomings with OSX that kept me from being productive. For one thing, Office for Mac is *not* Office for Windows. The applications are written by different teams, and Excel on the Mac is just different enough to be painful. The VM experience was adequate, but my MBP would heat up like crazy when running it, and the experience of trying to get Windows apps to interact with an OSX file system was awkward. And I found I was in the VM more than I thought I'd be. iMovie is not as easy to use for simple movie editing as Windows Movie Maker. There's no free blog-editing software for OSX that's on par with Windows Live Writer. And really, all I was using OSX for was Twitter (which I can use a Windows client for) and web browsing (also something Windows can provide, obviously).

      So I had to ask myself: why am I forcing myself to use an operating system I don't like, on a laptop that can support Windows 7? And so I paved my MBP and am happily running Windows 7 on it... and it's fantastic! All the good stuff with the hardware is still there with the goodness of Win 7. Happy happy.

      I did run into some snags doing this though, and that's really what this blog post is about - things to be aware of if you want to install Win 7 directly on your MBP metal.

      First, Ensure You Have Your Original Mac Install Disk

      This was a warning my buddy Dylan, who's been running Win 7 on his MBP for a while now, gave me early on. The reason you need that original disk is that the hardware drivers you need are all located there. Apparently you can't easily download them, so make sure you have them ahead of time.

      Second, Forget BootCamp

      The only reason you need BootCamp is if you still want the option to boot into OSX. If you don't, then you don't need BootCamp. In fact, you don't even need BootCamp to install Win 7. What you *will* need, though, is a DVD with Win 7 burnt onto it. Apple doesn't support bootable USB drives. Well, actually they do for MacBook Airs, which don't come with optical drives... but to get it working you'll need to edit a system file of BootCamp so your model of MBP is included in an XML document, and even then you *still* are using BootCamp, meaning you'll be making an OSX partition. So don't worry about BootCamp; just burn a Windows 7 disc, put it into the DVD drive, and restart your MBP.

      Third, Know The Secret Commands

      So after putting in the Windows 7 DVD and restarting your MBP, you'll want to hold down the 'C' key during boot-up. This tells the MBP that it should boot from the DVD drive instead of the hard drive. Interestingly, it appears you don't have to do this if it's the Mac OSX install disc (more on that in a second), but regardless, hold down C and Windows will start the install process.

      Next up is the partition process. You'll notice that there's a partition called EFI (or something like that). This has to do with the drive format that Apple uses and how they partition their system drives. What I did - I blew it away! At first I didn't, but I was told I couldn't install Windows on the remaining space due to the different drive format. Blowing away the EFI partition (and all other partitions) allowed me to continue the Windows install.
      *REMEMBER - No warranty is provided or implied; I'm just telling you what I did and how I got it to work.

      Ok, so now Windows is installed and I'm rebooting. Everything looks good, but I need drivers! So I put in the OSX install DVD and run the BootCamp assistant, which installs all the Windows drivers I need. Fantastic! Oh, I need to restart - no problem. OH NO, PROBLEM! I left the OSX install DVD in the drive, and now the MBP wants to boot from the drive and install OSX! I'm not holding down the C key, what the heck?! Ok, well there must be a way to eject this disk... hmm... no physical button on the side... the eject button doesn't seem to work on the keyboard... no little pin hole to insert something to force the disc out... well what the...?!

      It turns out, if you want to eject a disc at boot-up, you need (and I kid you not) to plug a mouse into the laptop and hold down the right-click button while it's booting. This ejected the disc for me. Seriously.

      Finally, Things You Should Be Aware Of

      Once you have Windows up and running, there are a few things you need to be aware of, mainly new keyboard shortcuts. For instance, on the Mac keyboard there is no Home, End, PageUp or PageDown. There's also no obvious way to do something like select large amounts of text (like you would by holding Shift-Home at the end of a line of text, for instance). So here are some shortcuts you need to know:

      Home – fn + left arrow
      End – fn + right arrow
      Select to the start of the line, as you would with the Home key – Shift + fn + left arrow
      Select to the end of the line, as you would with the End key – Shift + fn + right arrow
      Page Up – fn + up arrow
      Page Down – fn + down arrow

      Also, you'll notice that the awesome Mac trackpad doesn't respond to taps as clicks. No fear, this is just a setting that needs to be altered in the BootCamp control panel (which controls the Mac hardware-specific settings within Windows; you can access it easily from the system tray icon).

      One other thing: battery life seems a bit lower than with OSX, but then again I'm also doing more than Twitter or web browsing on this thing now.

      Conclusion

      My laptop runs awesome now that I have Windows 7 on there. It's obviously up to individual taste, but for me I just didn't see benefits to living in an OSX world when everything I needed lived in Windows. And also, I am finally back on an operating system that doesn't require me to eject a USB drive before physically removing it! It's 2012 folks, how has this not been fixed?!

      D

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
      At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience - and how to publish once-written information in both blogs. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice on this topic.

      My two blogs

      I have two working blogs:

      this one here
      a technology and programming blog for the local market

      My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are major hits of my MVP career and they have very good web statistics too.

      My local blog

      My local blog is about programming, the web and technology. It has a much wider target audience than this blog here. For example, in my local blog I also blog about local events, cool new concept phones, different sites providing interesting services, etc. But local guys can also find postings there about how to solve one programming problem or another, and postings about Microsoft technologies I am playing with. So far my local blog has a lot of readers for such a small country as Estonia. This blog has brought me a lot of cool contacts, and I have had a lot of interesting discussions there about different technical topics.

      Why did I start this blog? Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to cover more than one technical topic to find enough readers. At the same time, you are still interested in your main topics and you want to reach more people who share the same interests as you. Practically, one day you will grow out of the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with? Every second I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be more boring to work on different problems and solutions.

      Defining target audiences

      One thing you should always do when you have more than one blog is define your target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process. You can do it the simple way and just as effectively. Here is how I defined the target audiences for my blogs:

      local blog - the reader of my local blog is an IT professional, software developer, technology innovator or just some guy who is interested in technology;
      this blog - the reader of this blog is an experienced professional software developer who works with Microsoft technologies, or a software developer who is open-minded and open to new technologies and interesting solutions to development problems.

      You can see how the local blog - due to a small market with fewer people - has a wider definition of its audience, while this blog is heavily targeted at Microsoft technologies and especially at software development. On the practical side these decisions were also made well, I think, because it is very hard to build up a popular general IT blog. At the global level it is better to target a specific niche and find readers who are professionals in your favorite topics.
      Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them.

      Publishing content to different blogs

      My local blog and this blog have some overlapping topics, like .NET, databases and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I also publish it on my local blog? Well, it really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog too. But if you have different audiences, then you may need to modify your posting before publishing it. The questions you have to answer are:

      Is the target audience interested in this topic?
      Is the target audience expecting a more specific and deeper handling of this topic, or a more general handling?
      Is the problem you are discussing relevant for the target audience or not?

      You have to answer these questions and then make your decision. If you need to modify your original posting, take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.

      Cross-posting and referencing

      It is tempting to save the time that preparing a blog post takes: once you are done with a posting on one blog, it may seem like a good idea to make a short posting on the other blog and add a reference to the first one, where the topic is discussed at greater length. Well, don't do it - all your readers expect good-quality content from you, and jumping from one blog post to another is disturbing for them.

      Of course, there is the problem of differences between target audiences. You may have a wider target audience, and some people may be interested in a more specific handling of a topic. In this case, feel free to refer to the blog you write in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. For example, Estonian is a complex language, and online translation tools make very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits both blogs well, I will write it on one blog and then answer the previous three questions before posting the same thing on the other.

      Conclusion

      Growing out of your local market is not anything mysterious if you are living in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high-quality content. It is something you will learn over time, and you will learn something new every day when you are posting to your global blog. You may ask: if a global blog is a much more complex thing to do, is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time, you will get over all the obstacles pretty soon. Just don't forget one thing - content is king, and your readers expect high quality from you.

    Read the article

  • Code Contracts: How do they look after compiling?

    - by DigiMortal
      When you use new tools that also generate something at the code level, it is a good idea to check out what additions are made to your code during compilation. Code contracts have a simple syntax when we write code in Visual Studio, but what happens after compilation? Are our methods the same as they look in code, or are they different after compilation? In this posting I will show you how code contracts look after compiling.

      In my previous examples about code contracts I used a randomizer class with a method called GetRandomFromRangeContracted:

          public int GetRandomFromRangeContracted(int min, int max)
          {
              Contract.Requires<ArgumentOutOfRangeException>(
                  min < max,
                  "Min must be less than max"
              );

              Contract.Ensures(
                  Contract.Result<int>() >= min &&
                  Contract.Result<int>() <= max,
                  "Return value is out of range"
              );

              return _generator.Next(min, max);
          }

      Okay, it would be nice to dream of similar code when we open our assembly with Reflector and disassemble it. But... this time we have something interesting. While reading this code, don't feel uncomfortable about the names of the variables. This is disassembled code; the .NET Framework internally allows these names. It is our compilers that don't accept them when we are building our code.

          public int GetRandomFromRangeContracted(int min, int max)
          {
              int Contract.Old(min);
              int Contract.Old(max);
              if (__ContractsRuntime.insideContractEvaluation <= 4)
              {
                  try
                  {
                      __ContractsRuntime.insideContractEvaluation++;
                      __ContractsRuntime.Requires<ArgumentOutOfRangeException>(
                          min < max,
                          "Min must be less than max", "min < max");
                  }
                  finally
                  {
                      __ContractsRuntime.insideContractEvaluation--;
                  }
              }
              try
              {
                  Contract.Old(min) = min;
              }
              catch (Exception exception1)
              {
                  if (exception1 == null)
                  {
                      throw;
                  }
              }
              try
              {
                  Contract.Old(max) = max;
              }
              catch (Exception exception2)
              {
                  if (exception2 == null)
                  {
                      throw;
                  }
              }
              int CS$1$0000 = this._generator.Next(min, max);
              int Contract.Result<int>() = CS$1$0000;
              if (__ContractsRuntime.insideContractEvaluation <= 4)
              {
                  try
                  {
                      __ContractsRuntime.insideContractEvaluation++;
                      __ContractsRuntime.Ensures(
                          (Contract.Result<int>() >= Contract.Old(min)) &&
                          (Contract.Result<int>() <= Contract.Old(max)),
                          "Return value is out of range",
                          "Contract.Result<int>() >= min && Contract.Result<int>() <= max");
                  }
                  finally
                  {
                      __ContractsRuntime.insideContractEvaluation--;
                  }
              }
              return Contract.Result<int>();
          }

      As we can see, contracts are not simply if-then-else checks and exception throwing. There is a counter that is incremented before the checks and decremented after them, whatever the result of the check was. One thing that is annoying for me is the null checks for exception1 and exception2. Is there really some situation possible where null is thrown instead of an instance that is an Exception or inherits from Exception?

      Conclusion

      Code contracts are a more complex mechanism than they seem at the code level. Internally, more is done than we see. I don't say it is wrong; it is just good to know how our code looks after compiling.
      Looking at this example, it is clear that we also need performance tests for contracted code, to see how heavy its impact on system performance is when we run code that makes heavy use of code contracts.

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 – Memory Profiler)

    - by ToStringTheory
      One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind... In trying to make code efficient, there are many different factors that play a part - size of the project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code - the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to achieve the efficiency that I am looking for. One program that I have recently come to learn about - Red Gate ANTS Performance (CLR) and Memory Profiler - gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS Memory Profiler by compiling some hideous example code to test against.

      Notice

      As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

      Introduction - Part 2

      In my last post, I reviewed the feature-packed Red Gate ANTS Performance Profiler. Separate from the Performance Profiler is the Red Gate ANTS Memory Profiler - a simple, easy-to-use utility for checking how your application is handling memory management... a tool that I wish I had had many times in the past. This post will focus on the ANTS Memory Profiler and its tool set. The Memory Profiler has a large assortment of features just like the Performance Profiler, with the new-session screen looking nearly identical.

      ANTS Memory Profiler

      Memory profiling is not something that I have to do very often... In the past, in the few cases where I've had to find a memory leak in an application, I have usually just traced the code of the operations being performed to look for oddities... Sadly, I have come across more undisposed/non-using'ed IDisposable objects, usually from ADO.NET, than I would ever like to see. Support is not fun; however, using ANTS Memory Profiler makes this task easier. For this round of testing, I am going to use the same code from my previous example, using the WPF application.

      This time, I will choose the 'Profile Memory' option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project and then launches the ANTS Memory Profiler to help. It prepopulates all of the fields with the current project information, and all I have to do is select the 'Start Profiling' option.

      When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler. You start by getting to the point in your application that you want to profile, and then taking a 'Memory Snapshot'. This performs a full garbage collection and snapshots the managed heap. Using the same WPF app as before, I will go ahead and take a snapshot now.

      As you can see, ANTS is already giving me lots of information regarding the snapshot; however, this is just a snapshot.
      The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, then take another snapshot and perform a diff between the two to see what has changed. I am going to go ahead and generate 5000 primes, and then take another snapshot.

      As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last: the difference in memory usage, fragmentation, class usage, etc. If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots.

      If you look beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS. When you first do the comparison, you start on the Summary screen. You can either use the charts at the bottom or switch to the class list screen to get to the next step. Here is the class list screen:

      As you can see, it lists information about all of the instances between the snapshots, and at the bottom it gives you a way to filter by telling ANTS what your problem is. I am going to go ahead and select the Int16[] to look at the Instance Categorizer.

      Using the Instance Categorizer, you can travel backwards to see where all of the instances are coming from. It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control. This image will probably be even harder to see; however, using the 'Instance Retention Graph', you can trace an object's memory inheritance up the chain to see its roots as well. This is a simple example, as this is a known element. Usually you would be profiling an actual problem and comparing those differences. I know in the past I spotted a problem where a new context was created per page load, and it was rooted into the application through an event. As the application began to grow, performance and reliability problems started to emerge. A tool like this would have been a great way to identify the problem quickly.

      Overview

      Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.

      3 Biggest Pros:
      - Easy-to-use interface with lots of options for configuring the profiling session
      - Intuitive and helpful interface for drilling down from summary, to instance, to root graphs
      - ANTS provides an API for controlling the profiler; not many options, but still helpful

      2 Biggest Cons:
      - Inability to automatically snapshot the memory at an interval
      - Lack of complete integration with Visual Studio via an extension panel

      Ratings

      Ease of Use (9/10) – I really do believe that they have brought simplicity to the once difficult task of memory profiling. I especially liked how it stepped you further into the drilldown by directing you towards the best options.
      Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.
      Features (7/10) – A really great set of features all around in the application; however, I would like to see some ability to automatically trigger snapshots based on intervals or framework-level items such as events.
      Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!
      UI / UX (9/10) – The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!
      Overall (9/10) – Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.

      Thank you for reading this far, or for skipping ahead - I told you it would be shorter! Please, if you do try the product, drop me a message and let me know what you think! I would love to hear any opinions you may have on the product.

      Code

      Feel free to download the code I used above - download via DropBox

    Read the article

  • Let your Signature Experience drive IT decision-making

    - by Tania Le Voi
      Today's CIO job description: "Align IT infrastructure and solutions with business goals and objectives; AND while doing so, reduce costs; BUT ALSO be innovative and ensure the architectures are adaptable and agile, as we need to act today on the changes that we may request tomorrow."

      Sound like an unachievable request? The fact is, reality dictates that CIOs are put under this type of pressure to deliver more with less.

      In a past career phase I spent a few years as an IT Relationship Manager for a large insurance company. This is a role that we see all too infrequently in many of our customers, and it's a shame. The purpose of this role was to build a bridge - a relationship - between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and, more importantly, that objectives were commonly understood; hence services and projects were delivered on time, on budget, and actually solved the business problems.

      In reality IT and the business are already married, but the relationship is most often defined as 'supplier' of IT rather than 'trusted partner'. To deliver business value, they need to understand how to work together effectively to attain this next level of partnership. The business cannot compete if they do not get a new product to market ahead of the competition, or, for example, act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the application or service fails and the business takes a hit from bad publicity, from trending topics on social media, and from lost direct revenue in online channels. For this reason alone, business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester's recent study that found 'many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations'.

      IT Meet the Business; Business Meet IT

      So what is going on? We talk about aligning the business with IT, but the reality is it's difficult to do. Like any relationship, each side has different goals and needs, and language can be a barrier: business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way? Well, now we can, with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2!

      Enterprise Manager's Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria. No longer does IT need to be focused solely on speeds and feeds, performance and throughput; now IT can understand its impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar. Similarly, the line of business can now understand which IT services are most critical for the KPIs they care about. There is a good deal of resources on Oracle Technology Network that describe the functionality of these products, so I won't rehash them here. What I want to talk about is what you do with these products. What's next after we meet? Where do you start?

      Step 1: Identify the Signature Experience.
      This is THE business process (or set of processes) that is core to the business: the one that drives the economic engine, the process that customers recognise the company brand for, that defines its reputation and the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this, the rest gets easier, because Enterprise Manager 12c makes it easy.

      Step 2: Map the Signature Experience to the underlying IT.

      Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end; think of it like mapping out a critical path: the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end-to-end customer satisfaction story transparent. Work with the business to define meaningful key performance indicators (KPIs) and thresholds that enable you to report and act upon them.

      Step 3: Observe the data over time.

      You now have meaningful insight into every step enabling your signature experience, and you understand the implications of that experience for your underlying IT. Watch it for a few months, see what happens, then reconvene with your business stakeholders and set clear and measurable targets which can redefine service levels.

      Step 4: Change the information about which you and the business communicate.

      It's amazing what happens when you and the business speak the same language. You'll be able to make more informed business and IT decisions. From here, IT can identify where and how budget is spent, whether on the level of support, performance, capacity, HA, DR, certification, etc. IT SLAs no longer need to be focused on metrics such as % availability, but can be structured around business process requirements.

      The power of this way of thinking doesn't end here. IT staff get to see and understand how their own role contributes to the business, making them accountable for the business service. Take it a step further and appraise your staff on the business competencies that are linked to service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience that is core to the business and therefore key to the CEO. Chargeback or showback becomes easier to justify, as the 'cost per day of outage' can be more easily calculated; the business will be able to translate the cost to the business into the cost/value of the underlying IT that supports it.

      Used this way, Oracle Enterprise Manager 12c is a key enabler of a harmonious relationship between the end customer, the business and IT, delivering ultimate service and satisfaction. Just engage with the business upfront, make the signature experience visible, and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned, to enable you to implement this new way of working.

    Read the article

  • Are you reporting Visual Studio 2012 issues to Microsoft correctly?

    - by Tarun Arora
      Issues you may run into while using Visual Studio need to be reported to the Microsoft product team via the Microsoft Connect site. The Microsoft team then tries to reproduce the issue using the details provided by you. If the information you provide isn't sufficient to reproduce the issue, the team tries to contact you for specifics; this not only increases the cycle time to resolution, but the lack of communication can also result in issues not being resolved. So, when I report an issue, one part of me tells me to include as much detail about the issue as I can - bundling screenshots, repro steps, system information, Visual Studio version information... - while the other half tells me this is too time-consuming: leave it for now and come back to fill in all these details later. Reporting a bug without including the supporting information is an invitation to excuses.

      Microsoft has completely changed this experience for VS 2012. The Microsoft Visual Studio Feedback tool is designed to simplify the process of providing feedback and reporting issues to Microsoft that you may encounter while using Microsoft Visual Studio 2012.

      Note - The Microsoft Visual Studio 2012 Feedback client currently only works with VS 2012 and not with any other version of Visual Studio.

      Setting up the Microsoft Visual Studio 2012 Feedback client

      Open Visual Studio and, from the Tools menu, select Extensions and Updates. In the Extensions and Updates window, click Online in the left pane, search for 'feedback', then download and install the Microsoft Visual Studio 2012 Feedback Tool by following the instructions in the wizard.

      Note - Restarting Visual Studio after the install is a must!

      How to report a bug for Visual Studio 2012

      Click on the Help menu and choose Report a Bug. You should see a Microsoft Visual Studio 2012 Feedback Tool icon come up in the system tray area. You'll need to accept the privacy statement.

      You have the option of reporting the feedback as private or public. Microsoft works with several partners, MVPs and vendors who get access to early bits of Microsoft products for evaluation. This is where it becomes essential to report the feedback privately. I would choose the Public option otherwise; after all, if it's out there in public, others can discover it and add to it easily.

      You now have the option to report a new issue or add to an existing issue. Should you choose to add to an existing issue, you should have the feedback ID of the issue available. This can be obtained from the Microsoft Connect site. For now I am going to focus on reporting a new piece of feedback privately.

      Filling out the feedback details

      You will notice that VsInfo.xml and DxDiagOutput.txt are automatically attached as you enter this screen (more on that later).

      Feedback Type

      Choose the feedback type from Performance, Hang, Crash, or Other.

      Note - The record button will only be enabled once you have chosen the feedback type. Bug-repro recording is not available for Windows Server 2008.

      Effective Title and Description

      Enter a title that helps differentiate the bug when it appears in a list, so that the team can group it with any related bugs, assign it to a developer more effectively, and resolve it more quickly.

      Example: Imagine that you are submitting a bug because you tried to install Service Pack 1 and got a message that Visual Studio is not installed even though it is.

      Helpful: Installed Visual Studio version not detected during Service Pack 1 setup.
      Not helpful: Service Pack 1 problem.
      Tip: Write the problem description first, and then distil it to create a title.

      Example description:

      Helpful: When I run Service Pack 1 Setup, I get the message "No Visual Studio version is detected" even though I have Visual Studio 2010 Ultimate and Visual C++ 2010 Express installed on my machine. Even though I uninstalled both editions, and then reinstalled first Ultimate and then Express, I still get the message.

      Record: Becoming a first-class citizen

      Often a repro recording is invaluable for describing and deciphering the issue, so please use this feature to send actionable feedback. The record repro feature works differently depending on the feedback type you selected; details for each recording option follow. You can start recording simply by selecting a feedback type and clicking on the "Record" button.

      When "Performance" is the bug type: When the Microsoft Visual Studio trace recorder starts, perform the actions that show the performance problem you want to report, and then click on the "Stop Recording" button as soon as you experience the performance problem. Because the tool optimizes trace collection, you can run it for as long as it takes to show the problem, up to two hours. Note that you need to stop recording as soon as the performance issue occurs, because the tool captures only the last couple of minutes of your actions to optimize the trace collection. After you stop the recording, the tool takes up to two minutes to assemble the data and attach an ETLTrace.zip file to your bug report. The data includes information about Windows events and the Visual Studio code path. Note that running the Microsoft Visual Studio trace recorder requires elevated user privileges.

      When "Crash" is the bug type: When the dialog box appears, select the running Visual Studio instance for which you want to show the steps that cause a crash. When the crash occurs, click on the "Stop Record" button. After you do this, two files are attached to your bug report: an AutomaticCrashDump.zip file that contains information about the crash, and a ReproSteps.zip file that shows the repro steps. Repro steps are captured by the Windows Problem Steps Recorder. Note that you can pause the recording and resume later, or, for a specific step, add additional comments.

      When "Hang" is the bug type: The process for recording the steps that cause a hang resembles the one for crashes. The difference is that you can collect a dump file even after VS hangs: start the feedback tool either from the system tray or by starting a new instance of VS, select "Hang" as the feedback type, and click on the "Record" button. You will be prompted for which VS instance to collect a dump about; select the VS instance that hung. The tool collects a dump file regarding the hang, called MiniDump.zip, and attaches it to your bug report.

      When "Other" is the bug type: When the Problem Steps Recorder starts, perform the actions that show the issue you want to report and then choose the "Stop" button. You can pause the recording and resume later, or, for a specific step, add additional comments. Once you're done, ReproSteps.zip is added to your bug report.

      Pre-attached files

      It is essential for Microsoft to know what version of the product you are currently using and what the current configuration of your system is.

      Note - The total size of all attachments in a bug report cannot exceed 2 GB, and every uncompressed attachment must be smaller than 512 MB.
      We recommend that you assemble all of your attachments, compress them together into a .zip file, and then attach the .zip file.

      Taking a screenshot

      Associate a screenshot by clicking the Take Screenshot button and choosing either the entire desktop, a specific monitor (useful if you are working in a multi-monitor configuration) or the specific window in question.

      And finally... click Submit.

      If you need further help, more details can be found here. You can view your feedback online by using the following URL: https://connect.microsoft.com/VisualStudio/SearchResults.aspx?SearchQuery=<feedbackId>

      Happy bug logging!

    Read the article

  • Mastering JPA 2.0, EJB 3.1 and JSF 2.0! Java EE 6 Development on WebLogic Server 12c | WebLogic Channel

    - by ???02
      In February 2012, Oracle released WebLogic Server 12c with full support for Java EE 6. This article, the third in the series, walks through building a Java EE 6 application against WebLogic Server 12c using Oracle Enterprise Pack for Eclipse (OEPE) 12c, covering three core specifications: JPA (Java Persistence API) 2.0 for object-relational mapping in the persistence layer, EJB (Enterprise JavaBeans) 3.1 session beans for the business logic layer, and JSF (JavaServer Faces) 2.0 for the presentation layer.

      O/R mapping with JPA 2.0

      JPA maps Java objects to relational tables so that the persistence layer can be written without hand-coded SQL. Here we use JPA to map the EMPLOYEES table of the Oracle Database HR sample schema to an entity class. With Java EE 6 and JPA 2.0, annotations replace most of the XML mapping configuration, keeping the mapping information next to the code it describes.

      First, create the project. In OEPE, select File -> New -> Other, and in the New dialog choose Web -> Dynamic Web Project, then click Next. In the Dynamic Web Project dialog, enter "OOW" as the Project name, and under Target Runtime click New Runtime. In the New Server Runtime Environment dialog, select Oracle -> Oracle WebLogic Server 12c (12.1.1), click Next, set WebLogic home to C:\Oracle\Middleware\wlserver_12.1, and click Finish (WebLogic Home and Java home are filled in automatically). Back in the Dynamic Web Project dialog, click Finish.

      Enabling the JPA 2.0 facet

      Next, enable JPA 2.0 on the project. In Eclipse's Project Explorer, right-click the OOW project and choose Properties. In the properties dialog, open Project Facets and check JPA (confirm in the Details pane that the facet version is JPA 2.0), then click Further configuration available.

      In the Modify Faceted Project dialog, click Add Connection under Connection. In the New Connection Profile dialog, select Oracle Database Connection as the Connection Profile Type and click Next. On the Specify a Driver and Connection Details page, choose Oracle Database 10g Driver Default under Drivers and set the connection properties as follows:

      SID: xe
      Host: localhost
      Port number: 1521
      User name: HR
      Password: hr

      Click Test Connection and confirm that "Ping Succeeded!" is displayed, then click Finish, OK in the Modify Faceted Project dialog, and OK in the Properties for OOW dialog.

      Generating an entity from a table

      Now generate the entity class. In Project Explorer, right-click the OOW project and choose JPA Tools -> Generate Entities from Tables.... In the Generate Custom Entities dialog, select HR as the Schema and EMPLOYEES under Tables, then click Next. On the following page click Next again, and on the Customize Default Entity Generation page set the Package to "model" and click Finish.

      This generates the entity class model.Employee.java from the EMPLOYEES table. Open Java Resources -> src -> model -> Employee.java in the OOW project to see it:

          package model;

          import java.io.Serializable;
          import java.math.BigDecimal;
          import java.util.Date;
          import java.util.Set;
          import javax.persistence.Column;
          // ...remaining imports omitted...

          /**
           * The persistent class for the EMPLOYEES database table.
           */
          @Entity
          @Table(name="EMPLOYEES")
          public class Employee implements Serializable {
              private static final long serialVersionUID = 1L;

              @Id
              @Column(name="EMPLOYEE_ID")
              private long employeeId;

              @Column(name="COMMISSION_PCT")
              private BigDecimal commissionPct;

              @Column(name="DEPARTMENT_ID")
              private BigDecimal departmentId;

              private String email;

              @Column(name="FIRST_NAME")
              private String firstName;

              @Temporal(TemporalType.DATE)
              @Column(name="HIRE_DATE")
              private Date hireDate;

              @Column(name="JOB_ID")
              private String jobId;

              @Column(name="LAST_NAME")
              private String lastName;

              @Column(name="PHONE_NUMBER")
              private String phoneNumber;

              private BigDecimal salary;

              //bi-directional many-to-one association to Employee
              // ...remainder omitted...
          }

      Note the annotations: @Entity marks the class as a JPA entity, @Table binds it to the EMPLOYEES table (it can be omitted when the class name matches the table name), @Id marks the primary key column, and @Temporal specifies how a java.util.Date field maps to a SQL date type. Fields whose names match their column names, such as email and salary, need no @Column annotation.

      Defining a JPQL named query

      Next, define a query with JPQL (Java Persistence Query Language). JPQL resembles SQL but is written against entities rather than tables, so it remains portable across databases. Here we add a named query, Employee.selectByName, that searches for employees whose firstName matches a parameter and orders the result by employeeId. Add a @NamedQueries annotation to the entity:

          import java.util.Date;
          import java.util.Set;
          import javax.persistence.Column;
          // ...remaining imports omitted...

          /**
           * The persistent class for the EMPLOYEES database table.
           */
          @Entity
          @Table(name="EMPLOYEES")
          @NamedQueries({
              @NamedQuery(name="Employee.selectByName",
                  query="select e from Employee e where e.firstName like :name order by e.employeeId")
          })
          // ...remainder omitted...

      Finally, open OOW -> JPA Content -> persistence.xml and, on the Connection tab, set the Database JTA data source to "jdbc/test". With Java EE 6 and JPA 2.0, this one data-source setting is essentially all the XML configuration the persistence layer needs; everything else is expressed in annotations.

      Implementing the business logic with EJB 3.1

      Next, implement the business logic as an EJB 3.1 stateless session bean that calls the JPA 2.0 persistence layer. Since EJB 3.1, a session bean can be a plain annotated class: no deployment descriptor XML is required, and no separate business interface is needed (a no-interface view is enough). The bean exposes one business method, public List<Employee> getEmp(String keyword), which returns the Employee rows whose firstName contains the keyword.

      To create the bean, right-click the OOW project and choose New -> Other, then select EJB -> Session Bean (EJB 3.x) and click Next. In the Create EJB 3.x Session Bean dialog, set the Java package to "ejb" and the class name to "EmpLogic", choose Stateless as the State Type, check No-interface, and click Finish.

      The wizard generates the skeleton of the stateless session bean, EmpLogic.java. The @Stateless annotation is all that is needed to make the class a stateless session bean, and @LocalBean exposes its no-interface view:

          package ejb;

          import javax.ejb.LocalBean;
          import javax.ejb.Stateless;
          // ...remaining imports omitted...
          import model.Employee;

          @Stateless
          @LocalBean
          public class EmpLogic {
              public EmpLogic() {
              }
          }

      To use JPA from the bean, inject an EntityManager. The EntityManager is the entry point for all CRUD operations against the persistence layer, and the container injects it when the field is annotated with @PersistenceContext. The unitName attribute must match the name of the persistence-unit element in persistence.xml (here, "OOW"):

          package ejb;

          import javax.ejb.LocalBean;
          import javax.ejb.Stateless;
          import javax.persistence.EntityManager;
          import javax.persistence.PersistenceContext;
          // ...remaining imports omitted...
          import model.Employee;

          @Stateless
          @LocalBean
          public class EmpLogic {
              @PersistenceContext(unitName = "OOW")
              private EntityManager em;

              public EmpLogic() {
              }
          }

      Finally, implement the business method. It builds a LIKE pattern from the keyword and runs the Employee.selectByName named query defined earlier:

          package ejb;

          import java.util.List;
          import javax.ejb.LocalBean;
          import javax.ejb.Stateless;
          import javax.persistence.EntityManager;
          import javax.persistence.PersistenceContext;
          // ...remaining imports omitted...
          import model.Employee;

          @Stateless
          @LocalBean
          public class EmpLogic {
              @PersistenceContext(unitName = "OOW")
              private EntityManager em;

              public EmpLogic() {
              }

              @SuppressWarnings("unchecked")
              public List<Employee> getEmp(String keyword) {
                  StringBuilder param = new StringBuilder();
                  param.append("%");
                  param.append(keyword);
                  param.append("%");
                  return em.createNamedQuery("Employee.selectByName")
                           .setParameter("name", param.toString())
                           .getResultList();
              }
          }

      That completes the EJB 3.1 stateless session bean. The next article covers the presentation layer with JSF 2.0, as well as building RESTful Web services with JAX-RS.

    Read the article

  • How to set up spf records to send mail from google hosted apps to gmail addresses

    - by Chris Adams
    Hi there, I'm trying to work out why email I send from one domain I own is rejected by another that I own, and while I think it may be related to how I've setup spf records, I'm not sure what steps I need to take to fix it. Here's the error message I receive: Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550-Verification failed for <[email protected]> 550-No Such User Here 550 Sender verify failed (state 14). Here's the response from [email protected] Delivered-To: [email protected] Received: by 10.86.92.9 with SMTP id p9cs85371fgb; Wed, 2 Sep 2009 22:33:32 -0700 (PDT) Received: by 10.90.205.4 with SMTP id c4mr2406190agg.29.1251956007562; Wed, 02 Sep 2009 22:33:27 -0700 (PDT) Return-Path: <[email protected]> Received: from verifier.port25.com (207-36-201-235.ptr.primarydns.com [207.36.201.235]) by mx.google.com with ESMTP id 26si831174aga.24.2009.09.02.22.33.25; Wed, 02 Sep 2009 22:33:26 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 207.36.201.235 as permitted sender) client-ip=207.36.201.235; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 207.36.201.235 as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; s=auth; d=port25.com; h=Date:From:To:Subject:Message-Id:In-Reply-To; [email protected]; bh=GRMrcnoucTl4upzqJYTG5sOZMLU=; b=uk6TjADEyZVRkceQGjH94ZzfVeRTsiZPzbXuhlqDt1m+kh1zmdUEoiTOzd89ryCHMbVcnG1JajBj 5vOMKYtA3g== DomainKey-Signature: a=rsa-sha1; c=nofws; q=dns; s=auth; d=port25.com; b=NqKCPK00Xt49lbeO009xy4ZRgMGpghvcgfhjNy7+qI89XKTzi6IUW0hYqCQyHkd2p5a1Zjez2ZMC l0u9CpZD3Q==; Received: from verifier.port25.com (127.0.0.1) by verifier.port25.com (PowerMTA(TM) v3.6a1) id hjt9pq0hse8u for <[email protected]>; Thu, 3 Sep 2009 01:26:52 -0400 (envelope-from <[email protected]>) Date: Thu, 3 Sep 2009 01:26:52 -0400 From: [email protected] To: [email protected] Subject: Authentication Report Message-Id: <[email protected]> Precedence: junk (auto_reply) In-Reply-To: <[email protected]> This message is an automatic response from Port25's authentication verifier service at verifier.port25.com. The service allows email senders to perform a simple check of various sender authentication mechanisms. It is provided free of charge, in the hope that it is useful to the email community. While it is not officially supported, we welcome any feedback you may have at <[email protected]>. Thank you for using the verifier, The Port25 Solutions, Inc. team ========================================================== Summary of Results ========================================================== SPF check: pass DomainKeys check: neutral DKIM check: neutral Sender-ID check: pass SpamAssassin check: ham ========================================================== Details: ========================================================== HELO hostname: fg-out-1718.google.com Source IP: 72.14.220.158 mail-from: [email protected] ---------------------------------------------------------- SPF check details: ---------------------------------------------------------- Result: pass ID(s) verified: [email protected] DNS record(s): stemcel.co.uk. 14400 IN TXT "v=spf1 include:aspmx.googlemail.com ~all" aspmx.googlemail.com. 
7200 IN TXT "v=spf1 redirect=_spf.google.com" _spf.google.com. 300 IN TXT "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ?all" ---------------------------------------------------------- DomainKeys check details: ---------------------------------------------------------- Result: neutral (message not signed) ID(s) verified: [email protected] DNS record(s): ---------------------------------------------------------- DKIM check details: ---------------------------------------------------------- Result: neutral (message not signed) ID(s) verified: NOTE: DKIM checking has been performed based on the latest DKIM specs (RFC 4871 or draft-ietf-dkim-base-10) and verification may fail for older versions. If you are using Port25's PowerMTA, you need to use version 3.2r11 or later to get a compatible version of DKIM. ---------------------------------------------------------- Sender-ID check details: ---------------------------------------------------------- Result: pass ID(s) verified: [email protected] DNS record(s): stemcel.co.uk. 14400 IN TXT "v=spf1 include:aspmx.googlemail.com ~all" aspmx.googlemail.com. 7200 IN TXT "v=spf1 redirect=_spf.google.com" _spf.google.com. 300 IN TXT "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ?all" ---------------------------------------------------------- SpamAssassin check details: ---------------------------------------------------------- SpamAssassin v3.2.5 (2008-06-10) Result: ham (-2.6 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SPF_PASS SPF: sender matches SPF record -2.6 BAYES_00 BODY: Bayesian spam probability is 0 to 1% [score: 0.0000] 0.0 HTML_MESSAGE BODY: HTML included in message I've registered the SPF records for my domain, as advised here. Both domains validate according to Kitterman's SPF record testing tools, so I'm somewhat confused about this. I also have the catchall address set up on the stemcel.co.uk domain here, but I don't have one set up for chrisadams.me.uk. Instead, we have the following forwarders set up: [email protected] to [email protected] [email protected] to [email protected] [email protected] to [email protected] [email protected] to [email protected] Any ideas how to get this working? I'm not sure what I should be looking for here.
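A quick way to see exactly which TXT/SPF strings each domain publishes is to query them directly. A minimal sketch, assuming the third-party dnspython package; the two domain names are the ones from the post:

    import dns.resolver  # third-party: pip install dnspython

    for domain in ("stemcel.co.uk", "chrisadams.me.uk"):
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except Exception as exc:
            print(domain, "TXT lookup failed:", exc)
            continue
        for rdata in answers:
            text = b"".join(rdata.strings).decode()
            if text.startswith("v=spf1"):
                print(domain, "publishes SPF:", text)

Comparing the two outputs shows whether the sending domain publishes any SPF policy at all, which is the first thing to rule out before digging into the receiving server's sender-verification callout.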

    Read the article

  • Using pfSense, OpenVPN Connects but Still Can't See the Network

    - by nicorellius
    I am having an OpenVPN issue. I have a pfSense box at home configured to allow traffic through a VPN tunnel. The client computer is Windows XP Home, behind a standard Comcast connection and a Netgear wireless router. I use OpenVPN to access my work network (the network I am trying to connect out of in this post) from home (with an XP Pro machine behind pfSense), and this works fine. The client config is similar but has the changes specific to my setup... Here is my XP Home config: client dev tun proto tcp remote pfsense.*.org 1194 (starred out by me) resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert client.crt key client.key ns-cert-type server comp-lzo verb 3 When I launch the OpenVPN GUI, the Tunnel TAP network connection turns red, and I can right-click that to connect to the server. Everything seems to work fine until I browse for the actual network. The Tunnel TAP connection turns green and it says connected to 10.1.1.6 (I have tried different IP pools here too with no luck). I can see the internal network fine, but my home network behind pfSense is not there. I have tried browsing there by using Tools > Map Network Drive, and by using the browser, with no success. When I open the command line on the client and use the ipconfig -all command, I get the following: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : TAP-Win32 Adapter V9 Physical Address. . . . . . . . . : *** (starred out by me) Dhcp Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 10.1.1.6 Subnet Mask . . . . . . . . . . . : 255.255.255.252 Default Gateway . . . . . . . . . : DHCP Server . . . . . . . . . . . : 10.1.1.5 Lease Obtained. . . . . . . . . . : Monday, March 15, 2010 1:18:37 PM Lease Expires . . . . . . . . . . : Tuesday, March 15, 2011 1:18:37 PM I noticed that the default gateway is not present. Could this be my problem? I am still relatively new to firewalls, VPN, and network configuration, so I'm sure I am messing up something simple. Oh yeah, I should note that I have firewall rules configured for pfSense to allow traffic through the WAN and the LAN. At first there was just the WAN firewall rule, because that is what I got from the literature I was reading. I then created a LAN rule as well, but I'm not sure if this was correct. Neither way works, though. Screen shots below: Any help is much appreciated.
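Since the tunnel comes up but the far-side LAN is invisible, one cheap check from the client is whether anything on that subnet answers at all, independent of Windows network browsing. A minimal sketch using only Python's standard library; the address below is a hypothetical host on the LAN behind pfSense, and port 445 assumes a Windows file server:

    import socket

    # hypothetical far-side host and port (445 = SMB on a Windows machine)
    target = ("192.168.1.10", 445)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect(target)
        print("host is reachable through the tunnel")
    except OSError as exc:
        print("host is not reachable:", exc)
    finally:
        s.close()

If this fails while the tunnel shows connected, the problem is most likely routing (consistent with the missing default gateway and the absent route push), not name browsing.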

    Read the article

  • Exchange 2010 PST-Export fails

    - by Chake
    I'm horribly failing at exporting Exchange mailboxes to PST files. Perhaps you are able to help me?
The System
I'm running some legacy machines here. The one I'm currently working on (CurrentDC) is a Windows 2008 R2 Server with Exchange 2010 on it. Exchange seems to be poorly patched: [PS] C:\>get-exchangeserver Name Site ServerRole Edition AdminDisplayVersion ---- ---- ---------- ------- ------------------- OldDC None Enterprise Version 6.5 (Bui... CurrentDC company.local Mailbox,... Enterprise Version 14.0 (Bu...
The Problem
After some trouble I managed to get the Export-Mailbox command to run: [PS] C:\>Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport According to several websites, that seems to be the right command to export the mailbox of the user "marco" to "C:\ExchangeExport". But after running the command an error occurs (I'm sorry, it is the German version of Windows 2008 - but if you translate Fehler with error and Vorgang with process you should be prepared enough to go ;)) [PS] C:\>Export-Mailbox -Identity marco -PSTFolderPath C:\ExchangeExport Fehler für Marco S ([email protected]). Ursache: Fehler bei diesem Vorgang., Fehlercode: -2147467259. + CategoryInfo : InvalidOperation: (0:Int32) [Export-Mailbox], RecipientTaskException + FullyQualifiedErrorId : 2317FD3A,Microsoft.Exchange.Management.RecipientTasks.ExportMailbox RunspaceId : 44415363-371e-44a1-a682-61e6a9b90c86 Identity : company.local/Company User/Marco S DistinguishedName : CN=Marco S,OU=Company User,DC=company,DC=local DisplayName : Marco S Alias : marco LegacyExchangeDN : /o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco PrimarySmtpAddress : [email protected] SourceServer : CurrentDC.company.local SourceDatabase : Mailbox Database 0279110169 SourceGlobalCatalog : CurrentDC SourceDomainController : TargetGlobalCatalog : CurrentDC TargetDomainController : TargetMailbox : TargetServer : TargetDatabase : MailboxSize : 0 B (0 bytes) IsResourceMailbox : False SIDUsedInMatch : SMTPProxies : SourceManager : SourceDirectReports : SourcePublicDelegates : SourcePublicDelegatesBL : SourceAltRecipient : SourceAltRecipientBL : SourceDeliverAndRedirect : MatchedTargetNTAccountDN : IsMatchedNTAccountMailboxEnabled : MatchedContactsDNList : TargetNTAccountDNToCreate : TargetManager : TargetDirectReports : TargetPublicDelegates : TargetPublicDelegatesBL : TargetAltRecipient : TargetAltRecipientBL : TargetDeliverAndRedirect : Options : Default SourceForestCredential : TargetForestCredential : TargetFolder : PSTFilePath : C:\ExchangeExport\marco.pst RecoveryMailboxGuid : RecoveryMailboxLegacyExchangeDN : RecoveryMailboxDisplayName : RecoveryDatabaseGuid : StandardMessagesDeleted : 0 AssociatedMessagesDeleted : 0 DumpsterMessagesDeleted : 0 MoveType : ExportToPST MoveStage : Validation StartTime : 05.10.2012 13:55:46 EndTime : 05.10.2012 13:55:46 StatusCode : -2147467259 StatusMessage : Fehler bei diesem Vorgang. ReportFile : C:\Program Files\Microsoft\Exchange Server\V14\Logging\MigrationLogs\export-Mailbox20121005-135545-8170000.xml ServerName : CurrentDC.company.local
What I have done
Well, I must say I'm quite clueless. I was wondering why MailboxSize is 0 - so I checked it: [PS] C:\>Get-MailboxStatistics marco | ft DisplayName, TotalItemSize, ItemCount DisplayName TotalItemSize ItemCount ----------- ------------- --------- Marco S 473 MB (496,011,572 bytes) 4173 Well, this is not 0 bytes - but I don't know what to do with this information.
Also, I had a look at the ReportFile mentioned in the output: <?xml version="1.0"?> <export-Mailbox> <TaskHeader> <RunningAs>NT-AUTORITÄT\SYSTEM</RunningAs> <Name>export-Mailbox</Name> <Type>ExportToPST</Type> <MaxBadItems>0</MaxBadItems> <Version>14.0.639.21</Version> <StartTime>10.05.2012 14:19:12</StartTime> <Options Identity="marco" PSTFolderPath="C:\ExchangeExport" DeleteContent="False" DeleteAssociatedMessages="False" GlobalCatalog="CurrentDC" MaxThreads="4" BadItemLimit="0" ValidateOnly="False" IncludeFolders="" ExcludeFolders="" StartDate="01.01.0001 00:00:00" EndDate="31.12.9999 23:59:59" SubjectKeywords="" ContentKeywords="" AllContentKeywords="" AttachmentFilenames="" SenderKeywords="" RecipientKeywords="" Locale="" /> </TaskHeader> <TaskDetails> <Item MailboxName="Marco S"> <Source> <Identity>company.local/Company User/Marco S</Identity> <DistinguishName>CN=Marco Sc,OU=Company User,DC=company,DC=local</DistinguishName> <DisplayName>Marco S</DisplayName> <Alias>marco</Alias> <LegacyExchangeDN>/o=Erste Organisation/ou=Erste administrative Gruppe/cn=Recipients/cn=marco</LegacyExchangeDN> <PrimarySmtpAddress>[email protected]</PrimarySmtpAddress> <SourceServer>CurrentDC.company.local</SourceServer> <SourceDatabase>Mailbox Database 0279110169</SourceDatabase> <IsResourceMailbox>False</IsResourceMailbox> <SourceGlobalCatalog>CurrentDC</SourceGlobalCatalog> </Source> <Target> <PSTFilePath>C:\ExchangeExport\marco.pst</PSTFilePath> </Target> <MailboxSize>0 B (0 bytes)</MailboxSize> <Duration>00:00:00</Duration> <Result IsWarning="False" ErrorCode="-2147467259">Fehler bei diesem Vorgang.</Result> </Item> </TaskDetails> <TaskFooter> <EndTime>10.05.2012 14:19:13</EndTime> <TotalSize>0 B (0 bytes)</TotalSize> <StandardMessagesDeleted>0</StandardMessagesDeleted> <AssociatedMessagesDeleted>0</AssociatedMessagesDeleted> <DumpsterMessagesDeleted>0</DumpsterMessagesDeleted> <Result ErrorCount="1" CompletedCount="0" WarningCount="0" /> </TaskFooter> </export-Mailbox> Do you have any clue? <UPDATE> Regarding the answer from downthepub, I tried to use UNC paths - no change. Also, I tried installing the management tools on a client and running the scripts from there - no luck there, either. </UPDATE> Thanks a lot for reading this mess!
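For what it's worth, migration report XML like the one quoted above is easy to pull apart programmatically when there are many failed exports to triage. A small sketch using Python's standard library; the file name is the one from the ReportFile line in the output:

    import xml.etree.ElementTree as ET

    # path taken from the ReportFile line above
    tree = ET.parse("export-Mailbox20121005-135545-8170000.xml")
    for item in tree.getroot().iter("Item"):
        result = item.find("Result")
        if result is not None:
            print(item.get("MailboxName"),
                  "ErrorCode:", result.get("ErrorCode"),
                  "-", (result.text or "").strip())

For the report above this prints the mailbox name together with error code -2147467259, which at least makes it easy to spot whether every mailbox fails the same way.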

    Read the article

  • Email sent from server with rDNS & SPF being blocked by Hotmail

    - by Canadaka
    I have been unable to send email to users on Hotmail or other Microsoft email servers for some time. It's been a major headache trying to find out why and how to fix the issue. The emails being blocked are sent from my domain, canadaka.net. I use Google Apps to host my regular email service for my @canadaka.net email addresses. I can send email from my desktop or Gmail to a Hotmail address without any problem. But any email sent from my server on behalf of canadaka.net is blocked, not even arriving in the junk email. The IP that the emails are being sent from is the same IP that my site is hosted on: 66.199.162.177 This IP is new to me since August 2010; I had a different IP for the previous 3-4 years. This IP is not on any credible spam lists: http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177 The one list my IP is listed on, spamcannibal.org, seems to be out of my control; it says "no reverse DNS, MX host should have rDNS - RFC1912 2.1". But since I use Google for my email hosting, I don't have control over setting up rDNS for all the MX records. I do have reverse DNS set up for my IP, though; it resolves to "mail.canadaka.net". I have signed up for SNDS and was approved. My IP says "All of the specified IPs have normal status." Sender Score: 100 https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14 My McAfee threat level seems fine. I have a TXT SPF record set up; I am currently using xname.org as my DNS, and they don't have a field for SPF, but their FAQ says to add the SPF info as a TXT entry. v=spf1 a include:_spf.google.com ~all Some "SPF checking" tools I've used detect that my domain has a valid SPF, but others don't, like Microsoft's SPF wizard; I think this is because it's specifically looking for an SPF record and not the TXT record. "No SPF Record Found. A and MX Records Available". From my home I can run "nslookup -type=TXT canadaka.net" and it returns: Server: google-public-dns-a.google.com Address: 8.8.8.8 Non-authoritative answer: canadaka.net text = "v=spf1 a include:_spf.google.com ~all" One strange thing I found is I'm unable to ping hotmail.com or msn.com or do a "telnet mail.hotmail.com 25". I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and did an ipconfig /flushdns but that had no effect. I am however able to connect with telnet to mx1.hotmail.com. This is what the email headers look like when I send to a Google email server and I receive the email with no troubles. You can see that SPF is passing.
Delivered-To: [email protected] Received: by 10.146.168.12 with SMTP id q12cs91243yae; Sun, 27 Feb 2011 18:01:49 -0800 (PST) Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242; Sun, 27 Feb 2011 18:01:49 -0800 (PST) Return-Path: Received: from canadaka.net ([66.199.162.177]) by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45; Sun, 27 Feb 2011 18:01:48 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) [email protected] Message-Id: <[email protected] Received: from coruscant ([127.0.0.1]:12907) by canadaka.net with [XMail 1.27 ESMTP Server] id for from ; Sun, 27 Feb 2011 18:01:29 -0800 Date: Sun, 27 Feb 2011 18:01:29 -0800 Subject: Test To: [email protected] From: XXXX Reply-To: [email protected] X-Mailer: PHP/5.2.13 I can send to Gmail and other email services fine. I don't know what I'm doing wrong!
UPDATE 1
I have been removed from Hotmail's IP block and am now able to send emails to Hotmail, but they are all going directly to the JUNK folder.
UPDATE 2
I used Telnet to send a test message to port25.com; it seems my SPF is not being detected. Result: neutral (SPF-Result: None) canadaka.net. SPF (no records) canadaka.net. TXT (no records) I do have a TXT record; it's been there for years, though I did change it a week ago. Other sites that allow you to check your SPF detect it, but some others like Microsoft's wizard don't. This is what my SPF record in my xname.org DNS file looks like: canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all" I did have a nameserver as my 4th option that doesn't have the TXT records since it doesn't support them. So I removed it from the list and instead added wtfdns.com as my 4th and 5th nameservers, which does support TXT.
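Since spamcannibal flagged missing reverse DNS, it is worth confirming from the outside what the PTR record for the sending IP actually says. A minimal sketch using only Python's standard library; the IP is the one from the post:

    import socket

    ip = "66.199.162.177"  # the sending IP from the post
    try:
        host, _aliases, _addrs = socket.gethostbyaddr(ip)
        print("PTR for", ip, "is", host)
    except socket.herror as exc:
        print("no PTR record for", ip, ":", exc)

If this prints mail.canadaka.net from an outside network, the rDNS complaint is stale and can be disputed; if it raises herror, the PTR delegation at the hosting provider still needs fixing.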

    Read the article

  • Trouble recovering MySQL InnoDB database after server crash

    - by Andy Shinn
    I had a server crash due to a broken iSCSI link (the filesystem went into read-only mode). After repairing the link and rebooting the machine (CentOS 5 / MySQL 5.1), the MySQL server would not start and gave the following error: 100603 19:11:46 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100603 19:11:46 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Warning: database page corruption or a failed InnoDB: file read of page 112541. InnoDB: Trying to recover it from the doublewrite buffer. InnoDB: Dump of the page: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): lots of binary data 100603 19:11:46 InnoDB: Page checksum 953720272, prior-to-4.0.14-form checksum 2641912043 InnoDB: stored checksum 617821918, prior-to-4.0.14-form stored checksum 2080617765 InnoDB: Page lsn 115 2632899642, low 4 bytes of lsn at page end 2641594600 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Dump of corresponding page in doublewrite buffer: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): more binary data 100603 19:11:46 InnoDB: Page checksum 908374788, prior-to-4.0.14-form checksum 824841363 InnoDB: stored checksum 912869634, prior-to-4.0.14-form stored checksum 2210927931 InnoDB: Page lsn 115 2635312169, low 4 bytes of lsn at page end 2633173354 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Also the page in the doublewrite buffer is corrupt. InnoDB: Cannot continue operation. InnoDB: You can try to recover the database with the my.cnf InnoDB: option: InnoDB: innodb_force_recovery=6 100603 19:11:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended Per the error message, I have tried setting set-variable=innodb_force_recovery=6 in the my.cnf to get to the data. This allows the MySQL server to start. But when I try to do a mysqldump of the database or a SELECT * INTO OUTFILE "filename" FROM broken_table; it seems to endlessly just export the same line over and over again. I have also tried http://code.google.com/p/innodb-tools/. But this tool fails with an error that 'blob' type is not supported. If I try to access the data using the PHP application it crashes MySQL: `100603 19:19:19 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8384512 read_buffer_size=131072 max_used_connections=2 max_threads=151 threads_connected=2 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338317 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x15d33f0 Attempting backtrace. 
You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x453aff00 thread_stack 0x40000 /usr/libexec/mysqld(my_print_stacktrace+0x24) [0x874364] /usr/libexec/mysqld(handle_segfault+0x346) [0x5c9166] /lib64/libpthread.so.0 [0x3a6e40eb10] /usr/libexec/mysqld(rec_get_offsets_func+0x30) [0x7cc310] /usr/libexec/mysqld [0x7674d8] /usr/libexec/mysqld(btr_search_info_update_slow+0x638) [0x768d48] /usr/libexec/mysqld(btr_cur_search_to_nth_level+0xc7d) [0x75f86d] /usr/libexec/mysqld [0x7dd1c1] /usr/libexec/mysqld(row_search_for_mysql+0x18b0) [0x7e03d0] /usr/libexec/mysqld(ha_innobase::general_fetch(unsigned char*, unsigned int, unsigned int)+0x7c) [0x7526fc] /usr/libexec/mysqld(handler::read_multi_range_next(st_key_multi_range**)+0x29) [0x6aed09] /usr/libexec/mysqld(QUICK_RANGE_SELECT::get_next()+0x194) [0x69a964] /usr/libexec/mysqld [0x6aafe9] /usr/libexec/mysqld(sub_select(JOIN*, st_join_table*, bool)+0x56) [0x635196] /usr/libexec/mysqld [0x63f9cd] /usr/libexec/mysqld(JOIN::exec()+0x950) [0x6497c0] /usr/libexec/mysqld(mysql_select(THD*, Item*, TABLE_LIST*, unsigned int, List&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsigned long long, select_result*, st_select_lex_unit*, st_select_lex*)+0x17b) [0x64b34b] /usr/libexec/mysqld(handle_select(THD*, st_lex*, select_result*, unsigned long)+0x169) [0x64bc79] /usr/libexec/mysqld [0x5d34b6] /usr/libexec/mysqld(mysql_execute_command(THD*)+0x4e5) [0x5d6b45] /usr/libexec/mysqld(mysql_parse(THD*, char const*, unsigned int, char const)+0x211) [0x5dc321] /usr/libexec/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x10b8) [0x5dd3f8] /usr/libexec/mysqld(do_command(THD*)+0xe6) [0x5dd9e6] /usr/libexec/mysqld(handle_one_connection+0x73d) [0x5d036d] /lib64/libpthread.so.0 [0x3a6e40673d] /lib64/libc.so.6(clone+0x6d) [0x3a6d4d3d1d] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd-query at 0x15de5e0 = SELECT DISTINCT count(DISTINCT i.itemid) as rowscount,i.hostid FROM items i WHERE ((i.itemid BETWEEN 000000000000000 AND 0999999 99999999)) AND i.type<9 AND (i.hostid IN (10017,10047,10050,10054,10056,10059,10062,10063,10064,10065,10066,10067,10068,10069,10070,10071,10072,10073,10074,10075,10076,10077,10078,10079,10080,10081,10082,10084,10088,10089,10090,10091,10092,10093,10094,10095,10096,10097,10098,10099,10100,10101,10102,10103,10104,10105,10106,10107,10108,10109)) GROUP BY i.hostid thd-thread_id=3 thd-killed=NOT_KILLED The manual page at contains information that should help you find out what is causing the crash. 100603 19:19:19 mysqld_safe Number of processes running now: 0 100603 19:19:19 mysqld_safe mysqld restarted` Before recovering from an older backup as a last resort, I am looking for any more suggestions. Thanks!
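One pattern that sometimes salvages more rows than a straight mysqldump under innodb_force_recovery is to walk the table in small primary-key ranges, so a corrupt page aborts one chunk instead of the whole dump. A rough sketch, assuming the mysql-connector-python package, placeholder credentials, and a hypothetical integer primary key named id; it also assumes keys are roughly dense so skipping ahead by the chunk size moves past a bad range:

    import mysql.connector  # third-party: pip install mysql-connector-python

    # placeholder credentials; "id" is a hypothetical integer primary key
    conn = mysql.connector.connect(user="root", password="secret", database="mydb")
    cur = conn.cursor()
    last_id, chunk = 0, 1000
    with open("salvage.tsv", "w") as out:
        while True:
            try:
                cur.execute("SELECT * FROM broken_table WHERE id > %s "
                            "ORDER BY id LIMIT %s", (last_id, chunk))
                rows = cur.fetchall()
            except mysql.connector.Error:
                last_id += chunk  # skip past the bad key range and keep going
                continue
            if not rows:
                break
            for row in rows:
                out.write("\t".join(str(col) for col in row) + "\n")
            last_id = rows[-1][0]
    conn.close()

This is only a stopgap for pulling data off a damaged tablespace; whatever survives should be reloaded into a freshly initialized InnoDB instance.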

    Read the article

  • Why does Process Explorer cause highly targeted failure of some applications / basic UI functions in a high-power EC2 Windows instance?

    - by Dan Nissenbaum
    Update: I have determined that Process Explorer itself - the program I am using to debug a performance issue - seems to be the cause of the issue. See note, with updated question, at end. I am running a high-power (cc2.8xlarge) Amazon AWS EC2 Windows instance off of a boot EBS volume, provisioned at 2500 PIOPS, which was created from a snapshot of a previous boot volume. My purpose with the instance is to use it as a development workstation with many developer tools installed, such as Visual Studio, a local XAMPP stack, etc. I have upwards of 40 programs installed on the machine. The usability of the instance as a development machine often works quite well. The RDP lag is adequately small. I have used it for hours on end without problems for some of my most intense development tasks. As a result, I have just purchased a reserved instance, and I opted to rebuild my development machine starting from scratch with a Windows Server 2012 AMI. After having installed all of my desired/required applications for development over this past week, again the machine seems to often work well and I have worked for up to an hour at a time without problems doing heavy development work. However, I continue to run into catastrophic OS usability issues that may prevent me from being able to rely on this machine as a development machine. I would like to track down the source of the problem, if there is an easily identifiable source. (Update: I have tracked down the source to be Process Explorer, the very program I was using to debug the problem. See update at end.) The issues are as follows. (These are some primary examples) Some applications, after a period of adequate responsiveness, suddenly begin to respond very, very slowly to basic user interface actions such as clicking on menus and pressing Ctrl-Tab to switch between open documents. Two examples are UltraEdit and PhpEd. It typically takes ~2 seconds for a menu to appear, and ~4 seconds to switch between open documents. Additionally, insertion point motion in the editor is lagged by upwards of ~2 seconds. Process Explorer, which I am using to help debug the problem, seems to run acceptably for a couple of minutes, but on multiple occasions Process Explorer itself hangs completely. It hangs at the same time as the problems noted above. When it hangs, it is 100% unresponsive. Clicking on its taskbar icon neither causes it to come to the top or go behind, and its viewable area is filled with nothing but a region partially containing pure white and partially containing incomplete windows widgets that are unreadable, and that never change. Waiting 10 minutes does not clear the problem. Attempting to force-quit Process Explorer by right-clicking on its taskbar icon and choosing "Close Window" takes about 5 full minutes to exit (Process Explorer itself can't be used to exit Process Explorer, and it is registered as a Task Manager substitute). Other programs work just fine during this time. For example, Chrome tabs flip very quickly back and forth, menus pop open instantly, web pages load quickly, and typing in forms/web applications inside the browser works promptly. Another example of an application that works crisply is Filemaker - its menus open instantly, and switching views in this application occurs promptly. Other applications also work without issue. Also, switching between applications occurs promptly as well. It is only a handful of applications that exhibit the problem, with some primary examples given above. 
At first I thought that EBS IOPS might be a problem. Therefore, I ran Performance Monitor, and watched the "Disk Transfers/sec" monitor in real time. At no point did this measure come anywhere close to hitting the 2500 PIOPS provisioned for the EBS volume. The RAM was also well under the limit (~10 GB used out of 60 GB). I did notice that one CPU core (out of 32 logical cores) was fully thrashing at 100% (i.e., ~3.1%) during the problematic periods. This seems to indicate that a single CPU core is handling the menus / flipping between open documents (for some applications only) / managing the Process Explorer user interface, and that this single core was hosed for some reason during the problematic periods. Also note that I have a desktop workstation (Windows 7) that I also use as a development machine, via a remote connection, with a nearly identical set of programs installed, and this desktop workstation does not exhibit any of the problems I've discussed above. I have been using it heavily for well over a year now. Any suggestions regarding either the source of the problem, or steps I might take to investigate the source of the problem, would be appreciated. Thanks. Note: After extensive testing & investigation, I have noticed that when I quit Process Explorer, the problem vanishes and the system performance returns to normal, and then reappears quickly when I run Process Explorer again (note: again, the performance problems only appear for a subset of applications - other applications work perfectly fine during the same period). My question is therefore (thankfully) more specific: Why does Process Explorer cause highly targeted failure of some applications (including itself) and basic UI functions, in a high-power EC2 Windows instance?
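To put numbers on the single pegged core during the bad periods without relying on Process Explorer itself, a per-core sampler is enough. A short sketch, assuming the third-party psutil package:

    import psutil  # third-party: pip install psutil

    # print per-core CPU usage once a second; a stuck UI thread shows up
    # as one core pinned near 100% while the rest stay idle
    for _ in range(30):
        loads = psutil.cpu_percent(interval=1, percpu=True)
        print(loads, "<- hottest core:", max(loads), "%")

Running this during one of the slow periods would confirm whether the 100%-on-one-core pattern appears only while Process Explorer is open.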

    Read the article

  • System halts for a fraction of a second every 2-3 seconds

    - by iSam
    I'm using Windows 7 on my HP ProBook 4250s. The problem I face is that my system halts for a fraction of second after every 2-3 seconds. These jerks are not letting me concentrate or work properly. This happens even when I'm just typing in notepad while no other application is running. I tried to install every driver from HP's website and there's no item in device manager marked with yellow icon. Following are my system specs: Machine: HP ProBook 4250s OS: Windows 7 professional RAM: 2GB Processor: Intel Core i3 2.27GHz Following is my HijackThis Log: **Logfile of HijackThis v1.99.1** Scan saved at 9:34:03 PM, on 11/13/2012 Platform: Unknown Windows (WinNT 6.01.3504) MSIE: Internet Explorer v9.00 (9.00.8112.16450) **Running processes:** C:\Windows\system32\taskhost.exe C:\Windows\System32\rundll32.exe C:\Windows\system32\Dwm.exe C:\Windows\Explorer.EXE C:\Windows\System32\igfxtray.exe C:\Windows\System32\hkcmd.exe C:\Windows\System32\igfxpers.exe C:\Program Files\Synaptics\SynTP\SynTPEnh.exe C:\Program Files\PowerISO\PWRISOVM.EXE C:\Program Files\AVAST Software\Avast\AvastUI.exe C:\Program Files\Free Download Manager\fdm.exe C:\Windows\system32\wuauclt.exe C:\Program Files\Windows Media Player\wmplayer.exe C:\Program Files\Microsoft Office\Office12\WINWORD.EXE C:\HijackThis\HijackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://bing.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = R3 - URLSearchHook: (no name) - {7473b6bd-4691-4744-a82b-7854eb3d70b6} - (no file) O2 - BHO: Adobe PDF Reader Link Helper - {06849E9F-C8D7-4D59-B87D-784B7D6BE0B3} - C:\Program Files\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll O2 - BHO: Babylon toolbar helper - {2EECD738-5844-4a99-B4B6-146BF802613B} - (no file) O2 - BHO: MrFroggy - {856E12B5-22D7-4E22-9ACA-EA9A008DD65B} - C:\Program Files\Minibar\Froggy.dll O2 - BHO: avast! WebRep - {8E5E2654-AD2D-48bf-AC2D-D17F00898D06} - C:\Program Files\AVAST Software\Avast\aswWebRepIE.dll O2 - BHO: Windows Live ID Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll O2 - BHO: Minibar BHO - {AA74D58F-ACD0-450D-A85E-6C04B171C044} - C:\Program Files\Minibar\Kango.dll O2 - BHO: Free Download Manager - {CC59E0F9-7E43-44FA-9FAA-8377850BF205} - C:\Program Files\Free Download Manager\iefdm2.dll O2 - BHO: HP Network Check Helper - {E76FD755-C1BA-4DCB-9F13-99BD91223ADE} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll O3 - Toolbar: (no name) - {98889811-442D-49dd-99D7-DC866BE87DBC} - (no file) O3 - Toolbar: avast! 
WebRep - {8E5E2654-AD2D-48bf-AC2D-D17F00898D06} - C:\Program Files\AVAST Software\Avast\aswWebRepIE.dll O4 - HKLM\..\Run: [IgfxTray] C:\Windows\system32\igfxtray.exe O4 - HKLM\..\Run: [HotKeysCmds] C:\Windows\system32\hkcmd.exe O4 - HKLM\..\Run: [Persistence] C:\Windows\system32\igfxpers.exe O4 - HKLM\..\Run: [SynTPEnh] %ProgramFiles%\Synaptics\SynTP\SynTPEnh.exe O4 - HKLM\..\Run: [PWRISOVM.EXE] C:\Program Files\PowerISO\PWRISOVM.EXE -startup O4 - HKLM\..\Run: [AdobeAAMUpdater-1.0] "C:\Program Files\Common Files\Adobe\OOBE\PDApp\UWA\UpdaterStartupUtility.exe" O4 - HKLM\..\Run: [AdobeCS6ServiceManager] "C:\Program Files\Common Files\Adobe\CS6ServiceManager\CS6ServiceManager.exe" -launchedbylogin O4 - HKLM\..\Run: [SwitchBoard] C:\Program Files\Common Files\Adobe\SwitchBoard\SwitchBoard.exe O4 - HKLM\..\Run: [ROC_roc_ssl_v12] "C:\Program Files\AVG Secure Search\ROC_roc_ssl_v12.exe" / /PROMPT /CMPID=roc_ssl_v12 O4 - HKLM\..\Run: [avast] "C:\Program Files\AVAST Software\Avast\avastUI.exe" /nogui O4 - HKLM\..\Run: [Wordinn English to Urdu Dictionary] "C:\Program Files\Wordinn\Urdu Dictionary\bin\Lugat.exe" -h O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files\Adobe\Reader 8.0\Reader\Reader_sl.exe" O4 - HKCU\..\Run: [Comparator Fast] "C:\Program Files\Interdesigner Software\Comparator Fast\ComparatorFast.exe" /STARTUP O4 - HKCU\..\Run: [Free Download Manager] "C:\Program Files\Free Download Manager\fdm.exe" -autorun O8 - Extra context menu item: Download all with Free Download Manager - file://C:\Program Files\Free Download Manager\dlall.htm O8 - Extra context menu item: Download selected with Free Download Manager - file://C:\Program Files\Free Download Manager\dlselected.htm O8 - Extra context menu item: Download video with Free Download Manager - file://C:\Program Files\Free Download Manager\dlfvideo.htm O8 - Extra context menu item: Download with Free Download Manager - file://C:\Program Files\Free Download Manager\dllink.htm O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\Office12\EXCEL.EXE/3000 O9 - Extra button: @C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll,-103 - {25510184-5A38-4A99-B273-DCA8EEF6CD08} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\NCLauncherFromIE.exe O9 - Extra 'Tools' menuitem: @C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll,-102 - {25510184-5A38-4A99-B273-DCA8EEF6CD08} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\NCLauncherFromIE.exe O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~2\Office12\REFIEBAR.DLL O9 - Extra button: Change your facebook look - {AAA38851-3CFF-475F-B5E0-720D3645E4A5} - C:\Program Files\Minibar\MinibarButton.dll O10 - Unknown file in Winsock LSP: c:\windows\system32\nlaapi.dll O10 - Unknown file in Winsock LSP: c:\windows\system32\napinsp.dll O10 - Unknown file in Winsock LSP: c:\program files\common files\microsoft shared\windows live\wlidnsp.dll O10 - Unknown file in Winsock LSP: c:\program files\common files\microsoft shared\windows live\wlidnsp.dll O11 - Options group: [ACCELERATED_GRAPHICS] Accelerated graphics O11 - Options group: [INTERNATIONAL] International O13 - Gopher Prefix: O17 - HKLM\System\CCS\Services\Tcpip\..\{920289D7-5F75-4181-9A37-5627EAA163E3}: NameServer = 8.8.8.8,8.8.4.4 O17 - HKLM\System\CCS\Services\Tcpip\..\{AE83ED2F-EF14-4066-ACE2-C4ED07A68EAA}: 
NameServer = 9.9.9.9,8.8.8.8 O18 - Protocol: ms-help - {314111C7-A502-11D2-BBCA-00C04F8EC294} - C:\Program Files\Common Files\Microsoft Shared\Help\hxds.dll O18 - Protocol: skype4com - {FFC8B962-9B40-4DFF-9458-1830C7DD7F5D} - C:\PROGRA~1\COMMON~1\Skype\SKYPE4~1.DLL O18 - Protocol: wlpg - {E43EF6CD-A37A-4A9B-9E6F-83F89B8E6324} - C:\Program Files\Windows Live\Photo Gallery\AlbumDownloadProtocolHandler.dll O18 - Filter hijack: text/xml - {807563E5-5146-11D5-A672-00B0D022E945} - C:\PROGRA~1\COMMON~1\MICROS~1\OFFICE12\MSOXMLMF.DLL O20 - AppInit_DLLs: c:\progra~2\browse~1\23787~1.43\{16cdf~1\browse~1.dll c:\progra~2\browse~1\22630~1.40\{16cdf~1\browse~1.dll O20 - Winlogon Notify: igfxcui - C:\Windows\SYSTEM32\igfxdev.dll O23 - Service: avast! Antivirus - AVAST Software - C:\Program Files\AVAST Software\Avast\AvastSvc.exe O23 - Service: Google Software Updater (gusvc) - Google - C:\Program Files\Google\Common\Google Updater\GoogleUpdaterService.exe O23 - Service: HP Support Assistant Service - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\HP Support Framework\hpsa_service.exe O23 - Service: HP Quick Synchronization Service (HPDrvMntSvc.exe) - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\Shared\HPDrvMntSvc.exe O23 - Service: HP Software Framework Service (hpqwmiex) - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\Shared\hpqWmiEx.exe O23 - Service: HP Service (hpsrv) - Hewlett-Packard Company - C:\Windows\system32\Hpservice.exe O23 - Service: Intel(R) Management and Security Application Local Management Service (LMS) - Intel Corporation - C:\Program Files\Intel\Intel(R) Management Engine Components\LMS\LMS.exe O23 - Service: Mozilla Maintenance Service (MozillaMaintenance) - Mozilla Foundation - C:\Program Files\Mozilla Maintenance Service\maintenanceservice.exe O23 - Service: @%SystemRoot%\system32\qwave.dll,-1 (QWAVE) - Unknown owner - %windir%\system32\svchost.exe (file missing) O23 - Service: @%SystemRoot%\system32\seclogon.dll,-7001 (seclogon) - Unknown owner - %windir%\system32\svchost.exe (file missing) O23 - Service: Skype Updater (SkypeUpdate) - Skype Technologies - C:\Program Files\Skype\Updater\Updater.exe O23 - Service: Adobe SwitchBoard (SwitchBoard) - Adobe Systems Incorporated - C:\Program Files\Common Files\Adobe\SwitchBoard\SwitchBoard.exe O23 - Service: Intel(R) Management & Security Application User Notification Service (UNS) - Intel Corporation - C:\Program Files\Intel\Intel(R) Management Engine Components\UNS\UNS.exe O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - %PROGRAMFILES%\Windows Media Player\wmpnetwk.exe (file missing)
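A crude way to measure those sub-second freezes, independent of any one application, is to time a short sleep in a loop and log when it overruns; only the Python standard library is needed:

    import time

    # log whenever a 100 ms sleep overruns by more than 50 ms -- that is
    # the whole system stalling, not this script doing work
    while True:
        start = time.perf_counter()
        time.sleep(0.1)
        extra = time.perf_counter() - start - 0.1
        if extra > 0.05:
            print(time.strftime("%H:%M:%S"), f"stall of {extra * 1000:.0f} ms")

If the stalls log every 2-3 seconds even with all applications closed, that points at a driver or background service rather than any program in the HijackThis list above.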

    Read the article

  • Resolve Wrong IP from Domain Name only on certain networks

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. There are several strange things about this problem so I will break it down into steps. Since I have a dynamic IP, I use no-ip to make sure I have a working domain name at all times. I use the automatic update client, but logged in and checked and my no-ip domain has the proper IP tied to it. Here is a link to the homepage through the no-ip domain for reference. Also, I do a ping and a traceroute on the no-ip domain and get: [eckertzs@localhost ~]$ ping -c 1 endradil.noip.me PING endradil.noip.me (65.24.215.99) 56(84) bytes of data. 64 bytes from endradil.noip.me (65.24.215.99): icmp_seq=1 ttl=64 time=2.23 ms --- endradil.noip.me ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 104ms rtt min/avg/max/mdev = 2.233/2.233/2.233/0.000 ms [eckertzs@localhost ~]$ traceroute endradil.noip.me traceroute to endradil.noip.me (65.24.215.99), 30 hops max, 60 byte packets 1 . (192.168.2.1) 1.755 ms 5.409 ms 5.380 ms 2 endradil.noip.me (65.24.215.99) 6.297 ms 9.543 ms 10.324 ms Using this domain, I can connect to my webserver without issue or interruption(the https is required to avoid a redirect serverside, but it works). I also have a domain I have bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my no-ip domain. CNAME Record Host: www Points to: endradil.noip.me TTL: 1 hour For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of the past few days, however, the GoDaddy domain has only worked intermittently, for a few minutes at a time and then will go down for hours at a time. I get server not found errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have run online tests of the DNS and have seen that the website is visible by external servers and resolved to the correct IP. I also contacted GoDaddy support but they had no issues connecting to the website, and therefore did not see any issues. My personal computers (Windows desktop, linux laptop, android phone) all fail to connect when on my personal wifi. If I disconnect my phone from the wifi and use my AT&T wireless data, it can connect with both domains without issue. When I attempt to use Google webmaster tools to crawl the site using the GoDaddy domain, Google can not find the site. From my linux laptop, I have found some interesting results when I ping or traceroute the domain. The results from these: [eckertzs@localhost ~]$ ping -c 1 www.endradil.com PING www.endradil.com.Belkin (198.105.244.228) 56(84) bytes of data. --- www.endradil.com.Belkin ping statistics --- 1 packets transmitted, 0 received, 100% packet loss, time 10000ms [eckertzs@localhost ~]$ traceroute www.endradil.com traceroute to www.endradil.com (198.105.244.228), 30 hops max, 60 byte packets 1 . 
(192.168.2.1) 1.918 ms 2.806 ms 2.772 ms 2 cpe-65-24-208-1.insight.res.rr.com (65.24.208.1) 29.247 ms 29.654 ms 30.094 ms 3 cpe-69-23-24-117.new.res.rr.com (69.23.24.117) 15.597 ms 23.218 ms 23.581 ms 4 agg24.clmcohib01r.midwest.rr.com (65.29.1.52) 30.581 ms 30.556 ms 31.192 ms 5 be27.clevohek01r.midwest.rr.com (65.29.1.38) 30.580 ms 31.062 ms 31.038 ms 6 bu-ether25.atlngamq47w-bcr01.tbone.rr.com (107.14.19.38) 37.863 ms 68.844 ms 43.773 ms 7 107.14.17.178 (107.14.17.178) 51.866 ms 51.019 ms 50.989 ms 8 ae0.pr1.dca10.tbone.rr.com (107.14.17.200) 48.467 ms ae-4-0.a0.lax91.tbone.rr.com (66.109.1.113) 49.912 ms * 9 v413.core1.ash1.he.net (209.51.175.33) 60.270 ms 50.842 ms 50.819 ms 10 100ge5-1.core1.nyc4.he.net (184.105.223.166) 55.597 ms 56.045 ms 56.020 ms 11 xerocole-inc.10gigabitethernet12-4.core1.nyc4.he.net (216.66.41.242) 56.001 ms 55.969 ms 55.992 ms 12 * * * both show the incorrect IP. Also, the traceroute times out on hops 12 through 255 (output truncated above). The traceroute using site24x7 works and shows reasonable results when run from their California server. From another Linux box on a different network but in the same city as me (10 miles away), I still get timeouts for traceroute; however, the IP resolves correctly for the domain. From this I believe that the DNS result is incorrectly cached in either my router/modem or perhaps even at my ISP level. My question is, first, how do I find out exactly what is wrong, and second, how do I resolve it.
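One way to pin down where the bogus 198.105.244.228 answer is coming from is to compare what the OS resolver path returns (the path wget and ssh use) with a direct query to a known-good public server. A short sketch, assuming the third-party dnspython package for the direct query:

    import socket
    import dns.resolver  # third-party: pip install dnspython

    name = "www.endradil.com"

    # what the OS resolver chain (hosts file, router, ISP cache) says
    ips = sorted({info[4][0] for info in socket.getaddrinfo(name, 80)})
    print("OS resolver:", ips)

    # what Google's public DNS says, bypassing the local chain
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = ["8.8.8.8"]
    print("8.8.8.8:", [a.to_text() for a in r.resolve(name, "A")])

If the two disagree, something between the laptop and the authoritative servers is serving a stale or hijacked record; the ".Belkin" suffix in the ping output above makes the Belkin router's DNS proxy a prime suspect.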

    Read the article

  • Web browsing is fast, but downloads are slow

    - by Ricket
    I work for a company on my university's campus, helping with general IT problems and some web development. But lately there has been a problem that has me and my boss completely stumped. We, plus one contractor, make up the entire IT department, so I'm reaching out to you for help. All around the office, we have wall jacks. These collect in a closet down the hall and all plug into a switch. This switch, along with our individual server jacks, plugs into another switch, and that switch plugs into our firewall hardware. Then the firewall is connected out to our campus network. Our campus internet is, well, very fast. I don't know exactly the terms, tiers, etc., but we have thousands of students and downloads can run as fast as 10 MB/s at night; uploads are sometimes even faster. I think we're practically ISP level. In short, I have a lot of faith that it is not the campus side of things that is causing a problem, combined with other evidence I'll mention in a moment. So our symptoms: web browsing is fast. Web pages, images, etc. load instantly. No problems there. But then when I go to download something, the download starts fast but very quickly (a matter of seconds) drops to nearly 0. Often it will actually drop to 0 and time out. This happens with even very small files, 1 MB or less. It smells to me like a QoS sort of thing. I'm not entirely sure, and I wanted to get your opinions first. My boss is hesitant to touch our firewall, much less let me touch it, and it was set up and is managed by a consultant remotely. These problems don't seem tied to a time of the day. I've tried downloads after 5:00 and still the same thing happens. From my desk, I can turn on my wireless adapter and pick up the campus wireless access point. If I unplug ethernet and connect to it, downloads are fast. This adds to my suspicion that it's limited to our company network. Also, a number of weeks ago the consultant upgraded our firewall firmware. Suddenly everything was very fast. I tested with downloads from Sun and speedtest.net and things were blazing fast, as they should be with our campus internet! It was wonderful, and I figured the slow speeds were an old firmware bug. In a matter of days, things steadily declined until they were back to the old symptoms. Oh, and we have antivirus installed on every computer, and we keep it up to date. Though I suppose the possibility is still there that someone could have spyware which is bogging down our internet, in which case what is the easiest/best way to find this out? (maybe this should go in a separate question) Thank you for your patience in reading all of this. Do you have any ideas as to what I can try? Is this something that you've experienced before? What sort of tools or methods can I use to try and diagnose the problem? P.S. everything here is Windows. Windows Server 2003 and 2008 on our servers, and Windows XP on employees' machines. Update: We are submitting a ticket to the university to just take a look and see if they see anything unusual and/or can suggestion methods for us to try and pinpoint our problem. Hopefully they'll be helpful! I'll update this to let you know what goes on. Update again: We found a hub (yes, a HUB) right between our campus connection and our firewall. It had only those two ethernet cables plugged into it, nothing else. After removing the hub, our speeds have jumped up to several mbps. However in talking with the campus, we got them to run a gigabit line to our firewall in place of the 100mbps line. 
As of Friday, we are at about 65 Mbps up and down (according to speedtest.net at 8am)!! Go NC State!!
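For future reference, the fast-start-then-stall pattern described above is easy to capture in numbers with a streaming download that prints throughput as it goes; only the Python standard library is needed, and the URL below is a placeholder test file:

    import time
    import urllib.request

    url = "http://example.com/largefile.bin"  # placeholder test file
    start = last = time.time()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(65536)
            if not chunk:
                break
            total += len(chunk)
            now = time.time()
            if now - last >= 1:
                print(f"{total / (now - start) / 1e6:.2f} MB/s average so far")
                last = now

Running this from inside and outside the office network (as with the wireless test above) gives a repeatable before/after measurement when swapping out suspect hardware like that hub.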

    Read the article

  • BugZilla XML RPC Interface

    - by Damo
    I am attempting to set up BugZilla to receive bug reports from another system using the XML-RPC interface. BugZilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser I get the error: The XML-RPC Interface feature is not available in this Bugzilla at C:\BugZilla\xmlrpc.cgi line 27 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29 Following this, I installed the test-taint package from the default Perl repository; this installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error: 502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server. So I ran checksetup.pl, which informs me that: Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. Installing Test-Taint from CPAN is the same. I assume XML-RPC is reliant on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry, the script times out. How can I get the XML-RPC / Test-Taint function working? Current Setup: IIS7.5 on Windows Server 2008 Bugzilla 4.2.2 Perl 5.14.2 C:\BugZilla>perl checksetup.pl Set up gcc environment - 3.4.5 (mingw-vista special r3) * This is Bugzilla 4.2.2 on perl 5.14.2 * Running on Win2008 Build 6002 (Service Pack 2) Checking perl modules... Checking for CGI.pm (v3.51) ok: found v3.59 Checking for Digest-SHA (any) ok: found v5.62 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.76 Checking for DateTime-TimeZone (v0.79) ok: found v1.48 Checking for DBI (v1.614) ok: found v1.622 Checking for Template-Toolkit (v2.22) ok: found v2.24 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.904) ok: found v1.911 Checking for URI (v1.37) ok: found v1.59 Checking for List-MoreUtils (v0.22) ok: found v0.33 Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004 Checking for Win32 (v0.35) ok: found v0.44 Checking for Win32-API (v0.55) ok: found v0.64 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.18.1 Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for DBD-SQLite (v1.29) ok: found v1.33 Checking for DBD-Oracle (v1.19) ok: found v1.30 The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.46 Checking for Chart (v2.1) ok: found v2.4.5 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for MIME-tools (v5.406) ok: found v5.503 Checking for libwww-perl (any) ok: found v6.02 Checking for XML-Twig (any) ok: found v3.41 Checking for PatchReader (v0.9.6) ok: found v0.9.6 Checking for perl-ldap (any) ok: found v0.44 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.20 Checking for SOAP-Lite (v0.712) ok: found v0.715 Checking for JSON-RPC (any) ok: found v0.96 Checking for JSON-XS (v2.0) ok: found v2.32 Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558.
Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.67) ok: found v3.68 Checking for HTML-Scrubber (any) ok: found v0.09 Checking for Encode (v2.21) ok: found v2.44 Checking for Encode-Detect (any) not found Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found Checking for Apache-SizeLimit (v0.96) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. * *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * Encode-Detect * Automatic charset detection for text attachments * * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * * Apache-SizeLimit * mod_perl * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: Encode-Detect: ppm install Encode-Detect TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Apache-SizeLimit: ppm install Apache-SizeLimit Reading ./localconfig... OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/ Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for MySQL (v5.0.15) ok: found v5.5.27 WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file. Removing existing compiled templates... Precompiling templates...done. checksetup.pl complete.
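Independent of the browser, the interface can also be exercised with Python's standard xmlrpc.client; Bugzilla.version is part of Bugzilla's documented WebService API, and the URL below is an assumption about where the install is served:

    import xmlrpc.client

    # hypothetical URL for the local install; adjust the host and path
    proxy = xmlrpc.client.ServerProxy("http://localhost/xmlrpc.cgi")
    print(proxy.Bugzilla.version())  # returns a struct like {'version': '4.2.2'}

A clean version struct back means the XML-RPC stack (and its Test::Taint dependency) is finally loading; the 502 above would instead surface here as an xmlrpc.client.ProtocolError.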

    Read the article

  • setting up Ubuntu 10.10 as paravirtualized guest in Xen on RHEL5 host - what kernel?

    - by kostmo
    I've discovered the tool ubuntu-vm-builder, which I've installed and then invoked on an Ubuntu workstation as: sudo vmbuilder xen ubuntu --suite maverick --flavour virtual --arch amd64 --mem=512 --rootsize 8192 This workstation is not the intended target host of the virtual machine, however; I would like to host the guest on a Red Hat Enterprise Linux 5 machine that is running Xen 3.0.3. The output of this command appears to be a folder named ubuntu-xen containing three files: tmpXXXXXX, a very large file which I assume is the root partition image tmpYYYYYY, a somewhat large file which I assume is the swap partition image xen.conf, a text file I have copied the xen.conf file to the RHEL server's /etc/xen directory under the new name newvm, adjusting the paths of tmpXXXXXX and tmpYYYYYY in the file after also copying them from my local workstation to the RHEL server. When I launch the Virtual Machine Manager virt-manager, I can see the newvm virtual machine listed underneath the Dom0 machine. When I try to start newvm, I get the error: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: None') Indeed, there exists an entry kernel = 'None' in the xen.conf file. How do I find out what the path of the kernel should be? Is this path supposed to be to a kernel stored on the local filesystem of the RHEL5 host, or is it supposed to be a path inside the guest image? I see that the vmbuilder command provides for a --xen-kernel option, along with a --xen-ramdisk option, but I'm not sure what to use for either. I think I should be able to get this to work, since Ubuntu is said to be supported as a Xen guest, even though the Xen 4.0.1 docs state support for only a limited set of distributions, Ubuntu excluded. Update 1 When running vmbuilder on my local workstation, I did observe an output line saying: Calling hook: install_kernel and later, output lines saying: update-initramfs: Generating /boot/initrd.img-2.6.35-23-virtual [...] run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.35-23-virtual /boot/vmlinuz-2.6.35-23-virtual So in the xen.conf file, I tried setting the lines: kernel = '/boot/vmlinuz-2.6.35-23-virtual' ramdisk = '/boot/initrd.img-2.6.35-23-virtual' When trying to start the VM, I got an error similar to last time: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: /boot/vmlinuz-2.6.35-23-virtual') This makes me think that the RHEL5 machine is looking for local files, rather than a file within the binary guest disk image. Update 2 I was able to extract the kernel and initrd images from the guest disk binary by mounting it: mkdir mnt_tmp sudo mount ubuntu-xen/tmpXXXXXX mnt_tmp/ -o loop cp mnt_tmp/boot/vmlinuz-2.6.35-23-virtual virtual_kernel_ubuntu cp mnt_tmp/boot/initrd.img-2.6.35-23-virtual virtual_initrd_ubuntu I copied these two files to the RHEL5 server, and edited the xen.conf file to point to them as kernel and ramdisk. With this done, I could "run" the newvm virtual machine from within virt-manager, but was met with the message Console Not Configured For Guest when I double-clicked the entry to open the Virtual Machine Console.
    As suggested by a forum post, I then added the line vfb = [ 'type=vnc' ] to the configuration file, recreated the virtual machine (a ~10 min process), and this time got the message "Connecting to console for guest". This remained indefinitely; after selecting View - Serial Console, I found a kernel panic:

        [5442621.272173] Kernel panic - not syncing: Attempted to kill the idle task!
        [5442621.272179] Pid: 0, comm: swapper Tainted: G D 2.6.35-23-virtual #41-Ubuntu
        [5442621.272184] Call Trace:
        [5442621.272191] [<ffffffff815a1b81>] panic+0x90/0x111
        [5442621.272199] [<ffffffff810652ee>] do_exit+0x3be/0x3f0
        [5442621.272204] [<ffffffff815a5e20>] oops_end+0xb0/0xf0
        [5442621.272211] [<ffffffff8100ddeb>] die+0x5b/0x90
        [5442621.272216] [<ffffffff815a56c4>] do_trap+0xc4/0x170
        [5442621.272221] [<ffffffff8100ba35>] do_invalid_op+0x95/0xb0
        [5442621.272227] [<ffffffff8130851c>] ? intel_idle+0xac/0x180
        [5442621.272232] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1
        [5442621.272239] [<ffffffff815a48fe>] ? _raw_spin_unlock_irqrestore+0x1e/0x30
        [5442621.272247] [<ffffffff8108dfb7>] ? tick_broadcast_oneshot_control+0xc7/0x120
        [5442621.272253] [<ffffffff8100ad5b>] invalid_op+0x1b/0x20
        [5442621.272259] [<ffffffff8130851c>] ? intel_idle+0xac/0x180
        [5442621.272264] [<ffffffff813084e0>] ? intel_idle+0x70/0x180
        [5442621.272269] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1
        [5442621.272275] [<ffffffff8148a147>] cpuidle_idle_call+0xa7/0x140
        [5442621.272281] [<ffffffff81008d93>] cpu_idle+0xb3/0x110
        [5442621.272286] [<ffffffff815873aa>] rest_init+0x8a/0x90
        [5442621.272291] [<ffffffff81b04c9d>] start_kernel+0x387/0x390
        [5442621.272297] [<ffffffff81b04341>] x86_64_start_reservations+0x12c/0x130
        [5442621.272303] [<ffffffff81b08002>] xen_start_kernel+0x55d/0x561

    Update 3: I tried an i386 architecture instead of amd64, but got the same kernel panic. Also, it seems the Virtual Machine Manager pays attention to the format of the kernel's filename; for the same kernel binary, simply naming it vmlinuz-virtual threw an error box about an invalid kernel, while naming it vmlinuz-2.6.35-23-virtual did not throw the error, though it still resulted in the kernel panic shortly thereafter.
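    For readers hitting the same "Kernel image does not exist" error, the working configuration from Update 2 amounts to something like the sketch below. The directory paths are hypothetical (they are not from the original post); the key point is that kernel and ramdisk in /etc/xen/newvm must name files on the Dom0 (RHEL5) filesystem, not paths inside the guest image:

        # on the RHEL5 Dom0 -- all paths here are illustrative placeholders
        sudo mkdir -p /var/lib/xen/images/newvm
        sudo cp virtual_kernel_ubuntu /var/lib/xen/images/newvm/vmlinuz-2.6.35-23-virtual
        sudo cp virtual_initrd_ubuntu /var/lib/xen/images/newvm/initrd.img-2.6.35-23-virtual

        # then point the domain config at the extracted files:
        #   kernel  = '/var/lib/xen/images/newvm/vmlinuz-2.6.35-23-virtual'
        #   ramdisk = '/var/lib/xen/images/newvm/initrd.img-2.6.35-23-virtual'
        #   vfb     = [ 'type=vnc' ]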

    Read the article

  • DNS problems on CentOS fresh install

    - by Rick Koshi
    I'm having some DNS issues on a new box I'm installing with CentOS 6.2. I am able to look up names using nslookup, dig, or host, and I am able to ping machines by name or by IP address. However, other tools, such as ssh, wget, or yum, are unable to resolve names. For example:

        # wget http://www.google.com
        --2012-03-08 14:48:06-- http://www.google.com/
        Resolving www.google.com... failed: Name or service not known.
        wget: unable to resolve host address `www.google.com'

        # ssh www.google.com
        ssh: Could not resolve hostname www.google.com: Name or service not known

        # ping -c 1 www.google.com
        PING www.l.google.com (74.125.113.106) 56(84) bytes of data.
        64 bytes from vw-in-f106.1e100.net (74.125.113.106): icmp_seq=1 ttl=46 time=43.6 ms
        --- www.l.google.com ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 59ms
        rtt min/avg/max/mdev = 43.665/43.665/43.665/0.000 ms

        # host www.google.com
        www.google.com is an alias for www.l.google.com.
        www.l.google.com has address 74.125.113.99
        www.l.google.com has address 74.125.113.103
        www.l.google.com has address 74.125.113.104
        www.l.google.com has address 74.125.113.105
        www.l.google.com has address 74.125.113.106
        www.l.google.com has address 74.125.113.147

    My /etc/nsswitch.conf file is the default, including this (standard) line:

        hosts: files dns

    /etc/resolv.conf is as set up by DHCP:

        ; generated by /sbin/dhclient-script
        nameserver 192.168.1.254

    192.168.1.254 is a working DNS server (my DSL modem, which has served other machines for years). Does anyone know why ping would work while ssh and wget fail?

    Per NcA's suggestion, I tried changing /etc/resolv.conf to point to 8.8.8.8. Oddly enough, this does make everything work. Evidently my DSL modem responds to DNS requests in some way that parts of Linux's resolution system don't like, although looking at the tcpdump I am unable to see what the difference is; both servers return the same addresses. Here's the output of tcpdump -nn -X with the resolver set to the DNS server on the DSL modem. It is clearly replying with the correct addresses, but ssh/wget don't seem happy with it for some reason:

        15:53:52.133580 IP 192.168.1.254.53 > 192.168.1.2.54836: 33157 7/0/0 CNAME www.l.google.com., A 74.125.115.105, A 74.125.115.106, A 74.125.115.147, A 74.125.115.99, A 74.125.115.103, A 74.125.115.104 (148)
        0x0000:  4500 00b0 e33a 0000 ff11 53b1 c0a8 01fe  E....:....S.....
        0x0010:  c0a8 0102 0035 d634 009c 7528 8185 8180  .....5.4..u(....
        0x0020:  0001 0007 0000 0000 0377 7777 0667 6f6f  .........www.goo
        0x0030:  676c 6503 636f 6d00 0001 0001 c00c 0005  gle.com.........
        0x0040:  0001 0007 acd0 0008 0377 7777 016c c010  .........www.l..
        0x0050:  c02c 0001 0001 0000 0001 0004 4a7d 7369  .,..........J}si
        0x0060:  c02c 0001 0001 0000 0001 0004 4a7d 736a  .,..........J}sj
        0x0070:  c02c 0001 0001 0000 0001 0004 4a7d 7393  .,..........J}s.
        0x0080:  c02c 0001 0001 0000 0001 0004 4a7d 7363  .,..........J}sc
        0x0090:  c02c 0001 0001 0000 0001 0004 4a7d 7367  .,..........J}sg
        0x00a0:  c02c 0001 0001 0000 0001 0004 4a7d 7368  .,..........J}sh

        15:53:52.135669 IP 192.168.1.254.53 > 192.168.1.2.54836: 65062- 0/0/0 (32)
        0x0000:  4500 003c e33b 0000 ff11 5424 c0a8 01fe  E..<.;....T$....
        0x0010:  c0a8 0102 0035 d634 0028 98f9 fe26 8000  .....5.4.(...&..
        0x0020:  0001 0000 0000 0000 0377 7777 0667 6f6f  .........www.goo
        0x0030:  676c 6503 636f 6d00 001c 0001            gle.com.....

    I'm not enough of an expert to know whether this reply is malformed in some way, but ping seems to do the right thing with it.
    For comparison, here's the same thing when querying 8.8.8.8:

        15:57:27.990270 IP 8.8.8.8.53 > 192.168.1.2.49028: 59114 7/0/0 CNAME www.l.google.com., A 74.125.113.105, A 74.125.113.103, A 74.125.113.106, A 74.125.113.147, A 74.125.113.104, A 74.125.113.99 (148)
        0x0000:  4500 00b0 5530 0000 2f11 6453 0808 0808  E...U0../.dS....
        0x0010:  c0a8 0102 0035 bf84 009c 39f8 e6ea 8180  .....5....9.....
        0x0020:  0001 0007 0000 0000 0377 7777 0667 6f6f  .........www.goo
        0x0030:  676c 6503 636f 6d00 0001 0001 c00c 0005  gle.com.........
        0x0040:  0001 0001 516a 0008 0377 7777 016c c010  ....Qj...www.l..
        0x0050:  c02c 0001 0001 0000 0116 0004 4a7d 7169  .,..........J}qi
        0x0060:  c02c 0001 0001 0000 0116 0004 4a7d 7167  .,..........J}qg
        0x0070:  c02c 0001 0001 0000 0116 0004 4a7d 716a  .,..........J}qj
        0x0080:  c02c 0001 0001 0000 0116 0004 4a7d 7193  .,..........J}q.
        0x0090:  c02c 0001 0001 0000 0116 0004 4a7d 7168  .,..........J}qh
        0x00a0:  c02c 0001 0001 0000 0116 0004 4a7d 7163  .,..........J}qc

        15:57:28.018909 IP 8.8.8.8.53 > 192.168.1.2.49028: 31984 1/1/0 CNAME www.l.google.com. (102)
        0x0000:  4500 0082 7b1b 0000 2f11 3e96 0808 0808  E...{.../.>.....
        0x0010:  c0a8 0102 0035 bf84 006e c67e 7cf0 8180  .....5...n.~|...
        0x0020:  0001 0001 0001 0000 0377 7777 0667 6f6f  .........www.goo
        0x0030:  676c 6503 636f 6d00 001c 0001 c00c 0005  gle.com.........
        0x0040:  0001 0001 517f 0008 0377 7777 016c c010  ....Q....www.l..
        0x0050:  c030 0006 0001 0000 0258 0026 036e 7334  .0.......X.&.ns4
        0x0060:  c010 0964 6e73 2d61 646d 696e c010 0016  ...dns-admin....
        0x0070:  91f3 0000 0384 0000 0384 0000 0708 0000  ................
        0x0080:  003c                                     .<

    I still don't know why the server's reply is adequate for ping but not for ssh/wget. If anyone has ideas, I'd be happy to hear them. For now, though, I can either refer to an outside DNS server or set up my own server on the new box. It's a workaround that seems like it should be unnecessary, but it will allow me to proceed.
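    One detail visible in the captures, offered only as a guess: the second reply in each pair carries qtype 0x001c in the hex dump, i.e. it answers an AAAA (IPv6) query. The glibc resolver path used by ssh and wget asks for both A and AAAA records, while ping asks only for A, so a modem that mishandles AAAA responses would produce exactly this split. Two diagnostic commands (hostnames and server addresses taken from the post above) that could confirm or rule this out:

        # getent exercises the same glibc/nsswitch path that ssh and wget use
        getent hosts www.google.com

        # compare how the modem and Google handle an explicit AAAA query
        dig AAAA www.google.com @192.168.1.254
        dig AAAA www.google.com @8.8.8.8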

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

        kaefert@blechmobil:~$ lsusb -s 2:3
        Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there is a problem that prevents the disk from being mounted:

        kaefert@blechmobil:~$ dmesg | grep sdb
        [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off
        [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
        [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.501649] sdb: sdb1
        [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
        [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
        [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!

    So I fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

        e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of the e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925).

    I started this whole thing on Sunday evening (2012-11-04 22:00), so about 48 hours ago. This is what htop says about it now (2012-11-06 19:00):

        PID  USER PRI NI VIRT  RES   SHR S CPU% MEM% TIME+    Command
        3704 root 39  19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    I found a few posts on the internet that discuss e2fsck running slowly, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613, where they write that it's a good idea to check whether the disk itself is slow, because it may be damaged. I think these outputs tell me that this is not the case here:

        kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
        /dev/sdb:
        Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec
        Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec

        kaefert@blechmobil:~$ sudo hdparm /dev/sdb
        /dev/sdb:
        multcount = 0 (off)
        readonly = 0 (off)
        readahead = 256 (on)
        geometry = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, e2fsck doesn't seem to be using that disk speed, judging from tools like gkrellm or iotop, or from this:

        kaefert@blechmobil:~$ iostat -x
        Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU)
        avg-cpu: %user %nice %system %iowait %steal %idle
                 14,24 47,81 14,63   0,95   0,00  22,37
        Device: rrqm/s wrqm/s r/s  w/s  rkB/s  wkB/s  avgrq-sz avgqu-sz await r_await w_await svctm %util
        sda     0,59   8,29   2,42 5,14 43,17  160,17 53,75    0,30     39,80 8,72    54,42   3,95  2,99
        sdb     137,54 5,48   9,23 0,20 587,07 22,73  129,35   0,07     7,70  7,51    16,18   2,17  2,04

    I researched a little on how to find out what e2fsck is doing with all that processor time, and found the tool strace, which gives me this:

        kaefert@blechmobil:~$ sudo strace -p3704
        lseek(4, 41026998272, SEEK_SET) = 41026998272
        write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
        lseek(4, 48404766720, SEEK_SET) = 48404766720
        read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
        lseek(4, 41027002368, SEEK_SET) = 41027002368
        write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
        lseek(4, 48404770816, SEEK_SET) = 48404770816
        read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
        lseek(4, 41027006464, SEEK_SET) = 41027006464
        write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
        lseek(4, 48404774912, SEEK_SET) = 48404774912
        read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
        ^CProcess 3704 detached

    There are around 16 of these lines every second (8 seeks, 4 reads, and 4 writes), which I don't consider to be a lot.

    And finally, my question: will this process ever finish? If those numbers passed to lseek (e.g. 48404774912) represent byte offsets, that would be something like 45 gigabytes into this 3-terabyte disk. At roughly 45 GB in 48 hours, that is about 0.94 GB per hour, so scanning the whole 3 TB once at a constant rate would take roughly 134 days. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06 23:00); now it looks like this:

        lseek(4, 1419860611072, SEEK_SET) = 1419860611072
        read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
        lseek(4, 43018145792, SEEK_SET) = 43018145792
        write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
        lseek(4, 1419860615168, SEEK_SET) = 1419860615168
        read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
        lseek(4, 43018149888, SEEK_SET) = 43018149888
        write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
        lseek(4, 1419860619264, SEEK_SET) = 1419860619264
        read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
        lseek(4, 43018153984, SEEK_SET) = 43018153984
        write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the offsets in the lseeks before the reads, like 1419860611072, are already a lot bigger (about 1.29 terabytes if the numbers are bytes), so progress doesn't seem to be linear on a large scale; maybe there are only some areas that need work, with big gaps in between them. (Times are in CET.)
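    For anyone watching a long e2fsck run like this, a lightweight way to track progress, as a sketch (the PID and file-descriptor number are taken from the strace output above and would differ on another system), is to read the kernel's record of the descriptor's current offset instead of attaching strace repeatedly:

        # fd 4 is the descriptor e2fsck is seeking on in the strace above;
        # the "pos:" line is the current byte offset into /dev/sdb1
        sudo watch -n 60 "grep ^pos: /proc/3704/fdinfo/4"
        # comparing two samples a minute apart gives a rough bytes-per-minute
        # rate to extrapolate a finish time from

    For future runs, starting the check by hand with e2fsck's -C option (e.g. e2fsck -C 0 -f /dev/sdb1) prints a completion bar that gparted's invocation hides.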

    Read the article

  • Mail Server on Google Apps, Hotmail Blocks

    - by detay
    My company provides private research tools (sites) for companies. Each site has its own domain name and users. These sites need to be able to send outbound mail for notifications such as password reminders. We started using Google Apps for this purpose, but lately we are having tremendous problems with mailing, especially to Hotmail. I added the SPF record as required. Mxtoolbox says my domains are not on any blacklists. I was told to contact https://postmaster.live.com/snds/addnetwork.aspx and add my network. Given that I am using Google's mail servers for sending mail, should I register their IP addresses for my domain? Or should I add my own server's IP, even though I am not hosting the mail server on it? Here's an example of a returned message:

        > Delivered-To: [email protected]
        > Received: by 10.112.1.195 with SMTP id 3csp62757lbo;
        >         Mon, 10 Dec 2012 10:42:55 -0800 (PST)
        > DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        >         d=gmail.com; s=20120113;
        >         h=mime-version:from:to:x-failed-recipients:subject:message-id:date
        >          :content-type;
        >         bh=x41JiCH75hsonh+5c/kzwIs7R8u4Hum2u396lkV3g2w=;
        >         b=v+bBPvufqfVkNz2UrnhyyoGj1Cuwf6x/yMj0IXJgH27uJU7TqBOvbwnNHf33sQ7B9T
        >          uGzxwryC2fmxBh72cGz1spwWK348nUp6KK73MXKnF8mpc3nPQ8Ke+EQpSPeJdq/7oZvd
        >          scZTGpy//IiGEUDU5bJ7YPqYXQRycY5N6AF5iI4mWWwbS4opybp3IKpDDktu11p/YEEO
        >          Fj9wnSCx3nLMXB/XZgSjjmnaluGhYNdh3JFtz83Vmr50qXTG/TuIXlkirP73GfGvIt2S
        >          7sy8/YdEbZR92mYe9wucditYr7MOuyjyYrZYCg+weXeTZiJX1PfBVieD+kD9MnCairnP
        >          rQeQ==
        > Received: by 10.14.215.197 with SMTP id e45mr52766237eep.0.1355164974784;
        >         Mon, 10 Dec 2012 10:42:54 -0800 (PST)
        > MIME-Version: 1.0
        > Return-Path: <>
        > Received: by 10.14.215.197 with SMTP id e45mr65950764eep.0;
        >         Mon, 10 Dec 2012 10:42:54 -0800 (PST)
        > From: Mail Delivery Subsystem <[email protected]>
        > To: [email protected]
        > X-Failed-Recipients: ********@hotmail.fr
        > Subject: Delivery Status Notification (Failure)
        > Message-ID: <[email protected]>
        > Date: Mon, 10 Dec 2012 18:42:54 +0000
        > Content-Type: text/plain; charset=ISO-8859-1
        >
        > Delivery to the following recipient failed permanently:
        >
        >     *********@hotmail.fr
        >
        > Technical details of permanent failure: Message rejected. See
        > http://support.google.com/mail/bin/answer.py?answer=69585 for more
        > information.
        >
        > ----- Original message -----
        >
        > Received: by 10.14.215.197 with SMTP id e45mr52766228eep.0.1355164974700;
        >         Mon, 10 Dec 2012 10:42:54 -0800 (PST)
        > Return-Path: <[email protected]>
        > Received: from WIN-1BBHFMJGUGS ([91.93.102.12])
        >         by mx.google.com with ESMTPS id l3sm26787704eem.14.2012.12.10.10.42.53
        >         (version=TLSv1/SSLv3 cipher=OTHER);
        >         Mon, 10 Dec 2012 10:42:54 -0800 (PST)
        > Message-ID: <[email protected]>
        > MIME-Version: 1.0
        > From: [email protected]
        > To: *********@hotmail.fr
        > Date: Mon, 10 Dec 2012 10:42:54 -0800 (PST)
        > Subject: Rejoignez-nous sur achbanlek.com
        > Content-Type: text/html; charset=utf-8
        > Content-Transfer-Encoding: base64
        >
        > ----- End of message -----
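    As a sanity check on the "I added the SPF record as required" step (a hedged suggestion, since the post doesn't show the actual record): mail relayed through Google Apps should be covered by Google's documented SPF include, which can be verified per domain:

        # the domain is a placeholder; repeat for each site's domain
        dig +short TXT example.com
        # for mail sent via Google Apps, the record should include Google's ranges:
        # "v=spf1 include:_spf.google.com ~all"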

    Read the article

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped, and I hope someone else will recognize the symptoms of this problem.

    Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID.

    Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client). 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi is patched up, and the latest VMware Tools are installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client.

    Before going further, I wanted to check that I configured things more or less reasonably. I ran the following as root in a shell on the VM:

        for i in 1 2 3 4 5 6 7 8 9 10; do
          dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
        done

    I.e., just repeat the dd command 10 times, which prints the transfer rate each time. The results are disturbing. It starts off well:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s

    ... but after 7-8 of these, it then prints:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s

    If I wait a significant amount of time, say 30-45 minutes, and run it again, it goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+) it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface shows periods of high disk latency, hitting 1.2-1.5 seconds, during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)

    What could be causing this? I'm comfortable that it is not the disk failing, because I had also configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out of /etc/fstab, rebooting, and running the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction.

    (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question, unless it's actually a hardware problem.)

    ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)

    ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all.
    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is:

        dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct

    Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this; when things go "bad", iostat -d -m -x 1 shows this: [the iostat captures were attached as images in the original post and are not preserved in this excerpt]
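    For problems like this, a hypervisor-side view can help localize where the latency accrues while the dd loop runs. A sketch (the column names are standard esxtop fields; the interpretation is a rough rule of thumb, not a diagnosis of this specific box):

        # on the ESXi host, over SSH, while the guest runs the dd loop:
        esxtop        # press 'd' for the disk-adapter view, 'u' for disk devices
        # DAVG/cmd = latency at the device/controller, KAVG/cmd = time spent
        # in the VMkernel; high DAVG points at the drive or SATA controller,
        # high KAVG points at the hypervisor layer instead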

    Read the article

  • Best Method to SFTP or FTPS Files via SSIS

    - by Registered User
    What is the best method using SSIS (SQL Server Integration Services) to upload a file to either a remote SFTP (secure FTP over the SSH-2 protocol) or FTPS (FTP over SSL) site? I've used the following methods, but each has shortcomings I would like to avoid:

    COZYROC LIBRARY
    Method: Install the CozyRoc library on each development and production server and use its SFTP task to upload the files.
    Pros:
    - Easy to use. It looks, smells, and feels like a normal SSIS task.
    - SSIS recognizes the password as sensitive information and gives you all the normal options for protecting it, instead of storing it as clear text in a non-secure manner.
    - Works well with other SSIS tasks such as ForEach Loop Containers.
    - Errors out when uploads and downloads fail.
    - Works well when you don't know the names of the files on the remote FTP site to download, or when you won't know the name of the file to upload until run-time.
    Cons:
    - Costs money to license in a production environment.
    - Makes you dependent upon the vendor to update their libraries for each version. Although they already have a 2008 version, this caused me a problem during the CTPs of 2008.
    - Requires installing the libraries on each development and production machine.

    COMMAND-LINE SFTP PROGRAM
    Method: Install a free command-line SFTP application such as PuTTY and execute it, either by running a batch file or via an operating system process task. (A sketch of capturing its exit code follows the list below.)
    Pros:
    - Free, free, and free.
    - You can be reasonably sure it is secure if you are using PuTTY, since numerous GUI FTP clients appear to use PuTTY under the covers.
    - You DEFINITELY know you are using SSH-2 and not SSH-1.
    Cons:
    - The two command-line utilities I tried (PuTTY and Cygwin) required storing the SFTP password in a non-secure location.
    - I haven't found a good way to capture failures or errors when uploading files.
    - The process doesn't look and smell like SSIS; most of the code is encapsulated in text files instead of SSIS itself.
    - Difficult to use if you don't know the exact name of the file you are uploading or downloading.

    A 3RD-PARTY C# OR VB.NET LIBRARY
    Method: Install an SFTP or FTPS library and use a Script Task that references the library to upload the files. (I've never tried this, so I'm going to guess at the pros and cons.)
    Pros:
    - Probably easy to capture errors.
    - Should work well with variables, so it would probably be easy to use even when you don't know the exact name of the file you are uploading or downloading.
    Cons:
    - It's a Script Task combined with .NET libraries. If you are using SSIS, you are probably more comfortable with SSIS tasks than with .NET code. Script Tasks are also difficult to troubleshoot, since they don't have the same debugging tools and features as regular .NET projects.
    - Creates a dependency on 3rd-party code that may not work between different versions of SQL Server. To be fair, it is probably MORE likely to work across SQL Server versions than a 3rd-party SSIS task library.
    - Another huge con: I haven't found a free C# or VB.NET library that does this as of yet. So if anyone knows of one, please let me know!
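    On the error-capture con for the command-line route, a minimal sketch (the host, user, file names, and the SFTP_PW environment variable are all hypothetical): PuTTY's psftp returns a non-zero exit code on failure, which an SSIS Execute Process Task can surface through its return-code settings:

        rem upload.scr -- psftp batch script, one command per line:
        rem   put D:\staging\export.csv /inbound/export.csv
        rem   quit

        rem invoked from an Execute Process Task; -batch aborts instead of prompting
        psftp deploy@sftp.example.com -b upload.scr -batch -pw "%SFTP_PW%"
        rem a non-zero exit code fails the task when the task's
        rem FailTaskIfReturnCodeIsNotSuccessValue property is left at True

    This still leaves the password outside SSIS's sensitive-data protection (the con noted above), but it does make upload failures fail the package instead of passing silently.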

    Read the article

< Previous Page | 481 482 483 484 485 486 487 488 489 490 491 492  | Next Page >