Search Results

Search found 2590 results on 104 pages for 'robert morning'.


  • Computer makes hissing noise, turns off after a few seconds

    - by Kaustubh P
    I have a problem similar to the questions posted here and here. This is my config: Asus M3N78-EM, with AMD Phenom X3 720 2800 Black Edition, 4GB Transcend DDR2 RAM, Nvidia 9400GT. The HD is a 160 GB IDE, and there is an LG IDE DVD-ROM. The power button is a bit off: I have removed the cover of the switch, and the only way it turns on is by giving the "stick" under the cover a gentle press. It turns on sometimes, and at other times I have to cut off the power from the PSU and try again. I will describe my problem in as much detail as possible, please bear with me:

    The problem started in the last week, a few months after I changed to the power-switch arrangement described above. The PC makes a hissing noise, and I wasn't able to pinpoint the source because of the various other fans. At first, removing the HD, rebooting without the HD, turning it off, then reconnecting and booting made the problem go away. But of late, that no longer works. As suggested in the other questions, I tried reducing the load by disconnecting both IDE drives, and the problem (noise + turn-off) still occurs. I also connected another 80G IDE HD this morning, and it still made that noise and turned off. I also opened up the PSU, but I couldn't see any fault in it. I tried rotating the fan by blowing into the blades and with my fingers, but the hissing noise didn't come from there. Or maybe the speed wasn't enough to evoke that noise.

    A few weeks ago I had cleaned the cabinet and re-pasted the processor and its fan with some thermal paste. Could that be at fault? I also used a vacuum to blow the dust out of the PSU; could the airflow have been too strong, maybe enough to offset the fan or something? A label on the PSU says it uses a ball-bearing fan. That only leaves me with the processor fan and the processor itself. I didn't try removing the processor fan and processor from the motherboard and then turning the PC on, fearing damage. Will doing so cause any damage? What can I do to localize and pinpoint the problem?

    Also, after a few tries the computer starts up. Sometimes it turns off within 2 seconds, sometimes after the POST. Once it turned off at GRUB. Another time it booted completely and then turned off. The only way to ensure that the PC won't turn off is if the hissing noise stops.

    EDIT: I suspect the processor/processor fan, owing to the source of the noise. All the config, except for the cabinet, is just over a year old.

    EDIT2: I also just remembered that I had set "On-power resume" to turn on, i.e. if I supply the PC with power, it turns itself on without me needing to press the switch. I had done that to work around the faulty power switch, as noted above.

    EDIT3: I calculated the power my system needs on the Antec site, and arrived at 292W.


  • Finally, upgrade from Nokia X3 to Samsung Galaxy S III

    This time, something slightly different but nonetheless not less interesting, hopefully. Living on a remote island like Mauritius, the ill-praised 'Cyber Island' in the Indian Ocean, has its advantages in lifestyle and a relaxed environment to live in, but in technological terms it can be quite a nightmare. Well, I guess this might be a different story to report about... one day.

    Cyber Island Mauritius

    Despite its shiny advertisement as Cyber Island and business ICT hub to Africa, Mauritius is not on the latest track of available models in computer hardware or, in the context of this article, cell phones or smart-phones, or communication technology in general. Okay, I have to admit that this statement is only partly true. Money can buy, even here in Mauritius. Luckily, there are ways and means to deal with this outcry of modern, read: technological, civilisation issues. Online shopping, you might think? Yes, for sure, until you discover in your checkout procedure that a small island in the Indian Ocean isn't a preferred destination for delivery, and the precious time you spent on putting your items into your cart and feeding your personal level of anticipation gets ruined on the last stint.

    Ordering from abroad saves you money

    Anyway, I got in touch with my personal courier and luckily there were some extra kilos left in the luggage. First obstacle sorted, we have a Transporter! Okay, on the next occasion off to Amazon online, using their Prime service for fast delivery. Actually, the order was placed on Saturday evening and everything got delivered on Tuesday morning - nice job in less than 72 hours. Among the items of that shopping rush I ordered a shiny Samsung Galaxy S III 16GB in oceanic blue - did I mention that you hardly get a blue model in Mauritius? - for my BWE. Interesting side-notes: First, Amazon Germany dropped the price on the S3 by roughly 30%, and we got the 16GB model for less than 500 Euro (or approx. Rs. 19.500,-) compared to the usual Rs. 27.000,- on the local market. It even varies whether the local price is inclusive or exclusive of VAT (15%). Second, for a while she had been bothering me to get an iPhone and an iPad for her; fair enough, I thought - decent hardware, posh design and reliable services. Until we watched the 'magical' introduction of Samsung's new models at the IFA exhibition, she read the bashing comments on Google+ about the iPhone 5, and I gave her a brief summary of the lawsuit between Apple and Samsung in the USA. So, yes, Samsung USA is right: the next big thing is already here - literally. My BWE loves the look and touch of the Galaxy S3. And for me it was more cost-effective in terms of purchases done at the App Store, oops, Play Store.

    Transfer of contacts, text messages and media files

    Okay, now that the hardware is in place, how to transfer all those contacts, text messages, media files, etc. between the two devices? In the past I used the Nokia Communication Suite between various models, but now for Android? Well, as usual Google and Bing are reliable friends, and among the first hits I came across an article about How to Transfer Contacts from Nokia to Android. Couldn't be easier, right? Well, sort of... my main Windows systems are already running on Windows 8, and this actually caused problems with the mobile/smart-phone device drivers. The article provides the download for an older version 1.10 which upgrades to 2.11 (as of the time of writing this entry), but neither could get the Galaxy S3 and the Nokia connected. Shame on me... the product page clearly doesn't mention Windows 8 (for now), and Windows 8 isn't available to the general audience at all yet... After I took a spare machine running on Windows Vista, everything went smoothly. Software installed, upgrade done, device drivers for Android automatically downloaded and installed, and the same painless routine for the Nokia part. I think I rebooted the system twice during the whole setup procedure, but hey, it was more or less a distraction while coding some stuff in ASP.NET MVC and Telerik Kendo UI. The transfer of contacts and text messages was done via Wondershare MobileGo for Android, and all media files by moving the additional microSD card from one device to the other. But even without an external SD card, it would have been very easy to copy the files via Windows Explorer directly.

    Little catch and excellent service

    Fine, we are almost done and the only step left is to shift the SIM card... Ouch, gotcha! The X3 uses a standard-size SIM card while the S III only accepts the microSIM form factor. What an irony: the bigger smartphone needs the smaller SIM card. Luckily, the next showroom of Emtel is just 5 mins away up the road, and the service staff over there know their job. Finally, after roughly 10 mins of paperwork, activation and small chit-chat, the S3 came to life on the mobile network. Owning a smart-phone now and knowing that my BWE would like to interact more on social networks away from home, especially to upload pictures and provide local 'check-ins', I activated a data package for her in advance, too. Even though it was Saturday, everything was already done and ready to be used. Nice bonus: the Emtel clerk directly offered to set up the configuration for the Emtel data services - yes, sure, go ahead, that saves me searching for it in the settings. Okay, spoiler alert here: setting a static APN to access the Emtel network and the internet wouldn't have been a challenge. But hey, she already had the phone in her hands and I could keep my eyes on the children. Well done, Emtel!

    Résumé

    Thanks to the useful software package by Wondershare, it was a hands-free experience to transfer all the data from a Nokia mobile on Symbian S60 to a Samsung Galaxy S III on Android Ice Cream Sandwich (ICS). In the future this won't be a serious issue at all anymore, thanks to synchronisation services and cloud storage. And for now, I'm only waiting for the official upgrade to Jelly Bean.


  • H1 Visa interview tips – What you must know before attending the interview?

    - by Gopinath
    USA's H1 visa allows highly qualified professionals from other countries to work in America. Many IT professionals in India aspire to go to the USA on an H1 and work for their clients. Recently I had a chance to study the H1 visa process to help one of my friends, and I would like to share what I learned. With the assumption that your H1 petition is approved and you have an interview scheduled at the US Embassy for your visa stamping, here are the tips you must know before attending the interview.

    Dress Code - Formals

    Say no to casuals or any fancy dress when you attend the interview. It's not a party or a friend's home you are visiting. Consider the H1 visa interview as your job interview and dress up in formals. There is no option B for you: you must be in formals. A plain formal shirt with a matching pant is suggested for men. A tie and suit would not be required, but if you are a professional at management level you can consider wearing a suit. Women can wear either a formal salwar or a formal pant-shirt. Avoid heavy jewellery; wear what is a must as per your tradition or culture.

    Body Language - Smile on your face

    Your body language reflects what you are and what's going on in your mind. Don't be nervous or restless; be relaxed and wear a beautiful smile on your face. A smile is a curve that sets everything straight. When you are called for the interview, greet the interviewer with a beautiful smile. Say Good Morning/Afternoon/Evening depending on the time you are visiting them. Whenever appropriate, say Thank You. Generally American professionals are very friendly people and they reciprocate your greetings. Make sure that you make them comfortable to start the interview.

    Carry original documents in a separate folder

    I don't want to talk much about the documents that are required for your H1B interview, as it's a big subject on its own and requires a separate post. I assume that your consultant or employer helped you in gathering all the required documents like the petition, DS-160 forms, education & job related documents, resume, interview call letters, client letters, etc. For all the documents you are going to submit at the interview, make sure that you have the originals in a separate folder. If required, the interviewer may ask you to show the originals of any of the documents you submitted for visa processing. Don't mix the original documents with the documents you need to submit for the interview; have a separate folder for them. Those who are going for stamping along with their spouse and children need to carry a few extra original documents like the marriage certificate, marriage photos (30 numbers)/album, birth certificates, passports, and education and profession related certificates of the spouse and children.

    Know your role & responsibilities

    The interviewer will ask you questions on your roles and responsibilities at the client location. Be clear about your day-to-day tasks at the client place and be prepared to face detailed questions on the same. When asked, explain clearly, and also make sure what you say is in line with what is mentioned in your petition and client invitation letter. At times they may ask you questions specific to the project/technology you are going to work on. So doing some homework in this area will help you answer the questions easily. Failing to answer basic questions on your role & responsibilities may result in rejection.

    You work for your Employer at the Client location but NOT FOR THE CLIENT

    One of the important things to keep in mind is that you work for your employer and you are being deputed to the client location on a work visa. Your employer is going to be solely responsible for your salary, work, promotion, pay hikes or whatsoever during your stay in the USA. Your client will not be responsible for anything. Let's say you are employed with Company X in India and they are applying for an H1B for you to work at your client (e.g. Microsoft) in the USA; you must keep in mind that Microsoft is not your employer. Microsoft will not pay your salary or be responsible for any employment related activities. Company X will be solely responsible for all your employment related activities. If you don't get this right and say to the visa interviewer that your client is responsible, then you may get into trouble.

    Know your client

    It's always good to know the clients with whom you are going to work in the USA and their business. If your client is a well-known organisation then you may not get many questions from the interviewer; otherwise you need to be well prepared to provide details like the nature of the business, location, size of the organisation, etc. Get to know the basic details about your client and be confident while providing those details to the interviewer. Also make sure that you never talk about any confidential details of your client's projects and business. Revealing confidential details of your client may land your job itself in the soup.

    Make sure that your spouse is also in sync with you

    If you've applied for an H4 visa for your spouse along with your H1, make sure that your spouse is in sync with you. Your spouse should also know the basic details of your job, your employer, your client and the location where you will be travelling. Your spouse should also be prepared to answer questions related to marriage, their profession (if working), kids, education, etc. Interviewers will try to assess your spouse's communication skills and whereabouts while staying in the USA, and whether they would prefer to work in the USA or not. The H4 is a dependent visa; your spouse is not allowed to work in the USA, and at no point should your spouse show any intention of searching for work in the USA.

    Less luggage, more comfort

    You would definitely have heard that there are a lot of restrictions on what you can carry along with you to a US Embassy while attending the interview. To be frank, it's not enough to say there are many restrictions - there are a hell of a lot of restrictions. There are unbelievable restrictions, and it's for the safety of everyone. You are not allowed to carry mobile phones, CDs/DVDs, USBs, bank cards, cameras, cosmetics, food (except baby food), water, wallets, backpacks, sealed covers, etc. Trust me, most of the things we regularly carry with us every day are not allowed inside. As there are hundreds of restrictions, it is easier to understand what you can carry along with you and carry just that. Ask your employer/consultant to provide you a checklist of items that you can carry. Most of what you would require is:

    - H1B related documents provided by the employer/consultant
    - Photographs
    - All original documents supporting your H1B
    - Passports
    - Some cash for your travel expenses (avoid coins)
    - Any important phone numbers/details written on paper (like your cab driver's number, etc.)

    If you carry restricted items then you will be stopped at the security checks, and you will have to find someone who can safely keep all the restricted items for you. Due to the heavy restrictions in and around the US Embassy, you will not find any place to keep your luggage. So carry just the bare minimum required so that you feel more comfortable.

    Useful Links

    - THE U.S. NON IMMIGRANT VISA APPLICATION PROCESS
    - U.S VISA SECURITY REGULATIONS
    - GENERAL FAQS

    Hope this information is helpful to you, and best of luck for your interview.

    Creative Commons image credit: Flickr/alexfrance, vinothchandar, hughelectronic, architratan, striatic


  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business as usual stuff, and because I've been trying to understand the role by doing it. I think now, though, the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us).

    I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.)

    On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously).

    And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic:

    - Smart
    - Gets things done (thanks for these two, Joel)
    - Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one)

    *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again.

    We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is:

    - Excellent coder

    For test engineers, the non-exhaustive meaning of "gets things done" is:

    - Good at finding problems in software
    - Competent coder

    Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with.

    My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom.
    Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now:

    1. (LOTS of) People apply for jobs.
    2. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role, and the time of year**.
    3. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers.
    4. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the look out for cultural fit red flags as well.
    5. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer).
    6. We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team.
    7. Anyone who's made it this far will receive an offer from us.

    *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles.

    **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between.

    ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code, they're not going to work out.

    Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well.
    This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process, which looked like this:

    The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week, whilst I'd been trying to find time to do this. Here's what I mean.

    Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the down tools week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since this would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.)

    No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN!

    OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see:

    - How many people apply? (Wouldn't be surprised or alarmed to see this cut by a factor of ten.)
    - How many of them submit a good assessment? (More/less than at present?)
    - How much overhead is there for us in dealing with these assessments compared to now?
    - What are the success and failure rates at each interview stage compared to now?
    - How many people are we hiring at the end of it compared to now?
    I think it'll work because I hypothesize that, amongst other things:

    - It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment?
    - Candidates who would submit a shoddy application probably won't feel motivated to do the assessment.
    - Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment.
    - In general, only the better candidates will complete and submit the assessment.
    - Marking assessments is much less work, so we'll be able to deal with any increase that we see (hopefully we will see one).

    There are obviously other questions as well:

    - Is plagiarism going to be a problem?
    - Is there any way we can detect/discourage potential plagiarism?
    - How do we assess candidates' education and experience?
    - What about their ability to communicate in writing?
    - Do we still want them to submit a CV afterwards if they pass the assessment?
    - Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment?
    - How does this affect our relationship with recruitment agencies we might use to hire for these roles?

    So, what's the objective for next week's Down Tools Week? Pretty simple really - we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes!

    *I'm going to play dirty by offering them beer and chocolate during meetings.

    Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely

    The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvass other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site. I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?"

    Technorati Tags: recruitment, hr, developers, testers, red gate, cv, resume, cover letter, assessment, sea change


  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and it took advantage of concurrent statistics gathering, incremental statistics, and the not so well known "obj_filter_list" parameter of the DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outlined below with "obj_filter_list" still applies even when concurrent statistics gathering and/or incremental statistics gathering is disabled.

    The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partitions, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on the 5 partitioned tables as quickly as possible to ensure that it all finished before morning.

    Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of these sessions could run concurrently. Since each table has over one thousand partitions, that would definitely be a daunting task and would most likely keep my colleague up all night!

    In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and on multiple (sub)partitions within a table, concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine, but we really wanted to get this down to just one command. So how can we do that?

    You may be wondering why we didn't just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTIONS parameter set to 'GATHER STALE'. Unfortunately the statistics on the 5 partitioned tables were not stale, and enabling incremental statistics does not mark the existing statistics stale. Plus, how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us about the advantage of the "obj_filter_list" parameter of the DBMS_STATS.GATHER_SCHEMA_STATS procedure. The "obj_filter_list" parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB. Each entry in the collection has 5 fields: the schema name or object owner, the object type (e.g., 'TABLE' or 'INDEX'), the object name, the partition name, and the subpartition name. You don't have to specify all five fields for each entry; an empty field in an entry is treated as a wildcard (similar to the '%' character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object qualifies for statistics gathering as long as it satisfies the filter conditions in one entry.

    You first must create the collection of objects, and then gather statistics for the specified collection. It's probably easier to explain this with an example. I'm using the SH sample schema, but I needed a couple of additional partitioned tables to recreate my colleague's scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach.

    Step 0. Delete the statistics on the tables in the SH schema.
    Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level.
    Step 2. Enable incremental statistics for the 5 partitioned tables.
    Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command.

    Here, you will notice that we defined two variables of the DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly, due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command.

    Step 4. Confirm statistics were gathered on the 5 partitioned tables.

    Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := 'SH'; You will get the same result: since you have already specified the schema in the gather_schema_stats call, there is no need to specify ownname again in the obj_filter_lst. All of the names in an entry are normalized, i.e., uppercased, if they are not double quoted. So in the above example it is OK to use Obj_filter_lst(1).objname := 'sales'; However, if you have a table called 'MyTab' instead of 'MYTAB', then you need to specify Obj_filter_lst(1).objname := '"MyTab"'; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here.
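    Since the post's inline code listings aren't reproduced above, here is a minimal sketch of what Step 3 could look like, reconstructed from the parameter and field names described in the post (the SH table names match the example; treat this as an outline under those assumptions, not the author's exact script):

        DECLARE
          -- Objects to restrict the gather to; passed via obj_filter_list
          filter_lst  DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
          -- Receives the objects actually processed; required here
          -- because of bug 14539274
          obj_lst     DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
        BEGIN
          filter_lst.extend(5);
          filter_lst(1).ownname := 'SH';  filter_lst(1).objname := 'SALES';
          filter_lst(2).ownname := 'SH';  filter_lst(2).objname := 'SALES2';
          filter_lst(3).ownname := 'SH';  filter_lst(3).objname := 'SALES3';
          filter_lst(4).ownname := 'SH';  filter_lst(4).objname := 'COSTS';
          filter_lst(5).ownname := 'SH';  filter_lst(5).objname := 'COSTS2';

          -- One command replaces the five GATHER_TABLE_STATS calls
          DBMS_STATS.GATHER_SCHEMA_STATS(
            ownname         => 'SH',
            obj_filter_list => filter_lst,
            objlist         => obj_lst);
        END;
        /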
+Maria Colgan


  • PC runs very slowly for no apparent reason

    - by GalacticCowboy
    I have a Dell Latitude D820 that I've owned for about 2.5 years. It is a Core 2 Duo T7200 2.0GHz, with 2 GB of RAM, an 80 GB hard drive and an NVidia Quadro 120M video card. The computer was purchased in late November of 2006 with XP Pro, and included a free upgrade to Vista Business. (Vista was available on MSDN but not yet via retail, so the Vista Business upgrades weren't shipped until March of '07.) Since we had an MSDN subscription at the time, I installed Vista Ultimate on it pretty much as soon as I got it.

    It ran happily until sometime in the spring of 2007, when Media Center (which I had never used except to watch DVDs) started throwing some kind of bizarre SQL (CE?) error. This error would pop up at random times just while using the computer. Furthermore, Media Center would no longer start. I never identified the cause of this error. I had the Vista Business upgrade by this time, so I nuked the machine, installed XP and all the drivers, and then the Vista Business upgrade.

    Again, it ran happily for a few months and then started behaving badly once again. Vista Business doesn't have Media Center, so this exhibited completely unrelated symptoms. For no apparent reason, and at fairly random times, the machine would suddenly appear to freeze up or run very slowly. For example, launching a new application window (any app) might take 30-45 seconds to paint fully. However, Task Manager showed very low CPU load, memory usage, etc. I tried all the normal stuff (chkdsk, defrag, etc.) and ran several diagnostic programs to try to identify any problems, but none found anything. It eventually reached the point that the computer was all but unusable, so I nuked it again and installed XP. This time I decided to stick with XP instead of going to Vista. However, within the past couple of months it has started to exhibit the same symptoms in XP that I used to see in Vista.

    The computer is still under Dell warranty until December, but so far they aren't any help unless I can identify a specific problem. A friend (partner in a now-dead business) has an identical machine that was purchased at the same time. His machine exhibits none of these symptoms, which leads me to believe it is a hardware issue, but I can't figure out how to identify it. Any ideas? Utilities? Seen something similar? At this point I can't even identify any pattern to the behavior, but I would be willing to run a "stress test"-style app for as long as a couple of days if I had any hope that it would find something.

    EDIT July 17: I'm testing jerryjvl's answer regarding the video card, though I'm not sure it fully explains the symptoms yet. This morning I ran a video stress test. The test itself ran fine, but immediately afterward the PC started acting up again. I left ProcExp open, and various system processes were consuming 50-60% of the CPU with no apparent reason. For example, "services.exe" was eating about 40%, but the sum of its child processes wasn't higher than about 5%. I left it alone for several minutes to settle down, and then it was fine again. I used the "video card stability test" from firestone-group.com. Its output isn't very detailed, but it at least exercises the hardware pretty hard.

    EDIT July 22: Thanks for your excellent suggestions. Here is an update on what I have tried so far:

    - Ran memtest86, SeaTools (Seagate), the Hitachi drive fitness test, and the video card stability test (mentioned above). The video card test was the only one that seemed to produce any results, though the problem didn't occur during the actual test.
    - I defragged the drive (again...) with JkDefrag
    - I dropped the video card


  • Inconsistent file downloads of (what should be) the same file

    - by Austin A.
    I'm working on a system that archives large collections of timestamped images. Part of the system deals with saving an image to a growing .zip file. This morning I noticed that the log system said that an image was successfully downloaded and placed in the zip file, but when I downloaded the .zip (from an apache alias running on our server), the images didn't match the log. For example, although the log said that camera 3484 captured an image on January 17, 2011, when I download from the apache alias, the downloaded zip file only contains images up to January 14. So, I sshed onto the server and unzipped the file in its own directory, and that zip file has images from January 14 to today (January 17). What strikes me as odd is that this should be the exact same file as the one I downloaded from the apache alias.

    Other experiments: I scp-ed the file from the server to my local machine, and the zip file has the newer images. But when I use an SCP client (in this case, Fugu for OSX), I get the zip file with the older images. In short: unzipping the file on the server, or after downloading through scp or through wget, gives one zip file, but unzipping the file downloaded via Chrome, Firefox, or an SCP client gives a different zip file, when they should be exactly the same.

    Unzipping on the server...

    [user@server ~]$ cd /export1/amos/images/2011/84/3484/00003484/
    [user@server 00003484]$ ls -la
    total 6180
    drwxr-sr-x 2 user groupname      24 Jan 17 11:20 .
    drwxr-sr-x 4 user groupname      36 Jan 11 19:58 ..
    -rw-r--r-- 1 user groupname 6309980 Jan 17 12:05 2011.01.zip
    [user@server 00003484]$ unzip 2011.01.zip
    Archive:  2011.01.zip
      extracting: 20110114_140547.jpg
      extracting: 20110114_143554.jpg
    replace 20110114_143554.jpg? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
      extracting: 20110114_143554.jpg
      extracting: 20110114_153458.jpg
      (...bunch of files...)
      extracting: 20110117_170459.jpg
      extracting: 20110117_173458.jpg
      extracting: 20110117_180501.jpg

    Using wget through the apache alias:

    local:~ user$ wget http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
    --12:38:13--  http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
               => `2011.01.zip'
    Resolving example.com... ip.ip.ip.ip
    Connecting to example.com|ip.ip.ip.ip|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 6,327,747 (6.0M) [application/zip]
    100%[==========================================>] 6,327,747  1.03M/s  ETA 00:00
    12:38:56 (143.23 KB/s) - `2011.01.zip' saved [6327747/6327747]
    local:~ user$ unzip 2011.01.zip
    Archive:  2011.01.zip
      extracting: 20110114_140547.jpg
      (...same as before...)
      extracting: 20110117_183459.jpg

    Using scp to grab the zip:

    local:~ user$ scp user@server:/export1/amos/images/2011/84/3484/00003484/2011.01.zip .
    2011.01.zip                                   100% 6179KB 475.3KB/s   00:13
    local:~ user$ unzip 2011.01.zip
    Archive:  2011.01.zip
      extracting: 20110114_140547.jpg
      (...same as before...)
      extracting: 20110117_183459.jpg

    Using Fugu to download 2011.01.zip from /export1/amos/images/2011/84/3484/00003484/ gives images 20110113_090457.jpg through 20110114_010554.jpg.

    Using Firefox to download 2011.01.zip from http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip gives images 20110113_090457.jpg through 20110114_010554.jpg.

    Using Chrome gives the same results as Firefox.

    Relevant section from apache httpd.conf:

    # ScriptAlias: This controls which directories contain server scripts.
    # ScriptAliases are essentially the same as Aliases, except that
    # documents in the realname directory are treated as applications and
    # run by the server when requested rather than as documents sent to the client.
    # The same rules about trailing "/" apply to ScriptAlias directives as to
    # Alias.
    #
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

    Alias /zipfiles/ /export1/amos/images/


  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products, that would track all these things for you. But at that moment, we had no recourse but to write our own Powershell scripts to do it.

    Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"

    Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backups, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for.

    Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
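    To make the pieces above concrete, here is a minimal T-SQL sketch of frequent log backups plus offloaded consistency checks. The database name, logical file names and paths are placeholders, and in practice the restore would be scripted against your actual backup chain rather than a single full backup:

        -- Frequent transaction log backup (schedule via an agent job, e.g. every 5 minutes)
        BACKUP LOG MyDB TO DISK = N'R:\Backups\MyDB_log.trn';

        -- On a secondary box: prove the backup restores, then offload the full checks
        RESTORE DATABASE MyDB_Verify
            FROM DISK = N'R:\Backups\MyDB_full.bak'
            WITH MOVE N'MyDB'     TO N'D:\Data\MyDB_Verify.mdf',
                 MOVE N'MyDB_log' TO N'D:\Data\MyDB_Verify.ldf';
        DBCC CHECKDB (MyDB_Verify);

        -- On production: lightweight physical-only checks, sped up by
        -- trace flags 2562/2549 on SQL Server 2008 R2 SP1 CU4 and above
        DBCC TRACEON (2562, -1);
        DBCC TRACEON (2549, -1);
        DBCC CHECKDB (MyDB) WITH PHYSICAL_ONLY;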


  • Troubleshooting Website problems within the local network

    - by HaydnWVN
    We have an external website which opens fine on some PCs, yet seems to time out (or shows symptoms of timing out, though it never actually does) on others. It seems to only affect some of our newer HP Pro 3305 MT Workstations. All of them are running Win7 32bit SP1 with all updates. Older PCs (Win7 32bit SP1 & WinXP) are unaffected. Using Google Chrome & Firefox makes no difference. Opening the website in IE9 Compatibility Mode has exactly the same symptoms. All PCs are on the same local network (Workgroup), using the same DNS server & gateway (in-house) on the same internet connection, on the same subnet. There is no proxy server, no content filtering, no load balancing, etc. The only group policy in effect (locally) is for update scheduling. Local firewalls are all the same (Kaspersky WP4) and our external-facing firewall has no IP-specific settings. I have no control over the external website; traceroute shows the same destination on all PCs. It is a fairly popular website in our industry (horticulture) and I'm not aware of any other people (even other sites within our sister companies) with the same problem.

    Update: Used Fiddler2 to monitor the HTTP request; it seems it's not getting fulfilled for some reason?!

    Request sent:

    GET http://www.rhs.org.uk/ HTTP/1.1
    Host: www.rhs.org.uk
    Connection: keep-alive
    User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Log from Fiddler2 of the request:

    This session is not yet complete. Press F5 to refresh when session is complete for updated statistics.
    Request Count:  1
    Bytes Sent:     567 (headers:567; body:0)
    Bytes Received: 0   (headers:0; body:0)

    ACTUAL PERFORMANCE
    --------------
    ClientConnected:     17:02:33.720
    ClientBeginRequest:  17:02:39.118
    GotRequestHeaders:   17:02:39.118
    ClientDoneRequest:   17:02:39.118
    Determine Gateway:   0ms
    DNS Lookup:          0ms
    TCP/IP Connect:      46ms
    HTTPS Handshake:     0ms
    ServerConnected:     17:02:39.165
    FiddlerBeginRequest: 17:02:39.165
    ServerGotRequest:    17:02:39.165
    ServerBeginResponse: 00:00:00.000
    GotResponseHeaders:  00:00:00.000
    ServerDoneResponse:  00:00:00.000
    ClientBeginResponse: 00:00:00.000
    ClientDoneResponse:  00:00:00.000

    RESPONSE BYTES (by Content-Type)
    --------------
    ~headers~: 0

    Log of a successful request from a working PC (done this morning, excuse the timestamps being different from above):

    Request Count:  1
    Bytes Sent:     493    (headers:493; body:0)
    Bytes Received: 20,413 (headers:525; body:19,888)

    ACTUAL PERFORMANCE
    --------------
    ClientConnected:     08:22:47.766
    ClientBeginRequest:  08:22:47.766
    GotRequestHeaders:   08:22:47.766
    ClientDoneRequest:   08:22:47.766
    Determine Gateway:   0ms
    DNS Lookup:          26ms
    TCP/IP Connect:      30ms
    HTTPS Handshake:     0ms
    ServerConnected:     08:22:47.828
    FiddlerBeginRequest: 08:22:47.828
    ServerGotRequest:    08:22:47.828
    ServerBeginResponse: 08:22:48.905
    GotResponseHeaders:  08:22:48.905
    ServerDoneResponse:  08:22:48.905
    ClientBeginResponse: 08:22:48.905
    ClientDoneResponse:  08:22:48.905
    Overall Elapsed:     00:00:01.1388020

    RESPONSE BYTES (by Content-Type)
    --------------
    text/html: 19,888
    ~headers~: 525

    So my question has evolved into: what is the difference between the two requests, and how do I determine why one PC is not getting a reply to its GET request?


  • unattended-upgrades does not reboot

    - by Cheiron
    I am running Debian 7 stable with unattended-upgrades (every morning at 6 AM) to make sure I am always fully updated. I have the following config:

    $ cat /etc/apt/apt.conf.d/50unattended-upgrades
    // Automatically upgrade packages from these origin patterns
    Unattended-Upgrade::Origins-Pattern {
            // Archive or Suite based matching:
            // Note that this will silently match a different release after
            // migration to the specified archive (e.g. testing becomes the
            // new stable).
            "o=Debian,a=stable";
            "o=Debian,a=stable-updates";
            // "o=Debian,a=proposed-updates";
            "origin=Debian,archive=stable,label=Debian-Security";
    };

    // List of packages to not update
    Unattended-Upgrade::Package-Blacklist {
    //      "vim";
    //      "libc6";
    //      "libc6-dev";
    //      "libc6-i686";
    };

    // This option allows you to control if on a unclean dpkg exit
    // unattended-upgrades will automatically run
    //   dpkg --force-confold --configure -a
    // The default is true, to ensure updates keep getting installed
    //Unattended-Upgrade::AutoFixInterruptedDpkg "false";

    // Split the upgrade into the smallest possible chunks so that
    // they can be interrupted with SIGUSR1. This makes the upgrade
    // a bit slower but it has the benefit that shutdown while a upgrade
    // is running is possible (with a small delay)
    //Unattended-Upgrade::MinimalSteps "true";

    // Install all unattended-upgrades when the machine is shuting down
    // instead of doing it in the background while the machine is running
    // This will (obviously) make shutdown slower
    //Unattended-Upgrade::InstallOnShutdown "true";

    // Send email to this address for problems or packages upgrades
    // If empty or unset then no email is sent, make sure that you
    // have a working mail setup on your system. A package that provides
    // 'mailx' must be installed. E.g. "[email protected]"
    Unattended-Upgrade::Mail "root";

    // Set this value to "true" to get emails only on errors. Default
    // is to always send a mail if Unattended-Upgrade::Mail is set
    Unattended-Upgrade::MailOnlyOnError "true";

    // Do automatic removal of new unused dependencies after the upgrade
    // (equivalent to apt-get autoremove)
    //Unattended-Upgrade::Remove-Unused-Dependencies "false";

    // Automatically reboot *WITHOUT CONFIRMATION* if
    // the file /var/run/reboot-required is found after the upgrade
    Unattended-Upgrade::Automatic-Reboot "true";

    // Use apt bandwidth limit feature, this example limits the download
    // speed to 70kb/sec
    //Acquire::http::Dl-Limit "70";

    As you can see, Automatic-Reboot is true and thus the server should automatically reboot. Last time I checked, the server had been online for over 100 days, which means that the update from Debian 7.1 to Debian 7.2 happened while the server was up (and indeed, all updates were installed). But that involves kernel updates, which means that the server should have rebooted. It did not. The server was running very slowly, so I rebooted it, which fixed that. I did some research and found out that unattended-upgrades responds to the reboot-required file in /var/run/. I touched this file and waited one week; the file still exists and the server did not reboot. So I think that unattended-upgrades ignores the auto-reboot part. So, am I doing something wrong here? Why did the server not restart? The upgrade part works perfectly, by the way; it's just the reboot part that does not seem to work as it should.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

Morning Coffee

When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products that would track all these things for you. But at that moment, we had no choice but to write our own PowerShell scripts to do it.

Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

"But, we have a cluster...we don't need backups"

Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

Backup, fine. How often do I take a backup?

The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective.
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

A backup is nothing more than an untested restore

Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here.

Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

Where to back up to? Network share? Locally? SAN volume?

This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that; it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
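As an illustration of the "interrogate the servers" check described above, here is a rough Python equivalent of that central script (the author used PowerShell; the server names, the ODBC driver string, and the 24-hour threshold are assumptions for the sketch). It asks each instance which databases have no full backup recorded in msdb within the last day:

    import pyodbc

    SERVERS = ["sql01", "sql02"]  # hypothetical instance names

    QUERY = """
    SELECT d.name
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS b
           ON b.database_name = d.name
          AND b.type = 'D'                                 -- full backups only
          AND b.backup_finish_date > DATEADD(HOUR, -24, GETDATE())
    WHERE d.name <> 'tempdb'
    GROUP BY d.name
    HAVING COUNT(b.backup_set_id) = 0                      -- no recent full backup
    """

    for server in SERVERS:
        conn = pyodbc.connect(
            f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={server};"
            "Trusted_Connection=yes"
        )
        missing = [row.name for row in conn.cursor().execute(QUERY)]
        if missing:
            print(f"{server}: no full backup in 24h for {', '.join(missing)}")
        conn.close()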

    Read the article

  • All my sites are 403 but the server is running. Errors on startup

    - by Craig
We gave access to a contractor to install a firewall, and somehow while he was doing it he fracked something up. Everything went offline about 24 hours ago and we are effectively out of business until I solve this, and the person who messed things up is not returning calls. I found a few errors. First, I'm not a server guy - I can look at log files, and normally everything runs fine. All 'services' are running according to 1and1 server monitoring, and mail is being delivered just fine. The whole thing was offline until I (probably stupidly) updated the kernel from 6.2 to 6.3 this morning, and I got everything back except the HTTP access. All the domains (~200 of them) are returning a 403 error and nothing is recorded in the access log. On every restart I see this error in the messages log file:

    init: Failed to spawn ttyS0 main process: unable to execute: No such file or directory

and a little later these:

    kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
    kernel: Hardware name: X9SCL/X9SCM
    kernel: Modules linked in: xt_iprange iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 ext4 jbd2 serio_raw i2c_i801 i2c_core sg iTCO_wdt iTCO_vendor_support e1000e ext3 jbd mbcache raid1 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
    kernel: Pid: 367, comm: md3_raid1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1
    kernel: Call Trace:
    kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0
    kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20
    kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d
    kernel: [<ffffffff8126a4d9>] ? cpumask_next_and+0x29/0x50
    kernel: [<ffffffff813e9c05>] ? md_super_wait+0x55/0x90
    kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40
    kernel: [<ffffffff813ebf46>] ? md_update_sb+0x206/0x3f0
    kernel: [<ffffffff813ee922>] ? md_check_recovery+0x3f2/0x6d0
    kernel: [<ffffffffa005b129>] ? raid1d+0x49/0x1050 [raid1]
    kernel: [<ffffffff814ed985>] ? schedule_timeout+0x215/0x2e0
    kernel: [<ffffffff814ef447>] ? _spin_unlock_irqrestore+0x17/0x20
    kernel: [<ffffffff813eb336>] ? md_thread+0x116/0x150
    kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40
    kernel: [<ffffffff813eb220>] ? md_thread+0x0/0x150
    kernel: [<ffffffff810906a6>] ? kthread+0x96/0xa0
    kernel: [<ffffffff8100c14a>] ? child_rip+0xa/0x20
    kernel: [<ffffffff81090610>] ? kthread+0x0/0xa0
    kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20

And something is wrong with Named/BIND, resulting in the same error for all domains:

    zone DOMAINEXAMPLE.com/IN: loading from master file DOMAINEXAMPLE.com failed: file not found
    zone DOMAINEXAMPLE.com/IN: not loaded due to errors.
    _default/DOMAINEXAMPLE.com/IN: file not found

I'm pretty sure this is not enough information to solve the problem, but I'm willing to engage someone who can work this out for me. Any help would be greatly appreciated.
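One cheap check worth running in the meantime (my own suggestion, with an assumed /var/www layout; adjust for the actual 1and1 image) is whether the docroots are still traversable by the web server, since a blanket 403 across every vhost frequently comes down to ownership or mode changes made during the botched work:

    import os
    import stat

    DOCROOT_PARENT = "/var/www"  # assumed layout; not confirmed by the post

    for name in sorted(os.listdir(DOCROOT_PARENT)):
        path = os.path.join(DOCROOT_PARENT, name)
        mode = os.stat(path).st_mode
        traversable = bool(mode & stat.S_IXOTH)  # 'others' need the x bit to descend
        print(f"{path}: {'ok' if traversable else 'NOT world-traversable'}")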

    Read the article

  • Big data: An evening in the life of an actual buyer

    - by Jean-Pierre Dijcks
Here I am, and this is an actual story of one of my evenings, trying to spend money with a company and ultimately failing. I just gave up and bought a service from another vendor, not the incumbent. Here is that story and how I think big data could actually fix this (and potentially prevent some of it from happening). In the end this story should illustrate how big data can benefit me (get me what I want without causing grief) and the company I am trying to buy something from. Note: lots of details left out; I have no intention of being the annoyed blogger moaning about a specific company.

What did I want to get?

We watch TV, we have internet and we do have a land line. The land line is from a different vendor than the TV and the internet. I have decided that this makes no sense and I was going to get a bundle (no need to infer who this is, I just picked the generic bundle word as this is what I want to get) of all three services, as this seems to save me money. I also want to not talk to people; I just want to click on a website when I feel like it and get it all sorted. I do think that is reality. I want to just do my shopping at 9.30pm while watching silly reruns on TV.

Problem 1 - Bad links

So, I'm an existing customer of the company I want to buy my bundle from. I go to the website, I click on offers. Turns out they are offers for new customers. After grumbling about how good they are, I click on offers for existing customers. Bummer, it goes to offers for new customers, so I click again on the link for offers for existing customers. No cigar... it just does not work.

Big data solutions:

1) Do not show an existing customer the offers for new customers unless they are the same. This is only partially doable without login, but if a customer logs in, the application should always know that this is an existing customer. In general, imagine I do this from my home going through the internet service of this vendor to their domain... an instant filter should move me into the "existing customer route".

2) Flag dead or incorrect links. I've clicked the link for "existing customer offers" at least 3 times in under 5 seconds... Identifying patterns like this is easy in Hadoop and can very quickly produce a list of potentially incorrect links. No need for real-time fixing; just the fact that this link can be proactively fixed across my entire web domain is a good thing. Preventative maintenance!

Problem 2 - Purchase cannot be completed

Apart from the fact that the browsing pattern to actually get to what I want is poorly designed, my purchase never gets past a specific point. In other words, I put something into my shopping cart, and when I want to move on, the application either crashes (with me going to an error page) or hangs or goes into something like chat. So I try again, and again and again. I think I tried this entire path (while being logged in!!) at least 10 times over the course of 20 minutes. I also clicked on the feedback button and, frustrated as I was, tried to explain that this did not work...

Big data solutions:

1) This website does shopping cart analysis. I got an email the next day stating I have things in my shopping cart, just click here to complete my purchase. After the above experience, this just added insult to my pain...

2) What should have happened is a Hadoop job going over all logged-in customers that are on the buy flow.
It should flag anyone who is trying (multiple attempts from the same user to do the same thing), analyze the shopping cart and the clicks to identify what the customer wants, plus the feedback provided (note: always own your own website feedback, never just farm this out!!), and in a short turnaround time (30 minutes to 2 hours or so) email me with a link to complete my purchase. Not with a link to my shopping cart 12 hours later, but a link to actually achieve what I wanted...

Why should this company go through the big data effort? I do believe this is relatively easy to do using our Oracle Event Processing and Big Data Appliance solutions combined. It is almost so simple (to my mind) that it makes no sense that this is not in place. But, now I am ranting...

Why is this interesting? It is because of $$$$. After trying really hard - I mean I did this all in the evening, and again in the morning before going to work - I kept on failing. But I really wanted this to work... so an email that said "sorry, we noticed you tried to get a bundle (the log knows what I wanted and where I failed, so this is easy to generate); here is the link to click and complete your purchase, and here are 2 movies on us as an apology" would have kept me as a customer, and got the additional $$$$ per month for the next couple of years. It would also lead to upsell on my phone package etc. Instead, I went to a completely different company and bought the service from them. Lost money for company A, negative sentiment for company A, and me telling this story at the water cooler, so I'm influencing more people to think negatively about company A. All in all, a loss of easy money and a ding in sentiment and image, where a relatively simple solution exists and can be in place on the software I describe routinely in this blog... For those who are coming to OpenWorld and maybe see value in solving the above, or are thinking of how to solve this, come visit us in Moscone North - Oracle Red Lounge or in the Engineered Systems Showcase.
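To make the flagging job concrete, here is a toy Python sketch of the batch pass argued for above (the log format, user IDs, threshold, and window are all assumptions for illustration): find logged-in users who hit the same buy-flow step repeatedly in a short window, so they can be emailed a link that completes the purchase rather than one that reopens the cart.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # (user_id, step, timestamp) - stand-in for parsed clickstream records
    clicks = [
        ("jp", "bundle-checkout", datetime(2012, 9, 20, 21, 30)),
        ("jp", "bundle-checkout", datetime(2012, 9, 20, 21, 33)),
        ("jp", "bundle-checkout", datetime(2012, 9, 20, 21, 41)),
    ]

    WINDOW = timedelta(minutes=30)
    THRESHOLD = 3  # attempts at the same step before we intervene

    attempts = defaultdict(list)
    for user, step, ts in clicks:
        attempts[(user, step)].append(ts)

    for (user, step), times in attempts.items():
        times.sort()
        # simple sliding check: THRESHOLD attempts inside one window
        for i in range(len(times) - THRESHOLD + 1):
            if times[i + THRESHOLD - 1] - times[i] <= WINDOW:
                print(f"follow up with {user}: {THRESHOLD}+ tries at '{step}'")
                break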

    Read the article

  • BSOD Code 16, artifacts all over the place (gtx 260)

    - by belinea
I have the following, quite dated rig:

    E8400 Core 2 Duo CPU
    Intel Dragontail Peak DP35DP motherboard (Intel Bearlake P35 chipset)
    4GB RAM
    GeForce GTX 260
    Corsair 650W PSU
    Windows 7 64-bit

The following things have happened in the last few days. I first decided to update my Nvidia drivers to the latest version. That was 4 days ago. The PC worked fine for 2 days and I was able to play a few games as well without any problems. Then, 2 days ago, the first crash happened while playing the new XCOM game: BSOD Code 16. Just the blue screen, no artifacts. The PC rebooted and worked well again; I continued playing the game for another 2 hours and went to sleep.

The next evening I tried to play some BF3 multiplayer (I used to play on LOW settings). Approx. 10 minutes into the game, red/pink-ish artifacts appeared on the screen and the game quit to desktop. I restarted the game, and another 3-4 minutes afterwards there was another crash to desktop, this time followed shortly by a BSOD Code 16. From that moment I started seeing artifacts on random startups, including the Windows loading screen and the BIOS itself. Windows would still load, but soon enough it would BSOD on a simple task like opening an Internet browser.

Today I get tons of artifacts (little small red dashes all over the screen) in the BIOS, on the loading screen, and in normal Windows mode as well as in safe mode. I suspect it wouldn't be the drivers, but I tried removing and sweeping them entirely in safe mode. The PC would still start with artifacts all over the place, but would load normal mode, just without the driver, in the default lowest resolution. As soon as proper Nvidia drivers are installed, though, and the PC is rebooted, Windows doesn't load at all, as a BSOD now appears on the loading screen. However, again, if I go to safe mode and remove the drivers, normal mode launches fine. So the crash obviously happens only at high resolution.

I opened my machine this morning and gave it a proper cleaning, even though it wasn't heavily dusted. It didn't help, and the number of artifacts seems to increase with every PC restart. I write these words in safe mode, which works, but I have to look through all the red dots and dashes. I don't have a built-in GPU chipset, so I can't try removing my GeForce card, nor can I borrow a GPU from anyone else.

What are my options? I was looking into getting a completely new rig around Christmas, so I'm not freaking out about this. If everything points to a hardware issue, I may simply decide to get the new machine earlier and not bother with fixing this one. However, it would be great to learn more if this is indeed a situation that has slim chances of getting sorted. I realize BSOD Code 16 is a rather popular topic online, but every story seems a bit different and there can be a number of issues behind it - hence a new thread.

    Read the article

  • Vista 64-bit, DISK BOOT FAILURE

    - by weka
So I have this Acer Aspire AX3200-U3600A with Windows Vista (64-bit). Every night I turn it off and turn it back on in the morning. Around three weeks ago, I did a fresh factory reimage. Good as new. Then around two days ago, when I turned it on, I noticed it was running extremely slow. As in, it would often freeze up while I had multiple applications open, when it usually never froze up. So I decided to restart my computer. Big mistake. My computer froze right after I clicked shut-down. I waited a while. Nothing. Waited some minutes. Nope. I decided to shut it down by pressing the power button. Here is where the problems begin.

When I turned it back on, I saw the Windows logo and loading bar and then it loaded to black. I turned it off again forcefully by power button and then once more... then I got:

    AMD Data Change... Update New Data to DMI!

then the screen clears and I get:

    AHCI Option ROM BIOS Revision: 01.05.92
    Date: 02-19-2008
    Copyright (c) 2006-2008 Phoenix Technologies, LTD
    Port 01: Reset Port Error!!
    Port 02:

then the screen clears again, but this time, this loads from the bottom:

    Nvidia Boot Agent 249.0542 (copyright stuff... blah blah)
    PXE-E61: Media test failure, check cable.
    PXE-M0F: Exiting Nvidia Boot Agent
    DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER.

So I try to go into Safe Mode. Well, first of all it doesn't load as fast. After it loads disk.sys from windows/drivers, it will wait a while (2-3 mins) THEN load. However, it loads the Acer eRecovery Management Tool. I have three options: Reset computer to factory default, Restore computer from user's backup, or Exit. However, the top two options are gray and disabled, whereas the Exit is in blue and definitely clickable. So obviously safe mode is not there...

A strong thing to note: in the beginning, when all of this started, I did a Boot Windows Normal from pressing F8 and I got to my desktop! It logged me in. I could see the icons on my files. However, my desktop was extremely slow, as in when I clicked on the Start menu, it would wait a while, then load up the menu with JUST the gradient, no text or icons... so as you can see... it saw my HDD?

Also, before anyone asks, I have NO USB plugged in. My mouse and keyboard are not USB inputs, I assure you. And this came without a recovery CD, AND when I went into the BIOS to change the BOOT ORDER, I did NOT see a CD-ROM option. And when I tried pressing ALT+F10 to get into Acer eRecovery Management, the top two options were disabled as well.

But sometimes on start-up, I get:

    Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer.
    Status: 0xc00000e9
    Info: An unexpected I/O error has occurred.

Then I tried Last Known Good Configuration Settings; that gives me a BSOD. What should I do?

    Read the article

  • Hosed Windows 7 permissions

    - by Anthony
Here is the most interesting thing I've noticed since the problems started: if I go into a Control Panel/system module (in this case the Resource Monitor) that has a "Check Online" type option, Firefox (my default browser) opens right up without a problem. But if I just start Firefox from any shortcut (start menu, desktop, etc.), the Firefox process starts up (and the start menu icon starts glowing), only to end without notice a few seconds later.

Possibly related: if I start up in Safe Mode (w/o Networking, but I haven't tried with it yet), I can start up FF or Chrome just fine, but if I attempt to open Chrome normally, I get a permissions error. Opera and Safari seem to be okay (mostly); Safari crashes when I try to download any files.

All of the above leads me to believe that some (but clearly not all) core files have messed-up permissions. Or rather, that I no longer have permission. The system still does, based on Firefox opening without fail when the system initiates it. I've run MS Forefront once in normal mode and Malwarebytes twice in normal mode and once in safe mode. One trojan was found and deleted, but the problem persists.

Two other things worth mentioning: I accidentally duplicated my library... I thought I'd try to add the "Internet" folder to my start menu, next to Music and Downloads. The first advanced thing I tried was "create new library". I clearly misunderstood what this means. I thought it was a way to add virtual folders to the library (which I thought, in turn, would allow me to choose it as a link on the start menu), but instead it recreated my already existing user folder, AppData and all. I didn't notice this until today.

Then I tried setting permissions for my User folder to full control, recursively... Confused but not giving up, I thought I could maybe create a shortcut to the NetHood folder manually, but instead got hit with an access denied error. So I tried to change the permission levels for all sub-folders of my user folder so that I had full control. I got several access denied errors along the way. At this point I gave up, went out, ended up caught in the rain and stuck on a friend's couch, and showed up late for work the next day. Thanks for nothing, Microsoft.

When I finally got home today (20 hours later), I noticed that Firefox was acting really strange. I tried opening Chrome to see if the problem was client-side or server-side, and instead got the above-mentioned "you don't have permission to open this program" alert. And I think that's the whole story. Oh, I also did a system restore, choosing a point from this morning (an auto update); it worked, but the problem wasn't fixed. And then all the earlier restore points were gone.

So the questions are: a) is there a way to set the admin and user privs back to "default"? b) would this, in anyone's expert opinion, fix the problems I'm having? c) how come being logged in as an admin isn't the same as being logged in with admin privs? It seems that half the time I have to use "run as administrator" for fairly standard things, because I'm being treated as me-the-user and not me-the-admin. Thanks for reading.
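On question (a), one common first step is resetting the profile's ACLs back to the inherited defaults with icacls; a cautious sketch follows (the profile path is an assumption, it must run from an elevated prompt, and backing up first is wise):

    import subprocess

    profile = r"C:\Users\Anthony"  # hypothetical profile path; adjust as needed

    # /reset -> replace explicit ACLs with inherited ones
    # /t     -> recurse into subfolders
    # /c     -> continue past per-file errors
    result = subprocess.run(
        ["icacls", profile, "/reset", "/t", "/c"],
        capture_output=True, text=True,
    )
    print(result.stdout[-2000:])  # tail of the (potentially huge) output
    print(result.stderr)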

    Read the article

  • Giving an Error Object Expected Line 48 Char 1

    - by Leslie Peer
Giving an Error Object Expected, Line 48 Char 1 - what did I do wrong??? (*Note: line numbers are for reference only, not on the original web page.*)

    <HTML><HEAD><TITLE></TITLE>
    <META http-equiv=Content-Type content="text/html; charset=utf-8">
    <META content="Leslie Peer" name=author>
    <META content="Created with Trellian WebPage" name=description>
    <META content="MSHTML 6.00.6000.16809" name=GENERATOR>
    <META content=Index name=keywords>
    <STYLE type=text/css>
    BODY { COLOR: #000000; BACKGROUND-REPEAT: repeat; FONT-FAMILY: Accent SF, Arial, Arial Black, Arial Narrow, Century Gothic, Comic Sans MS, Courier, Courier New, Georgia, Microsoft Sans Serif, Monotype Corsiva, Symbol, Tahoma, Times New Roman; BACKGROUND-COLOR: #666666 }
    A { FONT-SIZE: 14px; FONT-FAMILY: Arial Black, Bookman Old Style, DnD4Attack, Lucida Console, MS Serif, MS Outlook, MS Sans Serif, Rockwell Extra Bold, Roman, Star Time JL, Tahoma, Terminal, Times New Roman, Verdana, Wingdings 2, Wingdings 3, Wingdings }
    A:link { COLOR: #9966cc; TEXT-DECORATION: underline }
    A:visited { COLOR: #66ff66; TEXT-DECORATION: underline }
    A:hover { COLOR: #ffff00; TEXT-DECORATION: underline }
    A:active { COLOR: #ff0033; TEXT-DECORATION: underline }
    H1 { FONT-SIZE: 25px; COLOR: #9966cc; FONT-FAMILY: Century Gothic }
    H2 { FONT-SIZE: 20px; COLOR: #ff33cc; FONT-FAMILY: Century Gothic }
    H3 { FONT-SIZE: 18px; COLOR: #6666cc; FONT-FAMILY: Century Gothic }
    H4 { FONT-SIZE: 15px; COLOR: #00cc33; FONT-FAMILY: Century Gothic }
    H5 { FONT-SIZE: 10px; COLOR: #ffff33; FONT-FAMILY: Century Gothic }
    H6 { FONT-SIZE: 5px; COLOR: #996666; FONT-FAMILY: Century Gothic }
    </STYLE>
    line 46-<SCRIPT>
    line 47- CharNum=6;
    line 48-var Character=newArray();Character[0]="Larry Lightfoot";Character[1]="Sam Wrightfield";Character[2]="Gavin Hartfild";Character[3]="Gail Quickfoot";Character[4]="Robert Gragorian";Character[5]="Peter Shain";
    line 49-var ExChar=newArray();ExChar[0]="Tabor Bloomfield";
    line 50-var Class=newArray();Class[0]="MagicUser";Class[1]="Fighter";Class[2]="Fighter";Class[3]="Thief";Class[4]="Cleric";Class[5]="Fighter";
    line 51-var ExClass=newArray();ExClass[0]="MagicUser";
    line 52-var Level=newArray();Level[0]="2";Level[1]="1";Level[2]="1";Level[3]="2";Level[4]="2";Level[5]="1";
    line 53-var ExLevel=newArray();ExLevel[0]="23";
    line 54-var Hpts=newArray();Hpts[0]="6";Hpts[1]="14";Hpts[2]="13";Hpts[3]="8";Hpts[4]="12";Hpts[5]="15";
    line 55-var ExHpts=newArray();ExHpts[0]="145";
    line 56-var Armor=newArray();Armor[0]="Cloak";Armor[1]="Splinted Armor";Armor[2]="Chain Armor";Armor[3]="Leather Armor";Armor[4]="Chain Armor";Armor[5]="Splinted Armor";
    line 57-var ExArmor=newArray();ExArmor[0]="Robe of Protection +5";
    line 58-var Ac1=newArray();Ac1[0]="0";Ac1[1]="3";Ac1[2]="3";Ac1[3]="4";Ac1[4]="2";Ac1[5]="3";
    line 59-var ExAc=newArray();ExAc[0]="5";
    line 60-var Armor1b=newArray();Armor1b[0]="Ring of Protection +1";Armor1b[1]="Small Shield";Armor1b[2]="Small Shield";Armor1b[3]="Wooden Shield";Armor1b[4]="Large Shield";Armor1b[5]="Small Shield";
    line 61-var ExArmor1b=newArray();ExArmor1b[0]="Ring of Protection +5";
    line 62-var Ac2=newArray();Ac2[0]="1";Ac2[1]="1";Ac2[2]="1";Ac2[3]="1";Ac2[4]="1";Ac2[5]="1";
    line 63-var ExAc1b=newArray();ExAc1b[0]="5"
    line 64-var Str=newArray();Str[0]="15";Str[1]="16";Str[2]="14";Str[3]="13";Str[4]="14";Str[5]="13";
    line 65-var ExStr=newArray();ExStr[0]=21;
    line 66-var Int=newArray();Int[0]="17";Int[1]="11";Int[2]="12";Int[3]="13";Int[4]="14";Int[5]="13";
    line 67-var ExInt=newArray();ExInt[0]="19";
    line 68-var Wis=newArray();Wis[0]="17";Wis[1]="12";Wis[2]="14";Wis[3]="13";Wis[4]="14";Wis[5]="12";
    line 69-var ExWis=newArray();ExWis[0]="18";
    line 70-var Dex=newArray();Dex[0]="15";Dex[1]="14";Dex[2]="13";Dex[3]="15";Dex[4]="14";Dex[5]="12";
    line 71-var ExDex=newArray();ExDex[0]="19";
    line 72-var Con=newArray();Con[0]="16";Con[1]="15";Con[2]="16";Con[3]="13";Con[4]="12";Con[5]="10";
    line 73-var ExCon=newArray();ExCon[0]="19";
    line 74-var Chr=newArray();Chr[0]="16";Chr[1]="14";Chr[2]="13";Chr[3]="12";Chr[4]="14";Chr[5]="13";
    line 75-var ExChr=newArray();ExChr[0]="21";
    line 76-var Expt=newArray();Expt[0]="45";Expt[1]="21";Expt[2]="16";Expt[3]="18";Expt[4]="22";Expt[5]="34";
    line 77-var ExExpt=newArray();ExExpt[0]="245678";
    line 78-var ExBp=newArray();ExBp[0]="Unknown";ExBp[1]="Extrademensional Plane World of Amborsia";ExBp[2]="Evil Wizard Banished for Mass Geniocodes";
    line 79-</SCRIPT>
    line 80-</HEAD>
    line 81-<BODY>

    Read the article

  • How to count each digit in a range of integers?

    - by Carlos Gutiérrez
Imagine you sell those metallic digits used to number houses, locker doors, hotel rooms, etc. You need to find how many of each digit to ship when your customer needs to number doors/houses:

    1 to 100
    51 to 300
    1 to 2,000 with zeros to the left

The obvious solution is to do a loop from the first to the last number, convert the counter to a string with or without zeros to the left, extract each digit and use it as an index to increment an array of 10 integers. I wonder if there is a better way to solve this, without having to loop through the entire range of integers. Solutions in any language or pseudocode are welcome.

Edit: Answers review

John at CashCommons and Wayne Conrad comment that my current approach is good and fast enough. Let me use a silly analogy: if you were given the task of counting the squares on a chess board in less than 1 minute, you could finish the task by counting the squares one by one, but a better solution is to count the sides and do a multiplication, because you may later be asked to count the tiles in a building. Alex Reisner points to a very interesting mathematical law that, unfortunately, doesn't seem to be relevant to this problem. Andres suggests the same algorithm I'm using, but extracting digits with %10 operations instead of substrings. John at CashCommons and phord propose pre-calculating the digits required and storing them in a lookup table or, for raw speed, an array. This could be a good solution if we had an absolute, unmovable, set-in-stone maximum integer value. I've never seen one of those. High-Performance Mark and strainer computed the needed digits for various ranges. The result for one million seems to indicate there is a proportion, but the results for other numbers show different proportions. strainer found some formulas that may be used to count digits for numbers that are a power of ten. Robert Harvey had a very interesting experience posting the question at MathOverflow. One of the math guys wrote a solution using mathematical notation. Aaronaught developed and tested a solution using mathematics. After posting it he reviewed the formulas originating from MathOverflow and found a flaw in them (point to Stack Overflow :). noahlavine developed an algorithm and presented it in pseudocode.

A new solution

After reading all the answers, and doing some experiments, I found that for a range of integers from 1 to 10^n - 1:

    For digits 1 to 9, n*10^(n-1) pieces are needed
    For digit 0, if not using leading zeros, n*10^(n-1) - ((10^n - 1) / 9) are needed
    For digit 0, if using leading zeros, n*10^(n-1) - n are needed

The first formula was found by strainer (and probably by others), and I found the other two by trial and error (but they may be included in other answers). For example, if n = 6, the range is 1 to 999,999:

    For digits 1 to 9 we need 6*10^5 = 600,000 of each one
    For digit 0, without leading zeros, we need 6*10^5 - (10^6 - 1)/9 = 600,000 - 111,111 = 488,889
    For digit 0, with leading zeros, we need 6*10^5 - 6 = 599,994

These numbers can be checked using High-Performance Mark's results. Using these formulas, I improved the original algorithm. It still loops from the first to the last number in the range of integers, but if it finds a number which is a power of ten, it uses the formulas to add to the digit counts the quantity for a full range of 1 to 9 or 1 to 99 or 1 to 999, etc. Here's the algorithm in pseudocode:

    integer First,Last   //First and last number in the range
    integer Number       //Current number in the loop
    integer Power        //Power is the n in 10^n in the formulas
    integer Nines        //Nines is the result of 10^n - 1, 10^5 - 1 = 99999
    integer Prefix       //First digits in a number. For 14,200, prefix is 142
    array 0..9 Digits    //Will hold the count for all the digits

    FOR Number = First TO Last
        CALL TallyDigitsForOneNumber WITH Number,1  //Tally the count of each digit
                                                    //in the number, increment by 1
        //Start of optimization. Comments are for Number = 1,000 and Last = 8,000.
        Power = Zeros at the end of Number          //For 1,000, Power = 3
        IF Power > 0                                //The number ends in 0 00 000 etc
            Nines = 10^Power - 1                    //Nines = 10^3 - 1 = 1000 - 1 = 999
            IF Number+Nines <= Last                 //If 1,000+999 < 8,000, add a full set
                Digits[0-9] += Power*10^(Power-1)   //Add 3*10^(3-1) = 300 to digits 0 to 9
                Digits[0] -= Power                  //Adjust digit 0 (leading zeros formula)
                Prefix = First digits of Number     //For 1000, prefix is 1
                CALL TallyDigitsForOneNumber WITH Prefix,Nines  //Tally the count of each
                                                                //digit in prefix,
                                                                //increment by 999
                Number += Nines                     //Increment the loop counter 999 cycles
            ENDIF
        ENDIF
        //End of optimization
    ENDFOR

    SUBROUTINE TallyDigitsForOneNumber PARAMS Number,Count
        REPEAT
            Digits[Number % 10] += Count
            Number = Number / 10
        UNTIL Number = 0

For example, for the range 786 to 3,021, the counter will be incremented:

    By 1 from 786 to 790 (5 cycles)
    By 9 from 790 to 799 (1 cycle)
    By 1 from 799 to 800
    By 99 from 800 to 899
    By 1 from 899 to 900
    By 99 from 900 to 999
    By 1 from 999 to 1000
    By 999 from 1000 to 1999
    By 1 from 1999 to 2000
    By 999 from 2000 to 2999
    By 1 from 2999 to 3000
    By 1 from 3000 to 3010 (10 cycles)
    By 9 from 3010 to 3019 (1 cycle)
    By 1 from 3019 to 3021 (2 cycles)

Total: 28 cycles. Without optimization: 2,235 cycles. Note that this algorithm solves the problem without leading zeros. To use it with leading zeros, I used a hack: if the range 700 to 1,000 with leading zeros is needed, use the algorithm for 10,700 to 11,000 and then subtract 1,000 - 700 = 300 from the count of digit 1.

Benchmark and source code

I tested the original approach, the same approach using %10, and the new solution for some large ranges, with these results:

    Original             104.78 seconds
    With %10              83.66
    With Powers of Ten     0.07

(A screenshot of the benchmark application accompanied the original post.) If you would like to see the full source code or run the benchmark, use these links: Complete source code (in Clarion): http://sca.mx/ftp/countdigits.txt - Compilable project and win32 exe: http://sca.mx/ftp/countdigits.zip

Accepted answer

noahlavine's solution may be correct, but I just couldn't follow the pseudocode; I think there are some details missing or not completely explained. Aaronaught's solution seems to be correct, but the code is just too complex for my taste. I accepted strainer's answer, because his line of thought guided me to develop this new solution.
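To make the three formulas concrete, here is a small Python check (my translation, not the asker's Clarion code) that compares the closed forms for the range 1 to 10^n - 1 against a brute-force tally:

    from collections import Counter

    def brute_force(first: int, last: int) -> Counter:
        tally = Counter()
        for number in range(first, last + 1):
            for ch in str(number):
                tally[int(ch)] += 1
        return tally

    def closed_form(n: int) -> Counter:
        """Digit counts for 1 .. 10**n - 1, without leading zeros."""
        tally = Counter()
        for d in range(1, 10):
            tally[d] = n * 10 ** (n - 1)                       # digits 1-9: n*10^(n-1)
        tally[0] = n * 10 ** (n - 1) - (10 ** n - 1) // 9      # zeros: subtract 111...1
        return tally

    n = 4
    assert brute_force(1, 10 ** n - 1) == closed_form(n)
    print(closed_form(n))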

    Read the article

  • C# LINQ XML Query with duplicate element names that have attributes

    - by ncain187
<Party id="Party_1">
      <PartyTypeCode tc="1">Person</PartyTypeCode>
      <FullName>John Doe</FullName>
      <GovtID>123456789</GovtID>
      <GovtIDTC tc="1">Social Security Number US</GovtIDTC>
      <ResidenceState tc="35">New Jersey</ResidenceState>
      <Person>
        <FirstName>Frank</FirstName>
        <MiddleName>Roberts</MiddleName>
        <LastName>Madison</LastName>
        <Prefix>Dr.</Prefix>
        <Suffix>III</Suffix>
        <Gender tc="1">Male</Gender>
        <BirthDate>1974-01-01</BirthDate>
        <Age>35</Age>
        <Citizenship tc="1">United States of America</Citizenship>
      </Person>
      <Address>
        <AddressTypeCode tc="26">Bill Mailing</AddressTypeCode>
        <Line1>2400 Meadow Lane</Line1>
        <Line2></Line2>
        <Line3></Line3>
        <Line4></Line4>
        <City>Somerset</City>
        <AddressStateTC tc="35">New Jersey</AddressStateTC>
        <Zip>07457</Zip>
        <AddressCountryTC tc="1">United States of America</AddressCountryTC>
      </Address>
    </Party>
    <!-- *********************** -->
    <!-- Insured Information     -->
    <!-- *********************** -->
    <Party id="Party_2">
      <PartyTypeCode tc="1">Person</PartyTypeCode>
      <FullName>Dollie Robert Madison</FullName>
      <GovtID>123956239</GovtID>
      <GovtIDTC tc="1">Social Security Number US</GovtIDTC>
      <Person>
        <FirstName>Dollie</FirstName>
        <MiddleName>R</MiddleName>
        <LastName>Madison</LastName>
        <Suffix>III</Suffix>
        <Gender tc="2">Female</Gender>
        <BirthDate>1996-10-12</BirthDate>
        <Citizenship tc="1">United States of America</Citizenship>
      </Person>
      <!-- Insured Address -->
      <Address>
        <AddressTypeCode tc="26">Bill Mailing</AddressTypeCode>
        <Line1>2400 Meadow Lane</Line1>
        <City>Somerset</City>
        <AddressStateTC tc="35">New Jersey</AddressStateTC>
        <Zip>07457</Zip>
        <AddressCountryTC tc="1">United States of America</AddressCountryTC>
      </Address>
      <Risk>
        <!-- Disability Begin Effective Date -->
        <DisabilityEffectiveStartDate>2006-01-01</DisabilityEffectiveStartDate>
        <!-- Disability End Effective Date -->
        <DisabilityEffectiveStopDate>2008-01-01</DisabilityEffectiveStopDate>
      </Risk>
    </Party>
    <!-- ******************************* -->
    <!-- Company Information             -->
    <!-- ****************************** -->
    <Party id="Party_3">
      <PartyTypeCode tc="2">Organization</PartyTypeCode>
      <Organization>
        <DTCCMemberCode>1234</DTCCMemberCode>
      </Organization>
      <Carrier>
        <CarrierCode>105</CarrierCode>
      </Carrier>
    </Party>

Here is my code, which doesn't work because Party_3 doesn't contain FullName. I know that partyElements contains 3 parties if I only return the Name attribute. Is there a way to loop through each tag separately?

    var partyElements = from party in xmlDoc.Descendants("Party")
                        select new
                        {
                            Name = party.Attribute("id").Value,
                            PartyTypeCode = party.Element("PartyTypeCode").Value,
                            FullName = party.Element("FullName").Value,
                            GovtID = party.Element("GovtID").Value,
                        };
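The failure mode in miniature (shown here in Python with trimmed-down XML, not as the C# answer): an optional child element has to be read with a default rather than dereferenced directly, which is what the query above does with .Value on a missing FullName.

    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <Parties>
      <Party id="Party_1"><FullName>John Doe</FullName></Party>
      <Party id="Party_3"/>
    </Parties>
    """)

    for party in doc.iter("Party"):
        # findtext returns the default instead of raising when the child is absent
        name = party.findtext("FullName", default="(none)")
        print(party.get("id"), name)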

    Read the article

  • Windows XP Domain Logon takes between 40 - 60 minutes

    - by Bryan
Windows XP clients, fully patched, with Symantec Endpoint Protection 11 client
    Windows 2008 R2 domain
    Roaming profiles
    Folder Redirection applied to Documents, AppData & Desktop

I've enabled userenv logging, and logged on just after 17:00 last night. The user shell hadn't appeared at 17:45 when I left last night. When I arrived this morning, I checked the log file and found the following:

    USERENV(3f8.e7c) 17:02:18:296 LogExtSessionStatus: Successfully logged Extension Session data
    USERENV(654.a30) 17:04:09:468 ImpersonateUser: Failed to impersonate user with 5.
    USERENV(654.a30) 17:04:09:468 GetUserNameAndDomain Failed to impersonate user
    USERENV(654.a30) 17:04:09:468 GetUserDNSDomainName: Domain name is NT Authority. No DNS domain name available.
    USERENV(c8c.cb8) 17:04:09:781 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(cd0.cd4) 17:04:10:781 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(d08.c84) 17:07:09:609 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(cbc.cc0) 17:07:10:625 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\luall.exe
    USERENV(db0.db4) 17:07:10:781 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(e00.e0c) 17:07:11:062 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(e20.e34) 17:07:11:203 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(e40.e50) 17:07:11:406 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(efc.54c) 17:07:11:656 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(ccc.df0) 17:08:45:687 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(e24.e20) 17:08:45:937 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\luall.exe
    USERENV(ff0.ff4) 17:08:46:078 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(32c.cd0) 17:08:46:265 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(cc4.3d4) 17:08:46:406 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(434.4d0) 17:08:46:593 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(f2c.ac) 17:08:46:828 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(d60.d7c) 17:09:40:265 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(d94.d98) 17:09:40:531 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(bc4.3c4) 17:10:52:765 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(37c.90c) 17:10:52:984 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\luall.exe
    USERENV(580.540) 17:10:53:109 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(c18.c30) 17:10:53:312 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(c44.288) 17:10:53:468 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(a34.cf4) 17:10:53:656 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(d3c.d4c) 17:10:53:890 LibMain: Process Name: C:\Program Files\Symantec\LiveUpdate\LuCallbackProxy.exe
    USERENV(970.948) 17:15:09:468 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(150.9dc) 17:15:09:734 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(f90.cec) 17:20:38:718 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(d8c.d70) 17:20:38:984 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(9a0.fa0) 17:26:07:953 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(844.51c) 17:26:08:218 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(d00.9ac) 17:31:19:453 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(ad4.624) 17:31:19:718 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(654.694) 17:31:46:390 ImpersonateUser: Failed to impersonate user with 5.
    USERENV(654.694) 17:31:46:390 GetUserNameAndDomain Failed to impersonate user
    USERENV(654.694) 17:31:46:390 GetUserDNSDomainName: Domain name is NT Authority. No DNS domain name available.
    USERENV(af8.610) 17:36:48:625 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(aa4.dfc) 17:36:48:906 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(2dc.5c8) 17:42:17:812 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(f70.8ac) 17:42:18:078 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(d50.c30) 17:47:47:062 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(c2c.c3c) 17:47:47:328 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(ef0.4cc) 17:53:16:234 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(cd4.c84) 17:53:16:500 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE
    USERENV(828.8c4) 17:58:45:484 LibMain: Process Name: C:\Program Files\Symantec\Symantec Endpoint Protection\SescLU.exe
    USERENV(a24.b30) 17:58:45:750 LibMain: Process Name: C:\PROGRA~1\Symantec\LIVEUP~1\LUCOMS~1.EXE

I've seen posts suggesting that it may be Windows Desktop Search 3.01 that is causing this, so I've removed that. I've removed the policy, 'Always wait for the network at startup or logon', thinking that might have helped. I'm running out of ideas. Has anyone seen this before?
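One way to see where the 40-60 minutes actually go is to pull the stalls straight out of userenv.log; here is a small helper I would use (the file name, encoding, and the one-minute threshold are assumptions, and it ignores midnight wrap-around):

    import re
    from datetime import datetime, timedelta

    pattern = re.compile(r"(\d{2}:\d{2}:\d{2}):\d{3}")  # e.g. 17:02:18:296
    prev = None
    for line in open("userenv.log", encoding="latin-1"):
        m = pattern.search(line)
        if not m:
            continue
        t = datetime.strptime(m.group(1), "%H:%M:%S")
        if prev and t - prev > timedelta(minutes=1):
            # the entry *after* a long gap names the process that was waited on
            print(f"gap of {t - prev} before: {line.strip()}")
        prev = t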

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
I am at my wit's end with file server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution.

My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these are each the standby nodes of their clusters; the main nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-jeebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, SharePoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, "wait, what about a SAN or a NAS for the file servers?", well, too bad. What exists on the main machine is what I have to deal with.

I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI "drive", and FS02 will sit on the secondary machine with its own iSCSI "drive" there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will utterly duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has got a full copy of the data on its own iSCSI drive.

My problem occurs when I try to apply the file server role within the Failover Cluster Manager. Actually, it is even before that - it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will get online, but the other will remain offline because it does not have the correct "owner node". I mean, really -- WTF? Of course it doesn't have the right owner node, both drives are showing the same node name!! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not "online", I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!
I've got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, and the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn't care which machine fails - the storage contents are replicated equally on each machine. Am I even heading in the right direction?

    Read the article

  • Windows Server 2008 Task Scheduler Tasks Not Executing

    - by omatase
I've been having an intermittent problem for some time now with the Windows Task Scheduler that I can't work out. I use the task scheduler to run a C# app I've written that has various plugins used to ensure production systems are working. This task scheduler itself is actually a production system, so I have one simple task that executes every 8 minutes to notify an external monitoring system that the task scheduler is still up. If this external service fails to receive an "all-clear" at least once every 15 minutes (or so - I don't remember the exact number right now), it will message us that the monitoring system is down. In the past we've had intermittent "down" messages from time to time, and each time I've investigated the cause I was unable to find any problems. So this time I wanted to ask the Stack Overflow community, since it doesn't look like I'll have luck on my own.

This morning at 2:32 AM the task fired (exactly 8 minutes after the previous firing); however, the task didn't fire again until 3:28. There are no errors that I can see in the Event Viewer at this time. When I look at the Task Scheduler log there are no errors there either. Here is what the log looks like, though:

    Information 6/11/2011 3:28:56 AM 102 Task completed (2) d6cf2412-269e-48bf-9f40-4a863347baad
    Information 6/11/2011 3:28:56 AM 201 Action completed (2) d6cf2412-269e-48bf-9f40-4a863347baad
    Information 6/11/2011 3:28:55 AM 129 Created Task Process Info
    Information 6/11/2011 3:28:55 AM 200 Action started (1) d6cf2412-269e-48bf-9f40-4a863347baad
    Information 6/11/2011 3:28:55 AM 100 Task Started (1) d6cf2412-269e-48bf-9f40-4a863347baad
    Information 6/11/2011 3:28:55 AM 319 Task Engine received message to start task (1)
    Information 6/11/2011 3:28:55 AM 107 Task triggered on scheduler Info d6cf2412-269e-48bf-9f40-4a863347baad
    Information 6/11/2011 3:28:15 AM 102 Task completed (2) b91fe5ce-39ef-42fb-adbe-bd8be012c00a
    Information 6/11/2011 3:28:15 AM 201 Action completed (2) b91fe5ce-39ef-42fb-adbe-bd8be012c00a
    Information 6/11/2011 3:28:15 AM 102 Task completed (2) 556c07dc-2724-4a21-a97e-dc4abd56f94d
    Information 6/11/2011 3:28:15 AM 201 Action completed (2) 556c07dc-2724-4a21-a97e-dc4abd56f94d
    Information 6/11/2011 3:28:15 AM 102 Task completed (2) 79328289-f742-49dd-aa0d-c3d05db50895
    Information 6/11/2011 3:28:15 AM 201 Action completed (2) 79328289-f742-49dd-aa0d-c3d05db50895
    Information 6/11/2011 3:28:15 AM 102 Task completed (2) 19743755-47b6-4b98-9bec-052193be9496
    Information 6/11/2011 3:28:15 AM 201 Action completed (2) 19743755-47b6-4b98-9bec-052193be9496
    Information 6/11/2011 3:28:15 AM 102 Task completed (2) c165754f-e3e6-4176-a327-11f9c06c39a5
    Information 6/11/2011 3:28:15 AM 201 Action completed (2) c165754f-e3e6-4176-a327-11f9c06c39a5
    Information 6/11/2011 3:28:15 AM 102 Task completed (2) 0e62ad3e-1f6e-40c0-9155-19f0108dee22
    Information 6/11/2011 3:28:15 AM 201 Action completed (2) 0e62ad3e-1f6e-40c0-9155-19f0108dee22
    Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
    Information 6/11/2011 3:28:10 AM 200 Action started (1) 0e62ad3e-1f6e-40c0-9155-19f0108dee22
    Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
    Information 6/11/2011 3:28:10 AM 200 Action started (1) c165754f-e3e6-4176-a327-11f9c06c39a5
    Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
    Information 6/11/2011 3:28:10 AM 200 Action started (1) 19743755-47b6-4b98-9bec-052193be9496
    Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
    Information 6/11/2011 3:28:10 AM 200 Action started (1) 79328289-f742-49dd-aa0d-c3d05db50895
    Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
    Information 6/11/2011 3:28:10 AM 200 Action started (1) 556c07dc-2724-4a21-a97e-dc4abd56f94d
    Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
    Information 6/11/2011 3:28:10 AM 200 Action started (1) b91fe5ce-39ef-42fb-adbe-bd8be012c00a
    Information 6/11/2011 3:28:10 AM 100 Task Started (1) 0e62ad3e-1f6e-40c0-9155-19f0108dee22
    Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
    Information 6/11/2011 3:28:10 AM 100 Task Started (1) c165754f-e3e6-4176-a327-11f9c06c39a5
    Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
    Information 6/11/2011 3:28:10 AM 100 Task Started (1) 19743755-47b6-4b98-9bec-052193be9496
    Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
    Information 6/11/2011 3:28:10 AM 100 Task Started (1) 79328289-f742-49dd-aa0d-c3d05db50895
    Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
    Information 6/11/2011 3:28:10 AM 100 Task Started (1) 556c07dc-2724-4a21-a97e-dc4abd56f94d
    Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
    Information 6/11/2011 3:28:10 AM 100 Task Started (1) b91fe5ce-39ef-42fb-adbe-bd8be012c00a
    Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
    Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 0e62ad3e-1f6e-40c0-9155-19f0108dee22
    Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info c165754f-e3e6-4176-a327-11f9c06c39a5
    Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 19743755-47b6-4b98-9bec-052193be9496
    Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 79328289-f742-49dd-aa0d-c3d05db50895
    Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 556c07dc-2724-4a21-a97e-dc4abd56f94d
    Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info b91fe5ce-39ef-42fb-adbe-bd8be012c00a
    Information 6/11/2011 2:32:56 AM 102 Task completed (2) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5
    Information 6/11/2011 2:32:56 AM 201 Action completed (2) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5
    Information 6/11/2011 2:32:55 AM 129 Created Task Process Info
    Information 6/11/2011 2:32:55 AM 200 Action started (1) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5
    Information 6/11/2011 2:32:55 AM 100 Task Started (1) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5
    Information 6/11/2011 2:32:55 AM 319 Task Engine received message to start task (1)
    Information 6/11/2011 2:32:55 AM 107 Task triggered on scheduler Info 16e4f2c3-a340-410a-9c14-4bfe0861fdd5

Seems kind of strange. I also have two other C# apps that run and check something each hour, on the hour, using the task scheduler. If I look at the history for those, I can see that they didn't execute at 3 AM either! They all waited until 3:28 to start as well. If I look at "tasks completed" in the Event Viewer, it shows that only one task was able to run between the 2:32 AM and 3:28 AM time period.
The task was "\Microsoft\Windows\RAC\RACAgent", and here's what it looked like:

    Information 6/11/2011 3:18:09 AM 102 Task completed (2) 00c53a85-ba20-4666-80db-fbbe2492c0ad
    Information 6/11/2011 3:18:09 AM 201 Action completed (2) 00c53a85-ba20-4666-80db-fbbe2492c0ad
    Information 6/11/2011 3:18:08 AM 129 Created Task Process Info
    Information 6/11/2011 3:18:08 AM 200 Action started (1) 00c53a85-ba20-4666-80db-fbbe2492c0ad
    Information 6/11/2011 3:18:08 AM 100 Task Started (1) 00c53a85-ba20-4666-80db-fbbe2492c0ad
    Information 6/11/2011 3:18:08 AM 319 Task Engine received message to start task (1)

I appreciate any ideas anyone may have.

    Read the article

  • Mysql performance problem & Failed DIMM

    - by murdoch
    Hi, I have a dedicated MySQL database server which has been having some performance problems recently. Under normal load the server runs fine; then suddenly, out of the blue, performance falls off a cliff. The server isn't touching the swap file, and there is 12GB of RAM in the machine, more than enough for its needs. After contacting my hosting company's support, they discovered a failed 2GB DIMM in the server and have scheduled to replace it tomorrow morning.

    My question is: could a failed DIMM cause the performance problems I am seeing, or is this just coincidence? My worry is that they will replace the RAM tomorrow but the problems will persist, and I will still be at a loss for an explanation, so I am trying to think ahead. The reason I ask is that there is plenty of RAM in the server, more than required, so simply losing 2GB shouldn't be a problem by itself. If this failed DIMM is causing these performance problems, then the OS must be trying to access the failed DIMM and slowing down as a result. Does that sound like a credible explanation?

    This is what Dell's omreport program says about the RAM; notice one DIMM is "Critical":

        Memory Information
        Health : Critical

        Memory Operating Mode
        Fail Over State : Inactive
        Memory Operating Mode Configuration : Optimizer

        Attributes of Memory Array(s)
        Attributes : Location
        Memory Array 1 : System Board or Motherboard
        Attributes : Use
        Memory Array 1 : System Memory
        Attributes : Installed Capacity
        Memory Array 1 : 12288 MB
        Attributes : Maximum Capacity
        Memory Array 1 : 196608 MB
        Attributes : Slots Available
        Memory Array 1 : 18
        Attributes : Slots Used
        Memory Array 1 : 6
        Attributes : ECC Type
        Memory Array 1 : Multibit ECC

        Total of Memory Array(s)
        Attributes : Total Installed Capacity
        Value : 12288 MB
        Attributes : Total Installed Capacity Available to the OS
        Value : 12004 MB
        Attributes : Total Maximum Capacity
        Value : 196608 MB

        Details of Memory Array 1
        Index : 0
        Status : Ok
        Connector Name : DIMM_A1
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 1
        Status : Ok
        Connector Name : DIMM_A2
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 2
        Status : Ok
        Connector Name : DIMM_A3
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 3
        Status : Critical
        Connector Name : DIMM_B1
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 4
        Status : Ok
        Connector Name : DIMM_B2
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 5
        Status : Ok
        Connector Name : DIMM_B3
        Type : DDR3-Registered
        Size : 2048 MB

    The command free -m shows this; the server seems to be using more than 10GB of RAM, which would suggest it is trying to use the failed DIMM:

                     total       used       free     shared    buffers     cached
        Mem:         12004      10766       1238          0        384       4809
        -/+ buffers/cache:       5572       6432
        Swap:         2047          0       2047

    iostat output while the problem is occurring:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  52.82    0.00   11.01    0.00    0.00   36.17

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              47.00         0.00       576.00          0        576
        sda1              0.00         0.00         0.00          0          0
        sda2              1.00         0.00        32.00          0         32
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             46.00         0.00       544.00          0        544

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  53.12    0.00    7.81    0.00    0.00   39.06

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              49.00         0.00       592.00          0        592
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             49.00         0.00       592.00          0        592

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  56.09    0.00    7.43    0.37    0.00   36.10

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda             232.00         0.00     64520.00          0      64520
        sda1              0.00         0.00         0.00          0          0
        sda2            159.00         0.00     63728.00          0      63728
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             73.00         0.00       792.00          0        792

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  52.18    0.00    9.24    0.06    0.00   38.51

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              49.00         0.00       600.00          0        600
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             49.00         0.00       600.00          0        600

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  54.82    0.00    8.64    0.00    0.00   36.55

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda             100.00         0.00      2168.00          0       2168
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5            100.00         0.00      2168.00          0       2168

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  54.78    0.00    6.75    0.00    0.00   38.48

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              84.00         0.00       896.00          0        896
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             84.00         0.00       896.00          0        896

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  54.34    0.00    7.31    0.00    0.00   38.35

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              81.00         0.00       840.00          0        840
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             81.00         0.00       840.00          0        840

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  55.18    0.00    5.81    0.44    0.00   38.58

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda             317.00         0.00    105632.00          0     105632
        sda1              0.00         0.00         0.00          0          0
        sda2            224.00         0.00    104672.00          0     104672
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             93.00         0.00       960.00          0        960

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  55.38    0.00    7.63    0.00    0.00   36.98

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              74.00         0.00       800.00          0        800
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             74.00         0.00       800.00          0        800

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  56.43    0.00    7.80    0.00    0.00   35.77

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              72.00         0.00       784.00          0        784
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             72.00         0.00       784.00          0        784

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  54.87    0.00    6.49    0.00    0.00   38.64

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              80.20         0.00       855.45          0        864
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             80.20         0.00       855.45          0        864

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  57.22    0.00    5.69    0.00    0.00   37.09

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              33.00         0.00       432.00          0        432
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             33.00         0.00       432.00          0        432

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  56.03    0.00    7.93    0.00    0.00   36.04

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              41.00         0.00       560.00          0        560
        sda1              0.00         0.00         0.00          0          0
        sda2              2.00         0.00        88.00          0         88
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             39.00         0.00       472.00          0        472

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  55.78    0.00    5.13    0.00    0.00   39.09

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              29.00         0.00       392.00          0        392
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             29.00         0.00       392.00          0        392

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  53.68    0.00    8.30    0.06    0.00   37.95

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              78.00         0.00      4280.00          0       4280
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             78.00         0.00      4280.00          0       4280
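    Update: In case it helps with thinking ahead, here is a minimal check I plan to run before the DIMM is swapped, assuming a Linux kernel with the EDAC modules loaded (the exact sysfs paths vary between kernel versions, so treat this as a sketch rather than something guaranteed to exist on every box):

        # Per-row corrected/uncorrected ECC error counters from the EDAC driver
        grep . /sys/devices/system/edac/mc/mc*/csrow*/ce_count
        grep . /sys/devices/system/edac/mc/mc*/csrow*/ue_count

        # Kernel log entries from the EDAC driver or the machine-check handler
        dmesg | grep -iE 'edac|mce|machine check'

        # Dell OMSA's view of the same DIMMs, for cross-checking
        omreport chassis memory

    A corrected-error count that keeps climbing on one row would at least be consistent with the theory that the OS is still touching the failed DIMM and paying an error-correction penalty for it.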

    Read the article

  • How does one get rid of fishy behavior in Windows?

    - by Tom Wijsman
    After I booted my computer this morning, water suddenly flooded in from the top of the screen, after which some fish dropped into it. Now I can barely see what I am doing because the water distorts the view. Sometimes the fish follow the cursor, so I need to move it away or wait for the fish to mind their own business. This makes the system very annoying to use.

    What have I tried?

    - Rebooting the system. This drained the water from the desktop, but upon reboot the screen was refilled with water and fish.
    - Attaching another monitor. Same problem; it fills that monitor as well and gives me extra fish.
    - Clicking the fish. Makes them turn direction.
    - Right-clicking the fish. Changes the color of the fish; not really useful.
    - I'm locked out of changing the background or screen saver settings. Hence, I had to post the lady below...
    - Safe mode doesn't save me from the fish. It does give me another background there, but I can't screenshot easily.
    - Other user accounts experience this as well; the Guest account seems to experience more fish than the other accounts.
    - Using HijackThis, OTL Timekeeper List, Sysinternals Autoruns, RootkitRevealer, ShellExView and similar tools, I can't seem to find any entries that could be causing it; the Sysinternals tools show everything as verified.
    - I suspect this to be a driver problem, but randomly removing drivers doesn't seem to alleviate it. Removing the graphics drivers makes my screen black; while that could be considered a solution, it's not what I want.
    - Changing the time/date settings does not seem to affect the fish. Setting the clock a few years into the future, I would have expected the fish to be dead, but the same fish are still there... They simply won't die!
    - Tried to get used to them. They really bother me; it looks like they require food. I don't know how to give them food, but apparently they get it elsewhere during reboot...
    - Disabled my mouse pointer and used the keyboard. This works; they now swim around more randomly. They do pay attention to big changes on the screen, though, so I need to type slowly, or otherwise I can't see exactly what I'm typing.
    - Held my laptop upside down. This seems to affect the water and the fish, but the water stays in the screen. They seem super resistant to water sickness and confusion, though...

    What does the problem look like?

    What do I need? A way to get rid of these fish on my screen forever; they are really annoying me a lot, and I'm about to crack the screen to see if that makes them escape. Do you have any idea why this problem is occurring?

    What are my considerations? Buying a USB fish tank could make the fish leave the screen; I am uncertain, though, whether the fish could leave the screen through the USB cable. Using the FISh (programming language), which seems to provide EXPRESSIVE POWER and EFFICIENT EXECUTION, I can however not find any examples of how to remove fish.

    What are my specifications? I'm using a Sony VAIO Fishy laptop, Sony VAIO VGN-Fishy:

    - Processor: 1337 MHz, Intel Core 2 Duo T5432, 1 MB, Intel PM965 Express, 667 MHz.
    - Memory: 1024 MB, DDR2-SDRAM, 667 MHz, 2 x 1024 MB, 4 GB.
    - Disk Drive: 50 GB, Serial ATA, 5400 RPM.
    - Storage Media: Memory Stick™, Memory Stick PRO™.
    - Display: 15.4", 1280 x 800 pixels, LCD.
    - Video: GeForce 8400M GT, 128 MB.
    - Optical Drive: DVD±R/RW DL, 24x, 24x, 24x, 6x, 4x, 6x, 4x, 5x, 5x, 8x, 8x, 8x, 8x, 6x, 6x, 24x, 24x, 24x, 16x.
    - Camera: 1.3 MP, 30 fps.
    - Networking: 2.0+EDR.
    - Keyboard: Touchpad, AZERTY.
    - Operating System/Software: Windows Vista Home Premium.
    - Security: Kensington.
    - Weight & Dimensions: 98.8 oz (2800 g), 14" (355.8 mm), 10" (254.4 mm), 0.98" (24.9 mm).
    - Other features: 100 BASE-TX/10 BASE-T, 802.11a/b/g/n/Draft n, V92/V.90, fishes.

    Plz! Help me...

    Read the article

  • Does an ESEUTIL defrag of an Exchange store also perform an integrity check/repair on it?

    - by Bigbio2002
    Earlier this morning, store.exe fuzzled up in one way or another, which necessitated a restart of our Exchange server. It came back online with no errors or problems: all the transaction logs replayed successfully, and all the stores mounted as normal. To me, it was just one of those random crashes; however, our consultant suspects it was caused by corruption in one of the stores. Perhaps he's correct, since he has far more experience than me, but that's not the point. To fix the suspected errors, he's planning to run an ESEUTIL defrag (via PerfectDisk), which he claims will also fix any errors present. From what I understand, defrag, verify, and repair are three separate actions, and a defrag does not imply any kind of integrity check. Is this correct? Are there any dangers in running a straight-up defrag on a database that might be corrupt?

    Edit: Here's the first error in the event log, which marked the start of the problems we were having. Does anyone know what it might indicate?

        Event Type:     Error
        Event Source:   Microsoft Exchange Server
        Event Category: None
        Event ID:       1000
        Date:           11/23/2011
        Time:           8:15:47 AM
        User:           N/A
        Computer:       SERVER
        Description:
        Faulting application exsp.dll, version 6.5.7638.1, stamp 430e735b, faulting module kernel32.dll, version 5.2.3790.4480, stamp 49c51f0a, debug? 0, fault address 0x0000bef7.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 41 00 70 00 70 00 6c 00   A.p.p.l.
        0008: 69 00 63 00 61 00 74 00   i.c.a.t.
        0010: 69 00 6f 00 6e 00 20 00   i.o.n. .
        0018: 46 00 61 00 69 00 6c 00   F.a.i.l.
        0020: 75 00 72 00 65 00 20 00   u.r.e. .
        0028: 20 00 65 00 78 00 73 00    .e.x.s.
        0030: 70 00 2e 00 64 00 6c 00   p...d.l.
        0038: 6c 00 20 00 36 00 2e 00   l. .6...
        0040: 35 00 2e 00 37 00 36 00   5...7.6.
        0048: 33 00 38 00 2e 00 31 00   3.8...1.
        0050: 20 00 34 00 33 00 30 00    .4.3.0.
        0058: 65 00 37 00 33 00 35 00   e.7.3.5.
        0060: 62 00 20 00 69 00 6e 00   b. .i.n.
        0068: 20 00 6b 00 65 00 72 00    .k.e.r.
        0070: 6e 00 65 00 6c 00 33 00   n.e.l.3.
        0078: 32 00 2e 00 64 00 6c 00   2...d.l.
        0080: 6c 00 20 00 35 00 2e 00   l. .5...
        0088: 32 00 2e 00 33 00 37 00   2...3.7.
        0090: 39 00 30 00 2e 00 34 00   9.0...4.
        0098: 34 00 38 00 30 00 20 00   4.8.0. .
        00a0: 34 00 39 00 63 00 35 00   4.9.c.5.
        00a8: 31 00 66 00 30 00 61 00   1.f.0.a.
        00b0: 20 00 66 00 44 00 65 00    .f.D.e.
        00b8: 62 00 75 00 67 00 20 00   b.u.g. .
        00c0: 30 00 20 00 61 00 74 00   0. .a.t.
        00c8: 20 00 6f 00 66 00 66 00    .o.f.f.
        00d0: 73 00 65 00 74 00 20 00   s.e.t. .
        00d8: 30 00 30 00 30 00 30 00   0.0.0.0.
        00e0: 62 00 65 00 66 00 37 00   b.e.f.7.
        00e8: 0d 00 0a 00               ....
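    Edit 2: For reference while I research this, the three operations do seem to map to separate ESEUTIL switches, so a defragmentation pass on its own would not verify or repair anything. A rough sketch of the distinct commands, run against a dismounted store (the database path and server name below are placeholders, not our real ones):

        rem Offline defragmentation only - rebuilds the .edb into a new file:
        eseutil /d "D:\Exchsrvr\MDBDATA\priv1.edb"

        rem Read-only integrity check:
        eseutil /g "D:\Exchsrvr\MDBDATA\priv1.edb"

        rem Hard repair - last resort, may discard unreadable pages:
        eseutil /p "D:\Exchsrvr\MDBDATA\priv1.edb"

        rem Application-level consistency check/fix after a hard repair:
        isinteg -s SERVER -fix -test alltests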

    Read the article

< Previous Page | 97 98 99 100 101 102 103 104  | Next Page >