Search Results

Search found 3683 results on 148 pages for 'lucky luck'.

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically into the bin or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. 
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (we switched to Helvetica font to look more ‘Bauhaus’ ). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro  and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible.   The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus.  The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer.  I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, are of one mind on methodology and coding standards, and they already have a solution, then what is wrong with considering  a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays.   If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again.  There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’,  where one hires a malicious hacker in some wild eastern country to hack into one’s own  source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? 
Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source-code.

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .Net web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added on a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution, developers today have tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have, or are in the process of, migrating existing applications out of their own data centers into the Azure cloud. Whether new application development or migrating legacy, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options. Azure Diagnostics Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high level steps are:   Step 1: Import the Diagnostics Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy. Step 2: Configure the Diagnostics (and multiple sub-steps) Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter, I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume it myself? Step 3: (Optional) Permanently store diagnostic data Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well. Step 4: (Optional) View stored diagnostic data Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based and they all just give you access to raw data, and very little charting or real-time intelligence. Just….. data. Nevermind that one product seems to have gotten stale since a recent acquisition, and doesn’t even have screenshots!   So, let’s summarize: lots of diagnostics data is available, but think realistically. Think Dev Ops. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is this spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn’t tell you anything about where the problem is. The Better Way – Stackify Stackify supports application and server monitoring in real time, all through a great web interface. 
All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises. It’s a Windows Server (or Linux) in the cloud. It’s behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps. Step 1 Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a Powershell script to download / install Stackify.   Step 2 Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want and see the results instantly. WMI? It’s there Event Viewer? You’ve got it. File System Access? Yes, please! Would love to make sure my web.config is correct.   IIS / App Pool Info? Yep. You can even restart it. Running Services? All of them. Start and Stop them to your heart’s content. SQL Database access? You bet’cha. Alerts and Notification? Of course! You should know before your customers let you know. … and so much more.   Conclusion Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times where we need to get data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then have to find a friendlier way to consume it…. well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications, than to be writing code to monitor and diagnose.

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
    The W8 preview is now installed and I am enjoying it.  I remember the learning curve of my first unix machine back in the eighties, this ain't that.It is normal for me to do the first os install with a keyboard and low end monitor...you never know what you'll encounter out in the field.  The OS took like a fish to water.  I used a low end INTEL motherboard dp55w I gathered on the cheap, an 1157 i5 from the used bin a pair of 6 gig ddr3 sticks, a rosewell 550 watt power supply a cheap used twenty buck sub 200g wd sata drive, a half working dvd burner and an asus fanless nvidia vid card, not a great one but Sub 50.00 on newey eggey...I did have to hunt the ms forums for a key and of course to activate the thing, if dos would of needed this outmoded ritual, we would still be on cpm and osborne would be a household name, of course little do people know that this ritual was common as far back as the seventies on att unix installs....not, but it was possible, I used to joke about when I ran a bbs, what hell would of been wrought had dos 3.2 machines been required to dial into my bbs to send fido mail to ms and wait for an acknowledgement.  All in all the thing was pushing a seven on the ms richter scale, not including the vid card, sadly it came in at just a tad over three....I wanted to evaluate it for a possible replacement on critical machines that in the past went down due to a vid card fan failure....you have no idea what a customer thinks when you show them a failed vid card fan..."you mean that little plastic piece of junk caused all this!!??!!!"...yea man.  Some production machines don't need any sort of vid, I will at least keep it on the maybe list for those, MTBF is a very important factor, some big box stores should put percentage of failure rate within 24 month estimates on the outside of the carton for sure.  And a warning that the power supplies are already at their limit.  Let's face it, today even 550w can be iffy.A few neat eye candy improvements over the earlier windows is nice, the metro screen is nice, anyone who has used a newer phone recently will intuitively drag their fingers across the screen....lot of good that was with no mouse or touch screen though.  Lucky me, I have been using windows since day one, I still have a copy of win 2.0 (and every other version) for no good reason.  Still the old ix collection of disks is much larger, recompiling any kernal is another silly ritual, same machine, different day, same recompile...argh. Rh is my all time fav, mandrake was always missing something, like it rewrote the init file or something, novell is ok as long as you stay on the beaten path and of course ubuntu normally recompiles with the same errors consistantly....makes life easy that way....no errors on windows eight, just a screen that did not match the installed hardware, natuarally I alt tabbed right out of it, then hit the flag key to find the start menu....no start button. I miss the start button already. Keyboard cowboy funnin and I was browsing the harddrive, nothing stunning there, I like that, means I can find stuff. Only I can't find what I want, the start button....the start menu is that first screen for touch tablets. No biggie for useruser, that is where they will want to be, I can see that. Admins won't want to be there, it is easy enough to get the control panel a bazzilion other ways though, just not the start button. (see a pattern here?). 
Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts. Bottom line, I love seven and I'll love eight even more!...very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions said "oh I get it, so every piece you put in there is basically a hundred bucks, right?)...ok, sure, pretty much, more or less, well, ya dude.  It will be WAY past october till I get a real touch screen but I did pick up a pair of cheap tatungs so I can try the NEW main start screen, I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on mars.  Ok.  fine, they are way smallish, and I don't expect multitouch to work but we are talking a few percent of a new 21 inch viewsonic touch screen.  Will this OS be a game changer?  I don't know.  Bottom line with all the pads and droids in the world, it is more of a catch up move at first glance.  Not something ms is used to.  An app store?  I can see ms's motivation, the others have it.  I gather there will not be gadgets there, go ahead and see what ms did  to the once populated gadget page...go ahead, google gadgets and take a gander, used to hundreds of gadgets, they are already gone.  They replaced gadgets?  sort of, I'll drop that, it's a bit of a sore point for me.  More of interest was what happened when I downloaded stuff off codeplex and some other normal programs that I like, like orbitron, top o' my list!!...cardware it is...anyways, click on the exe, get a screen, normal for windows, this one indicated that I was not running a normal windows program and had a button for  exit the install, naw, I hit details, a hidden run program anyways came into view....great, my path to the normal windows has detected a program tha.....yea ok, acl is on, fine, moving along I got orbitron installed in record time and was tracking the iss on the newest Microsoft OS, beta of course, felt like the first time I setup bsd all those year ago...FUN!!...I suppose I gotta start to think about budgeting for the real os when it comes out in october, by then I should have a rasberry pi and be done with fedora remixed.  Of course that sounds like fun too!!  I would use this OS on a tablet or phone.  I don't like the idea of being hearded to an app store, don't like that on anything, we are americans and want real choices not marketed hype, lest you are younger with opm (other peoples money).   This os would be neat on a zune, but I suspect the zune is a gonner, I am rooting for microsoft, after all their default password is not admin anymore, nor alpine,  it's blank. Others force a password, my first fawn password was so long I could not even log into it with the password in front of me, who the heck uses %$# anyways, and if I was writing a brute force attack what the heck kinda impasse is that anyways at .00001 microseconds of a code execution cycle (just a non qualified number, not a real clock speed)....AI is where it will be before too long, MS is on that path, perhaps soon someone will sit down and write an app for the kinect that watches your eyes while you scan the new main start screen, clicking on the big E icon when you blink.....boy is that going to be fun!!!! sure. Blink,dammit,blink,dammit...... OPM no doubt.I like windows eight, we are moving forwards, better keep a close eye on ubuntu.  
The real clinch comes when open source becomes paid source......don't blink, I already see plenty of very expensive 'ix apps, some even in app stores already.  more to come.......

    Read the article

  • PASS Summit – looking back on my first time

    - by Fatherjack
      So I was lucky enough to get my first experience of PASS Summit this year and took some time beforehand to read some blogs and reference material to get an idea on what to do and how to get the best out of my visit. Having been to other conferences – technical and non-technical – I had a reasonable idea on the routine and what to expect in general. Here is a list of a few things that I have learned/remembered as the week has gone by. Wear comfortable shoes. This actually needs to be broadened to Take several pairs of comfortable shoes. You will be spending many many hours, for several days one after another. Having comfortable feet that can literally support you for the duration will make the week in general a whole lot better. Not only at the conference but getting to and from you could well be walking. In the evenings you will be walking around town and standing talking in various bars and clubs. Looking back, on some days I was on my feet for over 20 hours. Make friends. This is a given for the long term benefits it brings but there is also an immediate reward in being at a conference with a friend or two. Some events are bigger and more popular than others and some have the type of session that every single attendee will want to be in. This is great for those that get in but if you are in the bathroom or queuing for coffee and you miss out it sucks. Having a friend that can get in to a room and reserve you a seat is a great advantage to make sure you get the content that you want to see and still have the coffee that you need. Don’t go to every session you want to see This might sound counter intuitive and it relies on the sessions being recorded in some way to guarantee you don’t totally miss out. Both PASS Summit and SQL Bits sessions are recorded (summit is audio, SQLBits is video) and this means that if you get into a good conversation with someone over a coffee you don’t have to break it up to go to a session. Obviously there is a trade-off here and you need to decide on the tipping point for yourself but a conversation at a place like this could make a big difference to the next contract or employer you have or it might simply be great catching up with some friends you don’t see so often. Go to at least one session you don’t want to Again, this will seem to be contrary to normal logic but there is no reason why you shouldn’t learn about a part of SQL Server that isn’t part of your daily routine. Not only will you learn something new but you will also pick up on the feelings and attitudes of the people in the session. So, if you are a DBA, head off to a BI session and so on. You’ll hear BI speakers speaking to a BI audience and get to understand their point of view and reasoning for making the decisions they do. You will also appreciate the way that your decisions and instructions affect the way they have to work. This will help you a lot when you are on a project, working with multiple teams and make you all more productive. Socialise While you are at the conference venue, speak to people. Ask questions, be interested in whoever you are speaking to. You get chances to talk to new friends at breakfast, dinner and every break between sessions. The only people that might not talk to you would be speakers that are about to go and give a session, in most cases speakers like peace and quiet before going on stage. Other than that the people around you are just waiting for someone to talk to them so make the first move. 
There is a whole lot going on outside of the conference hours and you should make an effort to join in with some of this too. At karaoke evenings or just out for a quiet drink with a few of the people you meet at the conference. Either way, don’t be a recluse and hide in your room or be alone out in the town. Don’t talk to people Once again this sounds wrong but stay with me. I have spoken to a number of speakers since Summit 2013 finished and they have all mentioned the time it has taken them to move about the conference venue due to people stopping them for a chat or to ask a question. 45 minutes to walk from a session room to the speaker room in one case. Wow. While none of the speakers were upset about this sort of delay I think delegates should take the situation into account and possibly defer their question to an email or to a time when the person they want is clearly less in demand. Give them a chance to enjoy the conference in the same way that you are, they may actually want to go to a session or just have a rest after giving their session – talking for 75 minutes is hard work, taking an extra 45 minutes right after is unbelievable. I certainly hope that they get good feedback on their sessions and perhaps if you spoke to a speaker outside a session you can give them a mention in the ‘any other comments’ part of the feedback, just to convey your gratitude for them giving up their time and expertise for free. Say thank you I just mentioned giving the speakers a clear, visible ‘thank you’ in the feedback but there are plenty of people that help make any conference the success it is that would really appreciate hearing that their efforts are valued. People on the registration desk, volunteers giving schedule guidance and directions, people on the community zone are all volunteers giving their time to help you have the best experience possible. Send an email to PASS and convey your thoughts about the work that was done. Maybe you want to be a volunteer next time so you could enquire how you get into that position at the same time. This isn’t an exclusive list and you may agree or disagree with the points I have made, please add anything you think is good advice in the comments. I’d like to finish by saying a huge thank you to all the people involved in planning, facilitating and executing the PASS Summit 2013, it was an excellent event and I know many others think it was a totally worthwhile event to attend.

    Read the article

  • Alcatel-Lucent: Enterprise 2.0: The Top 5 Things I would Do Over

    - by Kellsey Ruppel
    Happy Monday! Does anyone else feel as if the weekend went entirely too quickly? At least for those of us in the United States, we have the 4th of July Holiday next week to look forward to This week on the blog, we are going to focus on "WebCenter by Example" and highlight best practices from customers and partners. I recently came across this article and I think this is a great example of how we can learn from one another when it comes to social collaboration adoption. Do you agree with Jem? What things or best practices have you learned in your organizations?  By Jem Janik, Enterprise community manager, Alcatel-Lucent  Not so long ago, Engage, the Alcatel-Lucent employee social network and collaboration platform, celebrated its third birthday. With more than 25,000 members actively interacting each month, Engage has been a big enough success that it’s been the subject of external articles, and often those of us who helped launch it will go out and speak about what aspects contributed to that success. Hindsight is still 20/20 and what it takes to successfully launch an enterprise 2.0 community is fairly well-known now.  Today I want to tell you what I suspect you really want to know about.  As the enterprise community manager for Engage, after three years in, what are the top 5 things I wish we (and I mostly mean me) could do over? #5 Define your analytics solution from the start There is so much to do when you launch a community and initially growing it without complete chaos is quite a task.  It doesn’t take too long to get to a point where you want to focus your continued efforts in growing company collaboration.  Do people truly talk across regional boundaries or have we shifted siloed conversations to a new platform.  Is there one organization that doesn’t interact with another? If you are lucky you’ll have someone in your community team well versed in the world of databases and SQL queries, but it takes time to figure out what backend analytics data actually means. Professional support can be expensive and it may be hard to justify later as it typically has the community manager as the only main customer.  Figure out what you think you’ll want to know and how to get it early on. The sooner the better even if it doesn’t seem that critical at the time. #4 Lobbies guide you to the right places One piece of feedback that comes up more and more as we keep growing Engage is it’s hard to find stuff, or new people are not sure where to start. Something we’re doing now is defining some general topic areas of interest to be like “lobbies” into the platform and some common hashtags to go with them. I liken this to walking into a large medical or professional building for the first time.  There are hundreds of offices, and you look to a sign in the lobby to get guided to the right place for you.  We’re building that sign for members now, but again we missed the boat as the majority of the company has had their initial Engage experience. #3 Clean up, clean up, clean up Knowledge work and folksonomies are messy! The day we opened the doors to Engage I would have said we should keep everything ever created in Engage with an argument that it was a window into our collective knowledge so nothing should go.  Well, 6000+ groups and 200,000+ pieces of content later, I’ve changed my mind.  As previously mentioned, with too much “stuff” the system can be overwhelming to new members and it makes it harder to get what you’re looking for.   Do we need that help document about a tool we no longer have? 
NO!  Do we need that group that had 1 document and 2 discussions in the last two years? NO! Should we only have one group about a given topic instead of 4?  YES! Last fall, Engage defined a cleanup process for groups not used for a long time.  We also formed a volunteer cleaning army who are extra eyes on the hunt for “stuff” that should be updated, merged, or deleted.  It’s better late than never, but in line with what’s becoming a theme I wish these efforts had started earlier. #2 Communications & local community management One of the most important aspects of my job is to make sure people who should be talking to each other are actually doing it.  Connecting people to the other people they should know, the groups they should join, a piece of content that shouldn’t be missed.   I have worked both inside and outside of communications teams, and they are the best informed people in your company.  They know when something big is coming, how it impacts employees, how it fits with strategy, who else knows more, etc.  Having communications professionals who are power users can help scale up community management because they are already so well connected.  They also need to have the platform skills to pay attention without suffering email overload, how to grab someone’s attention, etc.  I wish I’d had figured this out much earlier.  If I had I would have groomed more communications colleagues into advocates and power members right at the start. #1 Grooming advocates vs. natural advocates I’ve just alluded to this above already. The very best advocates are those who naturally embrace your platform and automatically start to see new ways to work within it.  Those advocates seem to come out of the woodwork naturally since some of them are early adopters.  Not surprisingly, our best advocates today are those same people who were willing to come kick the tires when the community was completely empty.  Unfortunately, we didn’t get a global spread of those natural advocates.  I did ask around when we first launched for other people who might be good candidates, but didn’t push too hard as there were so many other things to get ready.  That was a mistake.  If I could get a redo I would have formally asked for people to be assigned where there were gaps and groomed them into an advocate.  Today as we find new advocates to fill the gaps, people are hesitant as the initial set has three years of practice are ahead of the curve power members; it definitely would have been easier earlier on. As fairly early adopters to corporate scale enterprise collaboration, there hasn’t been a roadmap to follow as we’ve grown Engage, which is part of the fun! It’s clear a lot of issues are more easily tackled the earlier you identify and begin to correct them, and I’ve identified the main five I wish I could redo.  In the spirit of collaboration, I hope someone else learns from my mistakes! View the original article by Jem here. 

    Read the article

  • OpenSSL: certificate signature failure error

    - by e-t172
    I'm trying to wget La Banque Postale's website. $ wget https://www.labanquepostale.fr/ --2009-10-08 17:25:03-- https://www.labanquepostale.fr/ Resolving www.labanquepostale.fr... 81.252.54.6 Connecting to www.labanquepostale.fr|81.252.54.6|:443... connected. ERROR: cannot verify www.labanquepostale.fr's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA': certificate signature failure To connect to www.labanquepostale.fr insecurely, use `--no-check-certificate'. Unable to establish SSL connection. I'm using Debian Sid. On another machine which is running Debian Sid with same software versions the command works perfectly. ca-certificates is installed on both machines (I tried removing it and reinstalling it in case a certificate got corrupted somehow, no luck). Opening https://www.labanquepostale.fr/ in Iceweasel on the same machine works perfectly. Additional information: $ openssl s_client -CApath /etc/ssl/certs -connect www.labanquepostale.fr:443 CONNECTED(00000003) depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority verify error:num=7:certificate signature failure verify return:0 --- Certificate chain 0 s:/1.3.6.1.4.1.311.60.2.1.3=FR/2.5.4.15=V1.0, Clause 5.(b)/serialNumber=421100645/C=FR/postalCode=75006/ST=PARIS/L=PARIS/streetAddress=115 RUE DE SEVRES/O=LA BANQUE POSTALE/OU=DISF2/CN=www.labanquepostale.fr i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority 3 s:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- <base64-encoded certificate removed for lisibility> -----END CERTIFICATE----- subject=/1.3.6.1.4.1.311.60.2.1.3=FR/2.5.4.15=V1.0, Clause 5.(b)/serialNumber=421100645 /C=FR/postalCode=75006/ST=PARIS/L=PARIS/streetAddress=115 RUE DE SEVRES/O=LA BANQUE POSTALE/OU=DISF2/CN=www.labanquepostale.fr issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA --- No client certificate CA names sent --- SSL handshake has read 5101 bytes and written 300 bytes --- New, TLSv1/SSLv3, Cipher is RC4-MD5 Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : RC4-MD5 Session-ID: 0009008CB3ADA9A37CE45B464E989C82AD0793D7585858584ACE056700035363 Session-ID-ctx: Master-Key: 1FB7DAD98B6738BEA7A3B8791B9645334F9C760837D95E3403C108058A3A477683AE74D603152F6E4BFEB6ACA48BC2C3 Key-Arg : None Start Time: 1255015783 Timeout : 300 (sec) Verify return code: 7 (certificate signature failure) --- Any idea why I get certificate signature failure? 
As if this wasn't strange enough, copy-pasting the "server certificate" mentioned in the output and running openssl verify on it returns OK...
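    One way to narrow this down, since the failure is reported at depth=3 (the self-signed VeriSign Class 3 Public Primary root): check how the locally installed copy of that root is signed, and whether the two machines really have identical trust stores. A common cause at the time was an MD2-signed copy of that root combined with an OpenSSL build that has MD2 disabled. A rough sketch, assuming Debian's default certificate layout:

        # how is the "Class 3 Public Primary" root in the local store signed?
        for f in /etc/ssl/certs/*.pem; do
            if openssl x509 -in "$f" -noout -subject | grep -q "Class 3 Public Primary"; then
                echo "$f"
                openssl x509 -in "$f" -noout -text | grep -m1 "Signature Algorithm"
            fi
        done

        # compare the trust store and package versions with the machine where wget works
        md5sum /etc/ssl/certs/ca-certificates.crt
        dpkg -l ca-certificates openssl

    Run the last two commands on both hosts and compare the output; a differing bundle checksum or package version would explain why an otherwise identical setup verifies the chain on one machine and not the other.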

    Read the article

  • Indesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign Pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages automatically generated. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me. And Google searches haven't returned much fruitful. Edit: I'm updating this to now include my present steps 1) I create a Master Page of my template 2) I add a bunch of text frames where I want the imported data from the XML file to be places 3) I open the "Tags" window and Import and XML file 4) I mark my text frames in the Master Document with the appropriate tags 5) I then add a lot of pages (like 200) to the document 6) Then I use "Import XML" to try and get the data brought in and filled across all 200 pages. This is where I fail. So there's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Anyone have any good tips for mail-merge like functionality with an XML document and auto-generation of InDesign pages? BTW, here's an example of Adobe's great documentation for merging repeated XML elements. There's gotta be more...InDesign CS4 Docs: XML-Importing XML-Working with Repeating Data EDIT: Here's some of the sample XML, notice the ITEM will repeat. I've also truncated the data in the "desc" tag: <output> <item> <user_name>taude</user_name> <date>2009-02-21</date> <title>Wishful Thinking</title> <desc>Skiing up in Vermont on a beautiful day. This photo of</desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail> </item> <item> <user_name>taude</user_name> <date>2009-02-22</date> <title>Skiing Self Portrait</title> <desc>I was inspired by ML's self-portrait while </desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail> </item> </output> Here's what my imported XML looks like with the InDesign Structure

    Read the article

  • How to debug Ubuntu/Cisco VPN issues

    - by Joe Casadonte
    I'm trying to connect an Ubuntu laptop (9.10) with some kind of Cisco VPN device; I don't know what's on the other end, and I'm not likely to find out exactly what. I know my company allows VPN from Linux clients because they provide one that I cannot get to install (it fails to compile). I've had the most luck with the network-manager-vpnc package, however I can't figure out what's failing. When I try to connect, I get this message from libnotify: The VPN connection 'XXX' failed. which is not very helpful. I've scoured the system logs and all I can find is this: Dec 27 12:57:45 jcasadon-lap NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.vpnc'... Dec 27 12:57:45 jcasadon-lap NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.vpnc' started (org.freedesktop.NetworkManager.vpnc), PID 2672 Dec 27 12:57:45 jcasadon-lap NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.vpnc' just appeared, activating connections Dec 27 12:58:00 jcasadon-lap NetworkManager: <info> VPN plugin state changed: 3 Dec 27 12:58:00 jcasadon-lap NetworkManager: <info> VPN connection 'AmericasEast' (Connect) reply received. Dec 27 12:58:00 jcasadon-lap NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/tun0, iface: tun0) Dec 27 12:58:00 jcasadon-lap kernel: [ 6144.529002] tun0: Disabled Privacy Extensions Dec 27 12:58:00 jcasadon-lap NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/tun0, iface: tun0): no ifupdown configuration found. Dec 27 12:58:15 jcasadon-lap NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/tun0, iface: tun0) Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> VPN plugin failed: 1 Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> VPN plugin state changed: 6 Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> VPN plugin state change reason: 0 Dec 27 12:58:15 jcasadon-lap NetworkManager: <WARN> connection_state_changed(): Could not process the request because no VPN connection was active. Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> (wlan0): writing resolv.conf to /sbin/resolvconf Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> Policy set 'Northbound Train' (wlan0) as default for routing and DNS. Dec 27 12:58:27 jcasadon-lap NetworkManager: <debug> [1261936707.002971] ensure_killed(): waiting for vpn service pid 2672 to exit Dec 27 12:58:27 jcasadon-lap NetworkManager: <debug> [1261936707.003175] ensure_killed(): vpn service pid 2672 cleaned up I have no idea where to go from here. Tomorrow I'll ask the IT/IS guys if there's anything they can tell me from their end, but I don't know if they'll be able to tell me anything. Any ideas? Thanks!
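    When NetworkManager only reports "VPN plugin failed: 1", running vpnc by hand usually gives a much more specific error (wrong group secret, unreachable gateway, unsupported proposal, and so on). A minimal sketch with placeholder values: create /etc/vpnc/test.conf containing

        IPSec gateway vpn.example.com
        IPSec ID examplegroup
        IPSec secret examplegroupsecret
        Xauth username jcasadon

    and then run the client in the foreground with debugging turned up:

        sudo vpnc --no-detach --debug 2 test    # 'test' resolves to /etc/vpnc/test.conf

    Whatever vpnc prints before it gives up is normally the detail the libnotify bubble hides; once the profile works from the command line, the same values can be copied back into network-manager-vpnc.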

    Read the article

  • Php pdo_dblib - cannot find/unable to load freetds

    - by MaxPowers
    Self-hosted box, RHEL 6 PHP 5.3.3 PDO installed freetds installed pdo_dblib - so far no luck installing My goal is to use PDO with sybase. Attempting to install pdo_dblib from the appropriate version php source code. I have tried a variety of methods and searched quite a bit for help on this topic, but have yet to be successful. Method 1 Install freetds $ ./configure $ make $ su root Password: $ make install This is successful Install pdo_dblib inside the /ext/pdo_dblib folder: $ phpize $ ./configure $ make $ make test Error output: PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 That doesn't look good...I researched this and found an interesting hack for this here. But changing pdo.ini to pdo_0.ini was not the solution, as I still got the same errors on make test. $ su $ make install Output: Installing shared extensions: /usr/lib64/php/modules/ That seems strange...and no, it doesn't actually install (not showing up on phpinfo after apache restart). Method 2 Install freetds following the instructions exactly, i add the prefix $ ./configure --prefix=/usr/local/freetds $ make $ su root Password: $ make install This is successful Install pdo_dblib inside the /ext/pdo_dblib folder: $ phpize $ ./configure --with-sybase=/usr/local/freetds This produces the following error at the bottom of the output ... checking for PDO_DBLIB support via FreeTDS... yes, shared configure: error: Cannot find FreeTDS in known installation directories Method 3 freetds ./configure variation (including or not include the --prefix...) did not change the result of this so I'll skip it. Install pdo_dblib pecl extension following the method specified here. pecl download pdo_dblib tar -xzvf PDO_DBLIB-1.0.tgz Removed the line, <dep type=”ext” rel=”ge” version=”1.0?>pdo</dep> Saved the package.xml file, and moved it in to the PDO_DBLIB directory. mv package.xml ./PDO_DBLIB-1.0 Navigated to the PDO_DBLIB directory, then installed the package from the directory. cd ./PDO_DBLIB-1.0 pecl install package.xml But, this command gives me the following error output, same as Method 2. checking for PDO_DBLIB support via FreeTDS... yes, shared configure: error: Cannot find FreeTDS in known installation directories ERROR: `/home/sybase/Install_items/pecl_pdo_dblib/PDO_DBLIB-1.0/configure' failed
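    One thing worth checking: the bundled extension's configure switch is --with-pdo-dblib (not --with-sybase, which belongs to the old sybase/sybase_ct extensions), and its configure script only looks for the FreeTDS headers and libsybdb under a handful of fixed prefixes. A sketch of the build against the non-default prefix used above; the paths are assumptions based on the --prefix given to FreeTDS:

        # confirm what the FreeTDS install actually put under the prefix
        ls /usr/local/freetds/include     # expect sybdb.h, sybfront.h (and tds.h on older releases)
        ls /usr/local/freetds/lib         # expect libsybdb.so / libsybdb.a
        /usr/local/freetds/bin/tsql -C    # prints FreeTDS compile-time settings

        # build pdo_dblib from the matching PHP 5.3.3 source tree
        cd php-5.3.3/ext/pdo_dblib
        phpize
        ./configure --with-pdo-dblib=/usr/local/freetds
        make && sudo make install         # installs into /usr/lib64/php/modules on RHEL 6

        # enable and verify
        echo "extension=pdo_dblib.so" | sudo tee /etc/php.d/pdo_dblib.ini
        php -m | grep -i pdo_dblib

    If configure still reports that it cannot find FreeTDS, the usual culprit is that the headers or libraries it expects were never installed under that prefix, which the two ls commands above will show.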

    Read the article

  • windows firewall broken on server 2008

    - by Chloraphil
    This evening I tried to rdp into my server 2008 box and was unable to. After poking around some I discovered that something is awry with my Windows Firewall. I did install 5 windows updates remotely earlier today but rolled those back in an attempt to see if that fixed the problem but had no luck. Symptoms: cannot rdp to machine (including from itself) cannot ping machine cannot connect to file share on machine error message when attempting to open "windows firewall with advanced security" snap-in (there was an error opening the windows firewall with advanced security snap-in ... The Windows Firewall with Advanced Security snap-in failed to load. Restart the windows firewall service on the computer that you are managing. Error code: 0x6D9. When I opened the "user-friendly" Windows Firewall it failed to load most of the gui elements, meaning, the title bar with close, minimize, and maximize buttons is present, the rest of the window has a white background with a yellow rectangle with rounded corners and a yellow triangle w/ an exclamation point is in the upper right. hope that made sense "Windows Firewall" does not appear in the list of services I ran a virus scan that found nothing. How do I fix the firewall and hopefully restore the ability to rdp? EDIT: Added at fission's request: c:\sc query mpsdrv SERVICE_NAME: mpsdrv TYPE : 1 KERNEL_DRIVER STATE : 4 RUNNING (STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN) WIN32_EXIT_CODE : 0 (0x0) SERVICE_EXIT_CODE : 0 (0x0) CHECKPOINT : 0x0 WAIT_HINT : 0x0 c:\sc query mpssvc SERVICE_NAME: mpssvc TYPE : 20 WIN32_SHARE_PROCESS STATE : 1 STOPPED WIN32_EXIT_CODE : 1068 (0x42c) SERVICE_EXIT_CODE : 0 (0x0) CHECKPOINT : 0x0 WAIT_HINT : 0x0 Those two registry keys do exist: HKLM\SYSTEM\CurrentControlSet\Services\mpsdrv & HKLM\SYSTEM\CurrentControlSet\Services\MpsSvc ! The problem seems to be with the Base Filtering Engine, when I try to start it I get the following error: Windows could not start the Base Filtering Engine service on MYCOMPUTER. Error 15100: The resource loader failed to find MUI file. EDIT2: I ran sfc /scannow and i found about 100 occurrences of "[SR] Cannot repair member file"... including several related to the firewall (ex: [l:32{16}]"Firewall.cpl.mui" of Networking-MPSSVC.Resources...). One of them mentioned wordpad.exe, which I tried to open, and it failed. I found here mentions of mounting the install.wim on the install media to copy the affected files over. I am downloading the appropriate AIK and will continue tomorrow evening.

    Read the article

  • Lion built-in VPN client times out connecting to Windows 2003 PPTP server

    - by beporter
    I have a new iMac with OS X 10.7 (Lion) on it that refuses to connect to a PPTP-based VPN server (running Windows 2003 SBS). To shortcut past a lot of questions: There is a Dell workstation running Windows 7 on the same LAN as the Mac that is able to establish a PPTP connection to the same VPN server using the same credentials. That would seem to rule out any possible problems with the server, the port forwards on the server's firewall, the internet connection between the two, and the router local to the Dell and iMac. Here's a "verbose" dump of the PPP log from the iMac: Tue Sep 6 10:13:11 2011 : using link 0 Tue Sep 6 10:13:11 2011 : Using interface ppp0 Tue Sep 6 10:13:11 2011 : Connect: ppp0 socket[34:17] Tue Sep 6 10:13:11 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0, interfaceIndex: 0, Protocol: None, Private Port: 0, Public Address: 45f6f181, Public Port: 0, TTL: 0. Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 inconsistent. is Connected: 1, Previous interface: 4, Current interface 0 Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 initialized. is Connected: 1, Previous publicAddress: (0), Current publicAddress 45f6f181 Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 fully initialized. Flagging up Tue Sep 6 10:13:14 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:17 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:20 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:23 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:26 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:29 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:32 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:35 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:38 2011 : sent [LCP ConfReq id=0x1 ] Tue Sep 6 10:13:41 2011 : LCP: timeout sending Config-Requests Tue Sep 6 10:13:41 2011 : Connection terminated. Tue Sep 6 10:13:41 2011 : PPTP disconnecting... Tue Sep 6 10:13:41 2011 : PPTP clearing port-mapping for en0 Tue Sep 6 10:13:41 2011 : PPTP disconnected The error seems to be focused around the line, LCP: timeout sending Config-Requests, but I haven't had any luck in finding troubleshooting information for this. I've tried completely deleting the entire VPN "connection" from the Network prefpane and recreating it from scratch. I am certain the connection details are correct because they exactly match what successfully connects from the Win7 machine sitting next to the iMac. Any suggestions?
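    The log shows the PPTP control connection getting far enough to send LCP Config-Requests, which then all go unanswered; with PPTP that usually means the GRE tunnel traffic (IP protocol 47) is not making it back to the Mac, even though it reaches the Windows 7 box. A quick way to confirm, capturing on the iMac while the VPN dials; the host name and interface are placeholders:

        sudo tcpdump -ni en0 'host vpn.example.com and (ip proto 47 or tcp port 1723)'

    If the port 1723 handshake appears but no protocol-47 packets ever come back, the problem is GRE handling somewhere in the path (often the local router's PPTP/GRE passthrough tracking misbehaving for one particular LAN host) rather than anything in Lion's VPN client.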

    Read the article

  • Automating video generation by adding an intro and a trailing video to the main video

    - by DevDewboy
    I have a video project I am trying to compile. Here is the overview: I have many videos which are 5-minute training sessions - the Main video. The Intro video will be a standard 5-second video that will have the video title and author. This will be concatenated to the main video. The Trailing video will pretty much be a stock video that will be concatenated to the main video and have all the legalese etc. The Intro video will smoothly fade into the main video, and when you get to the end of the main video it will fade into the Trailing video nicely. The product is a new video with an Intro, Main & Trailer video all in one! The concept is really that simple. In fact I found an example of a person who has solved this and is doing exactly what I want. This solution is a Bash script that takes a config file that has the title, author, etc., generates the Intro and the Ending, and creates the resulting video with them concatenated. I am using Ubuntu 12.04 Server. I have been trying to take this as a sample and just run it, with no luck because of incompatibility errors. I even attempted to convert it using .MP4 containers or .MKV. I am running into error after error and incompatibility issues. I went as far as changing out the ffmpeg binary using the 25 Oct 2013 version from http://ffmpeg.gusari.org/static/64bit/ which I like as I don't have to worry about rebuilding the binary. Almost successful, but again I have an error which I cannot solve. I know part of the problem is the fact that video production, codecs and formats are a completely new field for me, so I am attempting to work through this new territory. Perhaps an expert here has something similar that I can use as a guideline that uses MP4 or h.264 format. Or take the solution above from the URL and make it work with a more up-to-date version of ffmpeg. I will include the script and its parameter file and the output (abbreviated because of length limitations) below. Basically as the script stands right now, when run I get the error [matroska,webm @ 0x27bbee0] Read error. This error is returned from the 'reasembleVideo' routine from the first ffmpeg command. The following is the Parameter File: #!/bin/bash INPUTFILE="ssh_main.mp4" LOGO="logo.png" LOGOLENGTH="1" SPEAKER="Jason" TITLE="Basic SSH Video" DATE="October 28, 2013" SCENESTART="00:00:01" SCENEDURATION="00:00:09" OUTPUTFILE="ssh_basic_1" } The following is the script I am running. The ${OUTPUTFILE} being used is a small 2-minute video I created in screen-o-matic in MP4 format. Script on PasteBin (too long for Super User post)
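    In the meantime, the smallest test I could come up with for the concatenation step alone is sketched below, assuming intro.mp4, main.mp4 and trailer.mp4 (placeholder names) already share the same resolution, H.264 video and AAC audio - the fades would still need a separate filter pass:

        # Remux each piece to an MPEG-TS intermediate so the H.264 streams can be joined
        ffmpeg -i intro.mp4   -c copy -bsf:v h264_mp4toannexb -f mpegts intro.ts
        ffmpeg -i main.mp4    -c copy -bsf:v h264_mp4toannexb -f mpegts main.ts
        ffmpeg -i trailer.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts trailer.ts

        # Join the pieces and wrap the result back into an MP4 container
        ffmpeg -i "concat:intro.ts|main.ts|trailer.ts" -c copy -bsf:a aac_adtstoasc output.mp4

    If that works on my inputs, the "Read error" is presumably coming from mismatched containers/codecs in the intro/trailer the original script generates rather than from the concatenation itself.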

    Read the article

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error. The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm ' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Removed: command not found" and if 'git add ' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script: #!/bin/bash # # This script is triggered by a push to the local git repository. It will # ssh into a remote server and perform a git pull. # # The SSH_USER must be able to log into the remote server with a # passphrase-less SSH key *AND* be able to do a git pull without a passphrase. # # The command to actually perform the pull request on the remost server comes # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered # by the ssh login. SSH_USER="remoteuser" REMOTE_HOST="staging.server.com" `ssh $SSH_USER@$REMOTE_HOST` # This is line 16 echo "Done!" The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is: command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key) This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo: ben@tamarack:~/thejibe/testing/web$ git rm ./testing rm 'testing' ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file" [master bb96e13] Remove testing file 1 files changed, 0 insertions(+), 5 deletions(-) delete mode 100644 testing ben@tamarack:~/thejibe/testing/web$ git push Counting objects: 3, done. Delta compression using up to 2 threads. Compressing objects: 100% (2/2), done. Writing objects: 100% (2/2), 221 bytes, done. Total 2 (delta 1), reused 0 (delta 0) remote: From [email protected]:testing remote: aa72ad9..bb96e13 master -> origin/master remote: hooks/post-receive: line 16: Removed: command not found # The error msg remote: Done! To [email protected]:testing aa72ad9..bb96e13 master -> master ben@tamarack:~/thejibe/testing/web$ As you can see the post-receive script gets to the echo "Done!" line and when I look on the staging server the git pull has been successfully run, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
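    One thing I'm starting to suspect (not yet confirmed) is the backticks around the ssh call on line 16: command substitution runs the remote git pull, captures its output, and then bash tries to execute the first word of that output ("Removed ...", "Merge ...") as a command. A sketch of the same hook with the substitution removed, which I haven't rolled out yet:

        #!/bin/bash
        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        # Call ssh directly. Wrapping it in backticks captures the output of the
        # remote "git pull" and tries to run it as a command, which would explain
        # the "Removed: command not found" / "Merge: command not found" lines.
        ssh "$SSH_USER@$REMOTE_HOST"

        echo "Done!"

    The pull would still succeed either way, because the substitution itself runs fine; only the attempt to execute its output fails.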

    Read the article

  • Touch gestures in IE not working without explorer.exe being run once

    - by Michael
    Edit: Rephrasing my question: Upon further troubleshooting, I can conclude that: Touch gestures (dragging, pinch to zoom, touch-and-hold right click) in Internet Explorer start to work when: The system has been running for ~2 minutes. This coincides with the delayed start of services. Explorer.exe is being run, then killed. I assume Explorer.exe starts some services? The services with delayed start are as follows: Security Center Software Protection Windows Defender, Search and Update Windows Font Cache Service Microsoft .NET Framework NGEN v4.0.30319_X64 and X86 I see no connection between these services and touch gestures, but just in case, I manually tried starting these services, but without luck. What else happens delayed after system boot, which also happens when explorer is started? Old question: Details: Internet Explorer 9 and Windows 7 Professional, running on a HP TouchSmart (touch screen PC). It is going to be a kiosk PC (running a custom GUI for displaying websites). Scenario 1: When running Internet Explorer as a normal program in Windows 7, touch functions work perfectly. I can scroll the website by dragging it with my finger, I can pinch zoom and I can touch-and-hold right click. I now change the default shell in Windows to Internet Explorer (ie. IE starts instead of explorer.exe). Internet Explorer of course starts up when logging in. However, touch functions are reduced to basic clicking (no dragging, no pinch zooming, no touch-and-hold right click). Then I manually start explorer.exe, and the touch functions work again! And here is the weird part: When I kill explorer.exe, the touch functions keeps working - even if I close IE and start a new instance. Scenario 2: The exact same, but instead of changing the default shell to Internet Explorer, I change it to my own program, which uses an embedded Internet Explorer ("WebBrowser"). Same thing happens. What I've tried: Autorun programs: When explorer.exe launches, it launches all the autorun programs. There are no relevant programs being run by explorer, but just in case, I have manually started all the autorun programs, so that it is identical (but without explorer.exe) to a normal login. It still does not work (until I launch explorer.exe). Specifically TabTip.exe, TabTip32.exe and wisptis.exe are all running. All services are also started. To sum it up Running explorer.exe once changes something in the touch capabilities of Internet Explorer. It doesn't matter if explorer.exe is running - as long as it has been run once. Does anyone know what causes this behavior? Or how I can circumvent it neatly?
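    Since the workaround that already works is "run explorer.exe once, then kill it", one stopgap I'm considering is to bake exactly that into the kiosk shell. A sketch (the delay and the kiosk executable path are placeholders):

        @echo off
        rem Hypothetical shell wrapper: start explorer.exe once so whatever it
        rem initialises for touch gestures gets set up, then remove it and
        rem launch the kiosk GUI.
        start explorer.exe
        timeout /t 15 /nobreak
        taskkill /f /im explorer.exe
        start "" "C:\Kiosk\KioskGui.exe"

    It doesn't answer what explorer.exe actually initialises, but it would make the behaviour deterministic instead of depending on the ~2 minute delayed-start window.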

    Read the article

  • OS X: Finder error -36 when using SMB shares on a Samba server bound to AD

    - by Frenchie
    We're looking at deploying SMB homes on Debian (5.0.3) for our Mac clients rather than purchasing four new Xserves. We've got our test servers built and functioning properly. Windows clients behave perfectly, but we've run into an issue with OS X (10.6.x and 10.5.x). We're going this route instead of Windows file servers due to a whole bunch of other issues that arise when going that way. Specifically, when mounting an SMB share with unix extensions switched on and the remote server bound to AD, the finder cannot save files on the share, instead touching the file and then bombing out with a -36 IO error; folder creation is fine. Copying files in the terminal behaves fine and the problem seems to be limited to the finder. The issue arises (I think) as the remote UID/GID is passed across when using unix extensions. OS X uses its own winbind idmap (odsam) to work out the effective UID/GID from AD users and groups whilst we're using a rid map on the server. Consequently, there is a mismatch in ownership which the finder chooses to honour. How OS X appears to handle this is to use the remote uid and gid at the file permission level (see below) and then set an OS X acl granting the local uid/gid to have the appropriate permissions on the file. I think the finder touches the file (which the kernel allows because of the ACL) and then checks the filesystem perms and drops out with the IO error. On a Client fc-003353-d:homes2 root# ls -led test/ drwx------+ 2 135978 100513 16384 Feb 3 15:14 test/ 0: user:jfrench allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit 1: group:ARTS\domain users allow 2: group:everyone allow 3: group:owner allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit 4: group:group allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit 5: group:everyone allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit We've tried the following without any luck: setting the Linux-side file owner to match the OS X GID/UID; adding ACLs on the Linux filesystem which grant the OS X GID/UID perms; disabling extended attributes; setting streams=no in /etc/nsmb.conf on the client. We're currently running a workaround which is to just turn off unix extensions, which forces the Macs to just mount the share as the local user with u=rwx perms. This works for most things but is causing a few apps that expect certain perms to break in subtle ways. Worst case scenario is that we'll continue running in this way, but we would like to have the unix extensions on. Regards. Relevant SMB config below: [global] workgroup = ARTS realm = *snip* security = ADS password server = *snip* unix extensions = yes panic action = /usr/share/panic-action %d idmap backend = rid:ARTS=100000-10000000 idmap uid = 100000-10000000 idmap gid = 100000-10000000 winbind enum users = Yes winbind enum groups = Yes veto files = /lost+found/aquota.*/ hide files = /desktop.ini/$RECYCLE.BIN/.*/AppData/Library/ ea support = yes store dos attributes = yes map system = no map archive = no map readonly = no
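    For anyone comparing notes, this is roughly how we toggle the workaround on the Debian side while testing - a sketch only, assuming the stock smb.conf location and init script names on Debian 5:

        # Hypothetical helper: switch "unix extensions" off globally, confirm, and reload Samba
        sudo sed -i.bak 's/^\([[:space:]]*unix extensions[[:space:]]*=\).*/\1 no/' /etc/samba/smb.conf
        testparm -s 2>/dev/null | grep -i "unix extensions"
        sudo invoke-rc.d samba reload

    It is only the stopgap described above, not a fix for the underlying UID/GID mismatch.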

    Read the article

  • Ubuntu Server 12.04 auto installation freezes at "Kickseed Running" if ks.cfg has %post scripts

    - by john206
    I'm trying to make a custom Ubuntu Server ISO file. The kickstart file (ks.cfg) runs smoothly when there is no %post section in the file, and Ubuntu installs correctly with the ks configuration. The installation finishes installing base, apt and grub, then it echoes: Kickseed Running... and it freezes at 0%. I thought maybe apt-get update doesn't work in a ks file, so I tried to install other apps like apache2, but no luck. I have created a dozen ISO images and installed them in VirtualBox. I have been googling for 3 days and checked out the Ubuntu forums but haven't figured out the issue. I appreciate your help. This is how I made the ISO image. My ks.cfg and txt.cfg files are located in the isolinux directory: root@ubuntu:/home/work mount -o loop ubuntu-12.04-amd64.iso original-iso/ rsync -a original-iso/ custom-iso/ cp ks.cfg custom-iso/isolinux/ cp txt.cfg custom-iso/isolinux/ chmod -R 777 custom-iso/ #Creating Iso image mkisofs -D -r -V "$IMAGE_NAME" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ~/ubuntu-12.04-alternate-custom-amd64.iso custom-iso/ ks.cfg #Generated by Kickstart Configurator #platform=AMD64 or Intel EM64T #System language lang en_US #Language modules to install langsupport en_US #System keyboard keyboard us #System mouse mouse #System timezone timezone America/Los_Angeles #Root password rootpw --iscrypted somethingsomething #Initial user user ubuntu --fullname "ubuntu" --iscrypted --password somethingsomething. #Reboot after installation reboot #Use text mode install text #Install OS instead of upgrade install #Use CDROM installation media cdrom #System bootloader configuration bootloader --location=mbr #Clear the Master Boot Record zerombr yes #Partition clearing information clearpart --all --initlabel #Disk partitioning information part /boot --size 128 --fstype=ext3 --asprimary part / --size 512 --fstype=ext3 --asprimary part swap --size 512 part /tmp --size 512 --fstype=ext3 part /var --size 512 --fstype=ext3 part /usr --size 4096 --fstype=ext3 part /home --size 2048 --fstype=ext3 #System authorization infomation auth --useshadow --enablemd5 #Network information network --bootproto=dhcp --device=eth0 #Firewall configuration firewall --disabled --http --ftp --ssh #X Window System configuration information xconfig --depth=32 --resolution=1024x768 --defaultdesktop=GNOME %post apt-get update mkdir /home/user txt.cfg default autoinstall label autoinstall menu label ^Install Custom Ubuntu Server kernel /install/vmlinuz append file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz quiet ks=cdrom:/isolinux/ks.cfg -- label install menu label ^Install Ubuntu Server kernel /install/vmlinuz append file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz quiet -- label cloud menu label ^Multiple server install with MAAS kernel /install/vmlinuz append modules=maas-enlist-udeb vga=788 initrd=/install/initrd.gz quiet -- label check menu label ^Check disc for defects kernel /install/vmlinuz append MENU=/bin/cdrom-checker-menu vga=788 initrd=/install/initrd.gz quiet -- label memtest menu label Test ^memory kernel /install/mt86plus label hd menu label ^Boot from first hard disk localboot 0x80
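    The next thing I plan to rule out is the %post section blocking on console output or on a prompt, since the installer sits at "Kickseed Running..." exactly when %post starts. A sketch of a more defensive %post (the log path is arbitrary, and I'm assuming kickseed runs %post chrooted into the installed system the way anaconda does):

        %post
        # Send all output to a log so nothing can stall waiting on the console,
        # and keep apt strictly non-interactive.
        exec > /root/ks-post.log 2>&1
        set -x
        export DEBIAN_FRONTEND=noninteractive
        apt-get update
        apt-get install -y apache2
        mkdir -p /home/user

    If the install then completes, the log left in /root should show which command was hanging; if it still freezes, the problem is more likely in how kickseed hands off to %post than in the commands themselves.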

    Read the article

  • Indesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages automatically generated. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me, and Google searches haven't returned much that's fruitful. Here are my present steps: I create a Master Page of my template; I add a bunch of text frames where I want the imported data from the XML file to be placed; I open the "Tags" window and import an XML file; I mark my text frames in the Master Document with the appropriate tags; I then add a lot of pages (like 200) to the document; then I use "Import XML" to try to get the data brought in and filled across all 200 pages. This is where I fail. There's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Does anyone have any good tips for mail-merge-like functionality with an XML document and auto-generation of InDesign pages? By the way, here's an example of Adobe's great documentation for merging repeated XML elements. There's got to be more... InDesign CS4 Docs: XML-Importing XML-Working with Repeating Data Here's some of the sample XML; notice the ITEM element will repeat. I've also truncated the data in the "desc" tag: <output> <item> <user_name>taude</user_name> <date>2009-02-21</date> <title>Wishful Thinking</title> <desc>Skiing up in Vermont on a beautiful day. This photo of</desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail> </item> <item> <user_name>taude</user_name> <date>2009-02-22</date> <title>Skiing Self Portrait</title> <desc>I was inspired by ML's self-portrait while </desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail> </item> </output> Here's what my imported XML looks like with the InDesign Structure:

    Read the article

  • Old operational master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers; for now I'll just call them: AD01 (Win 2008 GC, Operations master) AD02 (Win 2008 GC) AD03 (Win 2003 GC) A couple of months ago there were some hardware issues with AD01, so the operations master, PDC and Infrastructure Master roles were moved to AD02. All machines were on while this was happening. AD01 (Win 2008 GC) AD02 (Win 2008 GC, Operations master) AD03 (Win 2003 GC) AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card), I now have a weird problem. AD01 thinks it is still the operations master in AD on the local box; AD02 & AD03 think AD02 is the operations master in AD on both boxes. When running DCDIAG on AD01 I get a number of issues (listed below). When running "dcdiag /test:advertising" on AD01: Doing primary tests Testing server: Default-First-Site-Name\AD01 Starting test: Advertising Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... AD01 failed test Advertising Running partition tests on : ForestDnsZones Running partition tests on : DomainDnsZones Running partition tests on : Schema Running partition tests on : Configuration Running partition tests on : domain Running enterprise tests on : domain.local When running "dcdiag" on AD01 I get the following errors (excerpt of the final output): Testing server: Default-First-Site-Name\AD01 Starting test: Advertising Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... AD01 failed test Advertising Starting test: FrsEvent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local Starting test: Replications [Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL" [Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL" So the problem appears to be that when I moved the operations master, AD01 never got the memo, and now that it's started up, all the other AD servers don't think it's the boss anymore when it tries to replicate etc. So I really need to manually update AD01 so that it knows who the operations master, infrastructure master and PDC are - but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated
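    For reference, this is what I'm planning to run from an elevated prompt once someone confirms I'm on the right track, using the server names above (seizing roles with ntdsutil would be a last resort only if a normal transfer is impossible):

        rem Compare what each DC believes - run this on AD01 and on AD02
        netdom query fsmo

        rem Re-enable the replication that dcdiag reports as disabled on AD01
        repadmin /options AD01 -DISABLE_INBOUND_REPL
        repadmin /options AD01 -DISABLE_OUTBOUND_REPL

        rem Push replication through and re-check advertising
        repadmin /syncall /AdeP
        dcdiag /test:advertising

    Once inbound replication is allowed again, AD01 should learn from AD02 that the roles moved; only if the two still disagree would role seizure or metadata cleanup come into the picture.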

    Read the article

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server. The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it. Tried multiple known good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck. I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, CSS, and JS files. If anyone has some bright ideas I would be most appreciative!
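    One thing I plan to try next, sketched with placeholder names: prove the LIVE credentials work from the dev box outside of IIS first, since win32 status 1326 is the generic "logon failure" code returned by the remote end.

        rem From the CORP dev machine - storage.example.com\media stands in for the real share
        net use Z: \\storage.example.com\media /user:LIVE\MediaUser *
        dir Z:\
        net use Z: /delete

    If that mapping fails too, the storage unit simply won't accept LIVE\MediaUser from a machine outside the LIVE domain and the virtual directory can't work as configured; if it succeeds, the problem is presumably in how the Connect As credentials are stored or passed by IIS 6.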

    Read the article

  • Iptables config breaks Java + Elastic Search communication

    - by Agustin Lopez
    I am trying to set up a firewall for a server hosting a java app and ES. Both are on the same server and communicate to each other. The problem I am having is that my firewall configuration prevents java from connecting to ES. Not sure why really.... I have tried lot of stuff like opening the port range 9200:9400 to the server ip without any luck but from what I know all communication inside the server should be allowed with this configuration. The idea is that ES should not be accessible from outside but it should be accessible from this java app and ES uses the port range 9200:9400. This is my iptables script: echo -e Deleting rules for INPUT chain iptables -F INPUT echo -e Deleting rules for OUTPUT chain iptables -F OUTPUT echo -e Deleting rules for FORWARD chain iptables -F FORWARD echo -e Setting by default the drop policy on each chain iptables -P INPUT DROP iptables -P OUTPUT ACCEPT iptables -P FORWARD DROP echo -e Open all ports from/to localhost iptables -A INPUT -i lo -j ACCEPT echo -e Open SSH port 22 with brute force security iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force " iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT echo -e Open NGINX port 80 iptables -A INPUT -p tcp --dport 80 -j ACCEPT echo -e Open NGINX SSL port 443 iptables -A INPUT -p tcp --dport 443 -j ACCEPT echo -e Enable DNS iptables -A INPUT -p tcp -m tcp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -p udp -m udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT And I get this in the java app when this config is in place: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1185) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760) at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482) at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403) Do any of you see any problem with this configuration and ES? Thanks in advance
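    Two gaps I've noticed while re-reading the rules, in case they matter (assuming the Java app joins the ES cluster as a node/transport client rather than talking plain HTTP to 127.0.0.1): there is no general ESTABLISHED/RELATED rule (only DNS replies are let back in), and ES's default zen multicast discovery arrives on the physical interface, so the INPUT drop policy would keep the nodes from finding each other - which fits the "no master" error. A sketch of the extra rules I'm about to test (the address is a placeholder):

        # Accept packets that belong to connections this host itself initiated
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Default ES zen multicast discovery group/port
        iptables -A INPUT -p udp -d 224.2.2.4 --dport 54328 -j ACCEPT

        # ES HTTP/transport ports, limited to the host's own address (203.0.113.10 is a placeholder)
        iptables -A INPUT -p tcp -s 203.0.113.10 --dport 9200:9400 -j ACCEPT

    If the app can instead point at 127.0.0.1 with unicast discovery, none of this should be needed, since loopback traffic is already accepted.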

    Read the article

  • How to shutdown VMware Fusion virtual machine on host shutdown

    - by Nikksno
    I have a Mac mini running Mavericks Server. I installed the Atmail server + webmail vm [a Linux CentOS distribution] in VMware Fusion Professional 6 with the VMware Tools addon. It works flawlessly. I've set it to start on boot and that works very reliably. However I've been looking for a way to also safely and gracefully shut it down whenever OS X shuts down for whatever reason. The Mac is connected to a UPS and configured to perform an automatic shutdown in case the battery starts running low, so that's no additional problem. Now the first thing I did was to go into Fusion's prefs and select "Power off the vm" when closing it. However I noticed that for some arcane reason closing the vm window would actually forcibly power off the vm: so then I found this post that showed me how to change the default power options, and I managed to have the vm cleanly shut down when closing its window or quitting Fusion altogether. At this point I was hoping to have solved the problem, but as it turns out, upon invoking system shutdown OS X doesn't wait for the vm to shut down and terminates Fusion before it has a chance to do so. At this point I started looking for a way to automate the process of shutting down the guest OS via some advanced setting, but had no luck in doing so. That's when I found a command to shut the vm down - vmrun - and it worked. The only thing left was to find a way to execute this command on OS X shutdown and give it a little time to power off completely. However this turned out to be a nightmare: I spent hours looking through several ways to do this with Startup Items, rc.shutdown, cron, launchd, etc... but none of them worked the way I had configured them. I have to say that I found very limited information on using launchd for shutdown script execution, and I know it's the latest thing in the OS X world, so I'm hoping someone out there among you will be able to help me out with this. I still think this is an extremely basic feature to ask for and I was really surprised to find so little documentation on so many different aspects of this problem. Is Fusion too basic of an application for this? I really hope someone can help. Thank you very much in advance.
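    In case it's useful as a starting point, here is a sketch of the approach I'm experimenting with: a small script that launchd keeps running (KeepAlive) and that traps the SIGTERM launchd sends at shutdown to stop the guest cleanly. The .vmx path is an assumption, and the matching launchd plist would presumably also need a generous ExitTimeOut so the guest gets enough seconds to power off.

        #!/bin/bash
        # Hypothetical wrapper kept alive by a launchd daemon; on system shutdown
        # launchd sends SIGTERM, which we trap to shut the guest down gracefully.
        VMX="/Users/admin/Virtual Machines.localized/atmail.vmwarevm/atmail.vmx"   # adjust to the real path
        VMRUN="/Applications/VMware Fusion.app/Contents/Library/vmrun"

        trap '"$VMRUN" -T fusion stop "$VMX" soft; exit 0' TERM

        # Sleep in the background and wait, so the TERM trap fires immediately
        while true; do
            sleep 86400 &
            wait $!
        done

    The "stop ... soft" form asks VMware Tools inside the guest to run a clean shutdown rather than pulling the virtual power cord.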

    Read the article

  • DPM 2010 "Disk failed or disk not found"

    - by SysAdmin
    I have an HP ProLiant ML110 G5 server with Windows Server 2008 R2, dedicated solely to DPM 2010. This server has a hard-disk limit of 8 TB, which has already been reached. I'm now stuck in this situation where my disk keeps failing with "Disk failed or Disk not found" in Disk Management. Only after I reboot the system does the disk come back up. Today I was running my monthly tape backup on a certain protection group and the disk failed again while the tape job was running (so the job wasn't completed). This is the description of the error in the alerts: "The disk Disk 1 - Hitachi HDS722020ALA330 SCSI Disk Device cannot be detected or has stopped responding. All subsequent protection activities that use this disk will fail until the disk is brought back online. (ID 3120)". My backup system is becoming useless! I don't think that it is a hardware issue (please correct me if I'm wrong) since the HD works fine for a certain period of time, which is becoming shorter and shorter. I basically have no more options to fix this problem. I tried to fix every error that was coming up in the Event Viewer, with no luck (including one regarding the SQL2008 compatibility issue). The disk keeps failing! Now I'm only trying to recover/migrate the data from the disk that is having problems, but my issue now is that I cannot add any drives to my server since I have already installed the maximum storage capacity of 8 TB. I thought about 2 simple options. Please tell me what you guys think about them: Unplug one of the 2 storage pool disks (disk0, the one without problems) from the machine and install a new one in order to migrate the data with the migration tool for DPM, then remove the defective disk (disk1), put back disk0 and run the synchronization/consistency check on all the groups to recreate replicas and recovery points. Or run diskpart.exe and clean up the disk (losing all data) and hope that it will work after I sync all the protection groups. Both solutions are not elegant, but I have no better options at the moment. Please, I need some help. Thanks for your time Angelo
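    For the migration route, this is the sketch I had in mind, run from the DPM Management Shell on the DPM server - the script name and parameters are from memory of the DPM 2010 disk-migration procedure, so treat them as assumptions to verify, and "DPMSERVER" plus the array indexes are placeholders:

        # List the storage pool disks and note which index is the failing one
        $disks = Get-DPMDisk -DPMServerName "DPMSERVER"
        $disks

        # MigrateDatasourceDataFromDPM.ps1 ships in the DPM bin folder; it moves the
        # replicas and recovery points from the source disk to the destination disk.
        ./MigrateDatasourceDataFromDPM.ps1 -DPMServerName "DPMSERVER" -Source $disks[1] -Destination $disks[0]

    That still needs enough free space on the destination, which is the crux of the 8 TB ceiling here.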

    Read the article

  • Using an SSD with no AHCI [ICH7 base] - Windows 7 hangs frequently

    - by h4xnoodle
    I have a Shuttle Intel G31 + ICH7 (base -- not M/R etc) system. I just bought an OCZ Vertex 3 120 GB [VTX3-25SAT3-120G] which includes the Sandforce 2218 firmware. The ICH7 does not support AHCI. I understand that this can be a problem. What I don't understand is whether it's necessary for the proper performance of this drive. I know that without AHCI I may get limited read/write speed -- this is fine. My concern is the constant freezing/hangs I'm getting in Windows 7 on any disk activity. The 'Highest Active Time' flip-flops from 0 to 100% every minute or so regardless of large or small files. EDIT: The threads/processes with the highest response time belong to the kernel. I've been reading about other people with Shuttle SG31G2s, and they seem to be using SSDs with no problem. Is this the controller's fault? The fact that I do not have AHCI enabled? It makes sense to me that if this SSD requires AHCI features, it would cause Windows to hang, but I would like to fully determine my situation before returning things/reformatting. To have the system recognise the SSD at all, I had to change the BIOS option to Force Gen II instead of Auto for the SATA controller. I then installed Windows with no problem. There were no errors in the event log related to disk usage, but watching perfmon I could see the highest active time and the processes (usually pagefile.sys being written to, or chrome/firefox caching) which were correlated with the hanging. So now what I need answered is: should I be returning this SSD and getting one with a different controller, or returning the SSD altogether as it will never work out and I will continue to get these hangs? Posts I've read: Windows 7 New SSD SATA AHCI? -- suggests to use AHCI http://forums.anandtech.com/showthread.php?t=2189868 -- Sandforce issues Windows 7 freezes with SSD -- and attached posts Why does my Windows 7 PC / SSD drive keep freezing? -- this is not the controller I have, but still a related issue. Windows 7 hangs after longer inactivity of user -- also tried messing with power settings with no luck. It was already set to 'Never' for turning off HDDs.
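    Before returning anything, I'm going to gather two quick data points with stock Windows 7 tools - neither will fix the hangs, but they should help separate "slow because of IDE mode" from "stalling because TRIM/garbage collection is struggling" (drive letter c is an assumption):

        rem 0 means TRIM commands are being issued to the SSD; 1 means they are not
        fsutil behavior query DisableDeleteNotify

        rem Rough throughput numbers for the drive, to compare against IDE-mode expectations
        winsat disk -drive c

    If TRIM is on and the WinSAT numbers look sane but the minute-by-minute freezes continue, that would point more at the SandForce firmware/controller combination than at the missing AHCI features.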

    Read the article

< Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >