Search Results

Search found 1586 results on 64 pages for 'devil night'.

  • MS Security Essentials efficiency / usage, suspicious processes

    - by biggvsdiccvs
    I recently noticed that my (originally pretty fast) Windows 7 Pro laptop started getting slow and using a lot of CPU power for no apparent reason. A full scan by Microsoft Security Essentials revealed nothing. After some investigation, I found multiple instances of a strange process called urpev.exe and a couple of similar exe files sitting in subdirectories of Users//AppData/Roaming (this particular one was in a folder called Xyceowme). Description: "Mescrosift Visaal Studie 2010". Company name: "Mesrosift Corporatien". Is it a virus or something? :) Now, all of these exe files were scheduled to be started from the Task Scheduler by tasks with names like "Security Center Update - 1291373911" and similar. My user name was listed as the author of the tasks. I disabled the tasks, restarted the computer in safe mode and moved all of the exe files to quarantine for further investigation. All of this was done last night. I just scanned the files with Security Essentials again (not updated since yesterday) in the quarantine location and this time it found PWS:Win32/Zbot.gen!plock in urpev.exe (but not in the other exe files, which are most likely viruses, too). Category: Password Stealer Description: This program is dangerous and captures user passwords. Another strange process is browser.exe (not chrome.exe) by Google Inc., described as Google Chrome. I uninstalled Chrome but it's still there. It runs out of Users\\AppData\LocalLow\UIVoice\ToolMedium\browser.exe and if I move it in safe mode, it just reappears there, and multiple instances run. Needless to say, if I kill it, it just runs again. Couldn't see anything in Task Scheduler, but found a couple of references to it in the Registry Editor: HKEY_CURRENT_USER/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/ HKEY_USERS/S-1-5-21-1685709306-872053864-2599010960-1002/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/ Maybe it's a legit process, but seems kind of strange. For the time being, I suspended the process and killed all of the child processes when I booted up the laptop. I used Security Essentials to scan the system periodically, but obviously it's not effective against at least one virus. I had the "real-time protection" turned off. Would it help if it were turned on and how much of a nuisance would it be? I wonder if there is a better alternative to Security Essentials. Over the years I've used multiple antivirus products at home and especially at work and was not very happy with any of them. Apparently, asking for software recommendations or comparisons is taboo here, but I will mention that I installed Malware Bytes and it was able to find and quarantine a bunch of suspicious files, at least some of which were truly infected, but when it scans the bogus security center update executables from Mesrosift Corporatien, it finds nothing wrong. Also, any thoughts on the browser.exe mystery? Neither MS Security Essentials nor Malware Bytes found anything wrong with that file. However, after I ran a Malware Bytes scan and quarantined everything it found suspicious and rebooted the laptop, the process did not run.
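
    (A practical aside for anyone auditing tasks like those bogus "Security Center Update" entries: you can dump every scheduled task and flag the ones whose action points into AppData\Roaming or AppData\LocalLow. The sketch below is illustrative only; it shells out to the built-in Windows schtasks tool and assumes an English-locale system, where the verbose CSV output uses the "TaskName" and "Task To Run" column headings.)

        import csv
        import io
        import subprocess

        # Dump every scheduled task in verbose CSV form using the built-in schtasks tool.
        output = subprocess.run(
            ["schtasks", "/Query", "/FO", "CSV", "/V"],
            capture_output=True, text=True, check=True
        ).stdout

        # Flag tasks whose action launches something under AppData\Roaming or
        # AppData\LocalLow -- the locations mentioned in the question above.
        for row in csv.DictReader(io.StringIO(output)):
            action = (row.get("Task To Run") or "").lower()
            if r"appdata\roaming" in action or r"appdata\locallow" in action:
                print(row.get("TaskName"), "->", row.get("Task To Run"))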

    Read the article

  • Reg Gets a Job at Red Gate (and what happens behind the scenes)

    - by red(at)work
    Mr Reg Gater works at one of Cambridge’s many high-tech companies. He doesn’t love his job, but he puts up with it because... well, it could be worse. Every day he drives to work around the Red Gate roundabout, wondering what his boss is going to blame him for today, and wondering if there could be a better job out there for him. By late morning he already feels like handing his notice in. He got the hacky look from his boss for being 5 minutes late, and then they ran out of tea. Again. He goes to the local sandwich shop for lunch, and picks up a Red Gate job menu and a Book of Red Gate while he’s waiting for his order. That night, he goes along to Cambridge Geek Nights and sees some very enthusiastic Red Gaters talking about the work they do; it sounds interesting and, of all things, fun. He takes a quick look at the job vacancies on the Red Gate website, and an hour later realises he’s still there – looking at videos, photos and people profiles. He especially likes the Red Gate’s Got Talent page, and is very impressed with Simon Johnson’s marathon time. He thinks that he’d quite like to work with such awesome people. It just so happens that Red Gate recently decided that they wanted to hire another hot shot team member. Behind the scenes, the wheels were set in motion: the recruitment team met with the hiring manager to understand exactly what they’re looking for, and to decide what interview tests to do, who will do the interviews, and to kick-start any interview training those people might need. Next up, a job description and job advert were written, and the job was put on the market. Reg applies, and his CV lands in the Recruitment team’s inbox and they open it up with eager anticipation that Reg could be the next awesome new starter. He looks good, and in a jiffy they’ve arranged an interview. Reg arrives for his interview, and is greeted by a smiley receptionist. She offers him a selection of drinks and he feels instantly relaxed. A couple of interviews and an assessment later, he gets a job offer. We make his day and he makes ours by accepting, and becoming one of the 60 new starters so far this year. Behind the scenes, things start moving all over again. The HR team arranges for a “Welcome” goodie box to be whisked out to him, prepares his contract, sends an email to Information Services (Or IS for short - we’ll come back to them), keeps in touch with Reg to make sure he knows what to expect on his first day, and of course asks him to fill in the all-important wiki questionnaire so his new colleagues can start to get to know him before he even joins. Meanwhile, the IS team see an email in SupportWorks from HR. They see that Reg will be starting in the sales team in a few days’ time, and they know exactly what to do. They pull out a new machine, and within minutes have used their automated deployment software to install every piece of software that a new recruit could ever need. They also check with Reg’s new manager to see if he has any special requirements that they could help with. Reg starts and is amazed to find a fully configured machine sitting on his desk, complete with stationery and all the other tools he’ll need to do his job. He feels even more cared for after he gets a workstation assessment, and realises he’d be comfier with an ergonomic keyboard and a footstool. They arrive minutes later, just like that. His manager starts him off on his induction and sales training. 
Along with job-specific training, he’ll also have a buddy to help him find his feet, and loads of pre-arranged demos and introductions. Reg settles in nicely, and is great at his job. He enjoys the canteen, and regularly eats one of the 40,000 meals provided each year. He gets used to the selection of teas that are available, develops a taste for champagne launch parties, and has his fair share of the 25,000 cups of coffee downed at Red Gate towers each year. He goes along to some Feel Good Fund events, and donates a little something to charity in exchange for a turn on the chocolate fountain. He’s looking a little scruffy, so he decides to get his hair cut in between meetings, just in time for the Red Gate birthday company photo. Reg starts a new project: identifying existing customers to up-sell to new bundles. He talks with the web team to generate lists of qualifying customers who haven’t recently been sent marketing emails, and sends emails out, using a new in-house developed tool to schedule follow-up calls in CRM for the same group. The customer responds, saying they’d like to upgrade but are having a licensing problem – Reg sends the issue to Support, and it gets routed to the web team. The team identifies a workaround, and the bug gets scheduled into the next maintenance release in a fortnight’s time (hey; they got lucky). With all the new stuff Reg is working on, he realises that he’d be way more efficient if he had a third monitor. He speaks to IS and they get him one - no argument. He also needs a test machine and then some extra memory. Done. He then thinks he needs an iPad, and goes to ask for one. He gets told to stop pushing his luck. Some time later, Reg’s wife has a baby, so Reg gets 2 weeks of paid paternity leave and a bunch of flowers sent to his house. He signs up to the childcare scheme so that he doesn’t have to pay National Insurance on the first £243 of his childcare. The accounts team makes it all happen seamlessly, as they did with his Give As You Earn payments, which come out of his wages and go straight to his favorite charity. Reg’s sales career is going well. He’s grateful for the help that he gets from the product support team. How do they answer all those 900-ish support calls so effortlessly each month? He’s impressed with the patches that are sent out to customers who find “interesting behavior” in their tools, and to the customers who just must have that new feature. A little later in his career at Red Gate, Reg decides that he’d like to learn about management. He goes on some management training specially customised for Red Gate, joins the Management Book Club, and gets together with other new managers to brainstorm how to get the most out of one to one meetings with his team. Reg decides to go for a game of Foosball to celebrate his good fortune with his team, and has to wait for Finance to finish. While he’s waiting, he reflects on the wonderful time he’s had at Red Gate. He can’t put his finger on what it is exactly, but he knows he’s on to a good thing. All of the stuff that happened to Reg didn’t just happen magically. We’ve got teams of people working relentlessly behind the scenes to make sure that everyone here is comfortable, safe, well fed and caffeinated to the max.

    Read the article

  • Microsoft and the open source community

    - by Charles Young
    For the last decade, I have repeatedly, in my imitable Microsoft fan boy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing.  In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible.  Many people interpreted this as all-out corporate opposition to open source licensing.  I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject.  In any case, individual attitudes of employees don't necessarily reflect a corporate stance.  The strongest opposition I've encountered has actually come from outside the company.  It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed. Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism. a)  A decade ago, Microsoft's big problem was not FOSS per se, or even with copyleft.  The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL.  The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared.  I suspect this is why they held out for a while from making Windows source code open to inspection.  Nowadays, as an MVP, I am positively encouraged to ask to see Windows source. b)  Microsoft has never opposed the open source community.  They have had problems with specific people and organisations in the FOSS community.  Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft.  In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying. c) Microsoft has never, to my knowledge, written off the FOSS model.  They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so.  One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source.  A decade ago, there was virtually no evidence of any such involvement.  However, that was a long time ago.  Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community.  For example, as well as making increasingly extensive use of Github, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects.  The total count may even be in the thousands now.  I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits.  
Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools. All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.   From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET framework which is released free of charge, but which is not open source.   However, in recent years, the number of libraries that constitute ASP.NET have grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.   Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries. Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge. Here at Solidsoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.

    Read the article

  • Five Ways Enterprise 2.0 Can Transform Your Business - Q&A from the Webcast

    - by [email protected]
    A few weeks ago, Vince Casarez and I presented with KMWorld on the Five Ways Enterprise 2.0 Can Transform Your Business. It was an enjoyable, interactive webcast in which Vince and I discussed the ways Enterprise 2.0 can transform your business and more importantly, highlighted key customer examples of how to do so. If you missed the webcast, you can catch a replay here. We had a lot of audience participation in some of the polls we conducted and in the Q&A session. We weren't able to address all of the questions during the broadcast, so we attempted to answer them here: Q: Which area within your firm focuses on Web 2.0? Meaning, do you find new departments developing just to manage the web 2.0 (Twitter, Facebook, etc.) user experience or are you structuring current departments? A: There are three distinct efforts within Oracle. The first is around delivery of these Web 2.0 services for enterprise deployments. This is the focus of the WebCenter team. The second effort is injecting these Web 2.0 services into use cases that drive the different enterprise applications. This effort is focused on how to manage these external services and bring them into a cohesive flow for marketing programs, customer care, and purchasing. The third effort is how we consume these services internally to enhance Oracle's business delivery. It leverages the technologies and use cases of the first two but also pushes the envelope with regards to future directions of these other two areas. Q: In a business, Web 2.0 is mostly like action logs. How can we leverage the official process practice versus the logs of a recent action? Example: a system configuration modified last night on a call out versus the official practice that everybody would use in the morning.A: The key thing to remember is that most Web 2.0 actions / activity streams today are based on collaboration and communication type actions. At least with public social sites like Facebook and Twitter. What we're delivering as part of the WebCenter Suite are not just these types of activities but also enterprise application activities. These enterprise application activities come from different application modules: purchasing, HR, order entry, sales opportunity, etc. The actions within these systems are normally tied to a business object or process: purchase order/customer, employee or department, customer and supplier, customer and product, respectively. Therefore, the activities or "logs" as you name them are able to be "typed" so that as a viewer, you can filter or decide to see only certain types of information. In your example, you could have a view that only showed you recent "configuration" changes and this could be right next to a view that showed off the items to be watched every morning. Q: It's great to hear about customers using the software but is there any plan for future webinars to show what the products/installs look like? That would be very helpful.A: We don't have a webinar planned to show off the install process. However, we have a viewlet that's posted on Oracle Technology Network. 
    You can see it here: http://www.oracle.com/technetwork/testcontent/wcs-install-098014.html And we've got excellent documentation that walks you through the steps here: http://download.oracle.com/docs/cd/E14571_01/install.1111/e12001/install.htm And there's a whole set of demos and examples of what WebCenter can do at this URL: http://www.oracle.com/technetwork/middleware/webcenter/release11-demos-097468.html Q: How do you anticipate managing metadata across the enterprise to make content findable? A: We need to first make sure we are all talking about the same thing when we use a word like "metadata". Here's why...  For a developer, metadata means information that describes key elements of the portal or application and what the portal or application can do. For content systems, metadata means key terms that provide a taxonomy or folksonomy about the information that is being indexed, ordered, and managed. For business intelligence systems, metadata means key terms that provide labels to groups of data that most non-mathematicians need to understand. And for SOA, metadata means labels for parts of the processes that business owners should understand that connect development terminology. There are also additional requirements for metadata to be available to the team building these new solutions as well as requirements to make this metadata available to the running system. These requirements are often separated by "design time" and "run time" respectively. So clearly, a general goal of managing metadata across the enterprise is very challenging. We've invested a huge amount of resources around Oracle Metadata Services (MDS) to be able to provide a more generic system for all of these elements. No other vendor has anything like this technology foundation in their products. This provides a huge benefit to our customers as they will now be able to find content, processes, people, and information from a common set of search interfaces with consistent enterprise wide results. Q: Can you give your definition of terms as to document and content, please? A: Content applies to a broad category of information from Word documents, presentations and reports through attachments to invoices and/or purchase orders. Content is essentially any type of digital asset including images, video, and voice. A document is just one type of content. Q: Do you have special integration tools to realize an interaction between UCM and WebCenter Spaces/Services? A: Yes, we've dedicated a whole team of engineers to exploit the key features of Oracle UCM within WebCenter.  While ensuring that WebCenter can connect to other non-Oracle systems, we've made sure that with the combined set of Oracle technology, no other solution can match the combined power and integration.  This is part of the Oracle Fusion Middleware strategy which is to provide best in class capabilities for Content and Portals.  When combined together, the synergy between the two products enables users to quickly add capabilities when they are needed.  For example, simple document sharing is part of the combined product offering, but if legal discovery or archiving is required, Oracle UCM product includes these capabilities that can be quickly added.  There's no need to move content around or add another system to support this, it's just a feature that gets turned on within Oracle UCM. 
Q: All customers have some interaction with their applications and have many older versions, how do you see some of these new Enterprise 2.0 capabilities adding value to existing enterprise application deployments?A: Just as Service Oriented Architectures allowed for connecting the processes of different applications systems to work together, there's a need for a similar approach with regards to these enterprise 2.0 capabilities. Oracle WebCenter is built on a core architecture that allows for SOA of these Enterprise 2.0 services so that one set of scalable services can be used and integrated directly into any type of application. In this way, users can get immediate value out of the Enterprise 2.0 capabilities without having to wait for the next major release or upgrade. These centrally managed WebCenter services expose a set of standard interfaces that make it extremely easy to add them into existing applications no matter what technology the application has been implemented. Q: We've heard about Oracle Next Generation applications called "Fusion Applications", can you tell me how all this works together?A: Oracle WebCenter powers the core collaboration and social computing services found within Fusion Applications. It is the core user experience technology for how all the application screens have been implemented. And the core concept of task flows allows for all the Fusion Applications modules to be adaptable and composable by business users and IT without needing to be a professional developer. Oracle WebCenter is at the heart of the new Fusion Applications. In addition, the same patterns and technologies are now being added to the existing applications including JD Edwards, Siebel, Peoplesoft, and eBusiness Suite. The core technology enables all these customers to have a much smoother upgrade path to Fusion Applications. They get immediate benefits of injecting new user interactions into their existing applications without having to completely move to Fusion Applications. And then when the time comes, their users will already be well versed in how the new capabilities work. Q: Does any of this work with non Oracle software? Other databases? Other application servers? etc.A: We have made sure that Oracle WebCenter delivers the broadest set of development choices so that no matter what technology you developers are using, WebCenter capabilities can be quickly and easily added to the site or application. In addition, we have certified Oracle WebCenter to run against non-Oracle databases like DB2 and SQLServer. We have stated plans for certification against MySQL as well. Later in CY 2011, Oracle will provide certification on non-Oracle application servers such as WebSphere and JBoss. Q: How do we balance User and IT requirements in regards to Enterprise 2.0 technologies?A: Wrong decisions are often made because employee knowledge is not tapped efficiently and opportunities to innovate are often missed because the right people do not work together. Collaboration amongst workers in the right business context is critical for success. While standalone Enterprise 2.0 technologies can improve collaboration for collaboration's sake, using social collaboration tools in the context of business applications and processes will improve business responsiveness and lead companies to a more competitive position. As these systems become more mission critical it is essential that they maintain the highest level of performance and availability while scaling to support larger communities. 
Q: What are the ways in which Enterprise 2.0 can improve business responsiveness?A: With a wide range of Enterprise 2.0 tools in the marketplace, CIOs need to deploy solutions that will meet the requirements from users as well as address the requirements from IT. Workers want a next-generation user experience that is personalized and aggregates their daily tools and tasks, while IT needs to ensure the solution is secure, scalable, flexible, reliable and easily integrated with existing systems. An open and integrated approach to deploying portals, content management, and collaboration can enhance your business by addressing both the needs of knowledge workers for better information and the IT mandate to conserve resources by simplifying, consolidating and centralizing infrastructure and administration.  

    Read the article

  • COPSSH RSA only authentication connection problem

    - by Siriss
    Hello all- I am trying to set up an RSA-authentication-only SSH/SFTP server. The SSH will be used primarily for RDC. Everything works just fine if I use password authentication. I am using Putty Key Generator to create the keys and I have pasted the key into the authorized_keys file and restarted the OpenSSH server. I am using FileZilla to test the SFTP connection as that is the most important part. For my tests I have created the keys without a passphrase. It will not work with a standard SSH connection either. It says "Server refused our key". I have recreated the key twice, double-checking with a guide on Google, and I am pretty sure I did it correctly. I load the key file into FileZilla under settings/SFTP and try to connect and I get the following error: Disconnected: No supported authentication methods available. I have been playing with the different settings all night and I cannot figure it out. Here is my sshd_config file: # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. #Port 22 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # Disable legacy (protocol version 1) support in the server for new # installations. In future the default will change to require explicit # activation of protocol 1 Protocol 2 # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 1024 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH #LogLevel INFO # Authentication: #LoginGraceTime 2m PermitRootLogin no #StrictModes yes #MaxAuthTries 6 #MaxSessions 10 RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! PasswordAuthentication no PermitEmptyPasswords no # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. 
UsePAM no #AllowAgentForwarding yes #AllowTcpForwarding yes #GatewayPorts no #X11Forwarding no #X11DisplayOffset 10 #X11UseLocalhost yes #PrintMotd yes #PrintLastLog yes #TCPKeepAlive yes UseLogin no #UsePrivilegeSeparation yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner none # override default of no subsystems Subsystem sftp /bin/sftp-server # Example of overriding settings on a per-user basis #Match User anoncvs # X11Forwarding no # AllowTcpForwarding no # ForceCommand cvs server Thank you so much for your help!
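
    (A note on the "Server refused our key" symptom: OpenSSH expects each authorized_keys entry to be a single line of the form "ssh-rsa AAAA... comment", which is what PuTTY Key Generator shows in the box at the top of its window; pasting the multi-line public-key export it offers for saving will be rejected. The snippet below is a rough, illustrative sanity check along those lines; the default file name and the restriction to ssh-rsa/ssh-dss key types are assumptions for the example.)

        import base64
        import sys

        # Rough sanity check for an OpenSSH authorized_keys file: every active line
        # should be a single "key-type base64-blob [comment]" entry, and the base64
        # blob should decode and embed the declared key type.
        def check_authorized_keys(path):
            with open(path) as f:
                for lineno, line in enumerate(f, 1):
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) < 2 or fields[0] not in ("ssh-rsa", "ssh-dss"):
                        print(f"line {lineno}: no leading key type -- looks wrapped or PuTTY-formatted")
                        continue
                    try:
                        blob = base64.b64decode(fields[1])
                    except Exception:
                        print(f"line {lineno}: key data is not valid base64")
                        continue
                    if fields[0].encode() in blob:
                        print(f"line {lineno}: single-line {fields[0]} key looks OK")
                    else:
                        print(f"line {lineno}: key blob does not match its declared type")

        if __name__ == "__main__":
            check_authorized_keys(sys.argv[1] if len(sys.argv) > 1 else "authorized_keys")

    If the key format checks out, the usual next suspects are the location of the authorized_keys file (it should sit in the .ssh directory of the home directory COPSSH assigned to the user) and the permissions on that file and directory.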

    Read the article

  • Agile Like Jazz

    - by Jeff Certain
    (I’ve been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that’s clearly not going to happen, reader beware!) I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don’t know who Sonny Rollins is, I don’t know how to explain the experience; if you know who he is, I don’t need to. Suffice it to say that he has been recording professionally for over 50 years, and helped create an entire genre of music. A true master by any definition. One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play… letting them take over the spotlight, to strut their stuff, to soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than a half-century. Maybe it has something to do with a kind of patience you learn when you’re on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you’re at that level of mastery, the other musicians are going to be good. Really good. Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were. All this took place in the same venue that is home to the L.A. Phil. Somehow, I can’t ever see the same kind of free-wheeling improvisation happening in that context. And, since I’m a geek, I started thinking about agility. Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It’s not about the mix of instruments. Other trios, quartets, and sextets use different mixes of instruments. New Orleans jazz tends towards trombones instead of sax; some prefer cornet or trumpet. But no matter what the choice of instruments, size matters. Team sizes are something I’ve been thinking about for a while. We’re on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don’t feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors. One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for the vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation. This sort of feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what inevitably happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily. 
These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where that section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are “glue,” if you will. The other issue that arises is due to the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% per team member is a good rule of thumb for maintaining communication. While this is a rule of thumb, it seems to imply that any team over about 6 people is going to become less agile simply because of the communications burden. Teams of ten or twelve seem like they fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There’s much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I’m calling it like I see it from the cheap seats.) And, there’s one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications. Somehow, the orchestral model doesn’t feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don’t get to perform free-wheeling solos. I’ve never heard of an orchestra getting together for a jam session. But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn’t have any bandwidth available. I’ve heard of teams where people simply don’t have the big picture, because there is too much communication overhead for everyone to be aware of everything that is happening on a project. I once heard Paul Rayner say something to the effect of “you have a process that is perfectly designed to give you exactly the results you have.” Given a choice, I want a process that’s much more like jazz than orchestral music. I want a process that doesn’t burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.
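
    (A purely illustrative aside on that rule of thumb: if you read "20% per team member" as roughly 20% of each person's time going to communication for every additional teammate, and set it next to the count of pairwise communication channels, n*(n-1)/2, a quick bit of arithmetic makes the quintet-versus-orchestra point on its own.)

        # Illustrative arithmetic only: one reading of the "20% per team member"
        # rule of thumb, next to the number of pairwise communication channels.
        for n in (3, 5, 6, 8, 12):
            overhead = 0.20 * (n - 1)      # rough share of each person's time spent communicating
            channels = n * (n - 1) // 2    # distinct person-to-person links in the team
            print(f"{n:2d} people: ~{overhead:.0%} of each person's time, {channels:2d} channels")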

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .Net web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added on a slew of great (and sometimes overdue) features. Where the platform was once just an endpoint to host a solution, developers today have tremendous flexibility and options. Organizations are building new solutions and offerings on the platform, and others have, or are in the process of, migrating existing applications out of their own data centers into the Azure cloud. Whether new application development or migrating legacy, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options. Azure Diagnostics Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high level steps are:   Step 1: Import the Diagnostics Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy. Step 2: Configure the Diagnostics (and multiple sub-steps) Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter, I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume it myself? Step 3: (Optional) Permanently store diagnostic data Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well. Step 4: (Optional) View stored diagnostic data Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based and they all just give you access to raw data, and very little charting or real-time intelligence. Just….. data. Never mind that one product seems to have gotten stale since a recent acquisition, and doesn’t even have screenshots!   So, let’s summarize: lots of diagnostics data is available, but think realistically. Think Dev Ops. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is this spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn’t tell you anything about where the problem is. The Better Way – Stackify Stackify supports application and server monitoring in real time, all through a great web interface. 
All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises. It’s a Windows Server (or Linux) in the cloud. It’s behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps. Step 1 Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a Powershell script to download / install Stackify.   Step 2 Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want and see the results instantly. WMI? It’s there Event Viewer? You’ve got it. File System Access? Yes, please! Would love to make sure my web.config is correct.   IIS / App Pool Info? Yep. You can even restart it. Running Services? All of them. Start and Stop them to your heart’s content. SQL Database access? You bet’cha. Alerts and Notification? Of course! You should know before your customers let you know. … and so much more.   Conclusion Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times where we need to get data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then have to find a friendlier way to consume it…. well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications, than to be writing code to monitor and diagnose.

    Read the article

  • Community Outreach - Where Should I Go

    - by Roger Brinkley
    A few days ago I was talking to a person new to community development and they asked me what guidelines I used to determine the worthiness of a particular event. After our conversation was over I thought about it a little bit more and figured out there are three ways to determine if any event (be it conference, blog, podcast or other social media) is worth doing: Transferability, Multiplication, and Impact. Transferability - Is what I have to say useful to the people that are going to hear it. For instance, consider a company that has a product offering that can connect up using a number of languages like Scala, Groovy or Java. Sending a Scala expert to talk about Scala and the product is not transferable to a Java User Group, but a Java expert doing the same talk with a Java slant is. Similarly, talking about JavaFX to any Java User Group meeting in Brazil was pretty much a wasted effort until it was open sourced. Once it was open sourced it was well received. You can also look at transferability in relation to the subject matter that you're dealing with. How transferable is a presentation that I create. Can I, or a technical writer on the staff, turn it into some technical document. Could it be converted into some type of screen cast. If we have a regular podcast can we make a reference to the document, catch the high points or turn it into an interview. Is there a way of using this in the sales group. In other words is the document purely one dimensional or can it be re-purposed in other forms. Multiplication - On every trip I'm looking for 2 to 5 solid connections that I can make with developers. These are long term connections, because I know that once that relationship is established it will lead to another 2 - 5 from that connection and within a couple of years we're talking about some 100 connections from just one developer. For instance, when I was working on JavaHelp in 2000 I hired a science teacher with a programming background. We've developed a very tight relationship over the years though we rarely see each other more than once a year. But at this JavaOne, one of his employees came up to me and said, "Richard (Rick Hard in Czech) told me to tell you that he couldn't make it to JavaOne this year but if I saw you to tell you hi". Another example is from my Mobile & Embedded days in Brazil. On our very first FISL trip about 5 years ago there were two university students that had created a project called "Marge". Marge was a Bluetooth framework that made connecting bluetooth devices easier. I invited them to a "Sun" dinner that evening. Originally they were planning on leaving that afternoon, but they changed their plans recognizing the opportunity. Their eyes were as big as saucers when they realized the level of engineers at the meeting. They went home and started a JUG in Florianópolis that we've visited more than a couple of times. One of them went to work for a Brazilian government lab like Berkeley Labs, the MIT labs, the Johns Hopkins Applied Physics Lab or Lincoln Labs in the US. That presented us with an opportunity to show Embedded Java as a possibility for some of the work they were doing there. Impact - The final criterion is how life-changing what I'm going to say will be to the individuals I'm reaching. A t-shirt is just a token, but when I reach down and tug at their developer hearts then I know I've succeeded. I'll never forget one time we flew all night to reach João Pessoa in northern Brazil. 
We arrived at 2am and went immediately to our hotel, only to be woken up at 6am to travel 2 hours by car to the presentation hall. When we arrived we were totally exhausted. Outside the facility there were 500 people lined up to hear 6 speakers for the day. That itself was uplifting.  I delivered one of my favorite talks on "I have passion". It was a talk on golf and embedded Java development, "Find your passion". When we finished a couple of first year students came up to me and said how much my talk had inspired them. FISL is another great example. I had been there about 4 years in a row. FISL is a very young group of developers so capturing their attention is important. Several of the students will come back 2 or 3 years later and ask me questions about research or jobs. And then there's Louis. Louis is one of my favorite Brazilians. I can only describe him as a big Brazilian teddy bear. I see him every year at FISL. He works primarily in Java EE but he's attended every single one of my talks over the last 4 years. I can't tell you why, but he always greets me and gives me a hug. For some reason I've had a real impact. And of course when it comes to impact you don't just measure a presentation but every single interaction you have at an event. It's the hallway conversations, the booth conversations, but more importantly it's the conversations at dinner tables or in the cars when you're getting transported to an event. There's a good story that illustrates this. Last year in the spring I was traveling to Goiânia in Brazil. I've been there many times and leaders there know me well. One young man has picked me up at the airport on more than one occasion. We were going out to dinner one evening and he brought his girlfriend along. One thing led to another and I eventually asked him, in front of her, "Why haven't you asked her to marry you?" There were all kinds of excuses and she just looked at him and smiled. When I came back in December for JavaOne he came and sought me out. "I just want to tell you that I thought a lot about what you said, and I asked her to marry me. We're getting married next Spring." Sometimes just one presentation is all it takes to make an impact. Other times it takes years. Some impacts are directly related to the company and some are more personal in nature. It doesn't matter which it is because it's having the impact that matters.

    Read the article

  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and took advantage of concurrent statistics gathering, incremental statistics, and the not so well known “obj_filter_list” parameter in the DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outline below with “obj_filter_list” still applies, even when concurrent statistics gathering and/or incremental statistics gathering is disabled. The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partitions, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on 5 partitioned tables as quickly as possible to ensure that it all finished before morning. Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of these sessions could run concurrently. Since each table has over one thousand partitions, that would definitely be a daunting task and would most likely keep my colleague up all night! In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and multiple (sub)partitions within a table concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn’t just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTIONS parameter set to ‘GATHER STALE’. Unfortunately the statistics on the 5 partitioned tables were not stale and enabling incremental statistics does not mark the existing statistics stale. Plus how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us about the advantage of the “obj_filter_list” parameter in the DBMS_STATS.GATHER_SCHEMA_STATS procedure. The “obj_filter_list” parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB. 
Each entry in the collection has 5 fields: the schema name or the object owner, the object type (i.e., ‘TABLE’ or ‘INDEX’), object name, partition name, and subpartition name. You don't have to specify all five fields for each entry. Empty fields in an entry are treated as wildcard fields (similar to the ‘%’ character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object is qualified for statistics gathering as long as it satisfies the filter conditions in one entry. You first must create the collection of objects, and then gather statistics for the specified collection. It’s probably easier to explain this with an example. I’m using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague’s scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach. Step 0. Delete the statistics on the tables in the SH schema. Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level. Step 2. Enable incremental statistics for the 5 partitioned tables. Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command. Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command. Step 4. Confirm statistics were gathered on the 5 partitioned tables. Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := ‘SH’; You will get the same result: since you have specified gather_schema_stats, there is no need to further specify ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased if they are not double quoted. So in the above example, it is OK to use Obj_filter_lst(1).objname := ‘sales’;. However if you have a table called ‘MyTab’ instead of ‘MYTAB’, then you need to specify Obj_filter_lst(1).objname := ‘”MyTab”’; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables, with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here. 
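
    For readers who want to see the shape of Steps 1 through 3 in one place, here is a rough sketch driven from Python with the cx_Oracle driver. The connection details are placeholders, the SH schema and the five table names are taken from the example above, and the PL/SQL block inside the string is the part that actually matters; treat it as an outline of the approach rather than the exact script referenced in the post.

        import cx_Oracle

        # Placeholder connection details -- adjust for your own environment.
        conn = cx_Oracle.connect("system", "password", "dbhost/orcl")
        cur = conn.cursor()

        # Step 1: enable concurrent statistics gathering (a global preference).
        # Step 2: enable incremental statistics on the five partitioned tables.
        # Step 3: gather schema stats restricted to those tables via obj_filter_list,
        #         passing objlist as well (needed in 11.2 because of bug 14539274).
        cur.execute("""
            DECLARE
              filter_lst DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
              obj_lst    DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
            BEGIN
              DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'TRUE');

              DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES',  'INCREMENTAL', 'TRUE');
              DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES2', 'INCREMENTAL', 'TRUE');
              DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES3', 'INCREMENTAL', 'TRUE');
              DBMS_STATS.SET_TABLE_PREFS('SH', 'COSTS',  'INCREMENTAL', 'TRUE');
              DBMS_STATS.SET_TABLE_PREFS('SH', 'COSTS2', 'INCREMENTAL', 'TRUE');

              filter_lst.EXTEND(5);
              filter_lst(1).ownname := 'SH'; filter_lst(1).objname := 'SALES';
              filter_lst(2).ownname := 'SH'; filter_lst(2).objname := 'SALES2';
              filter_lst(3).ownname := 'SH'; filter_lst(3).objname := 'SALES3';
              filter_lst(4).ownname := 'SH'; filter_lst(4).objname := 'COSTS';
              filter_lst(5).ownname := 'SH'; filter_lst(5).objname := 'COSTS2';

              DBMS_STATS.GATHER_SCHEMA_STATS(
                ownname         => 'SH',
                obj_filter_list => filter_lst,
                objlist         => obj_lst);
            END;
        """)
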
+Maria Colgan

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 Instance - m1.medium Ubuntu 12.04 Apache 2.2.22 - Running a Drupal Site Using MySQL DB Server RAM info: ~$ free -gt total used free shared buffers cached Mem: 3 1 2 0 0 0 -/+ buffers/cache: 0 2 Swap: 0 0 0 Total: 3 1 2 Hard drive info: Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 4.7G 2.9G 62% / udev 1.9G 8.0K 1.9G 1% /dev tmpfs 751M 180K 750M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/xvdb 394G 199M 374G 1% /mnt
    The problem: About two days ago the site started failing because the MySQL server was shut down by Apache with the following message: kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal kernel: [2963686.169294] init: mysql main process ended, respawning That states that the VM was occupying 0.9GB, but my RAM has 2GB free, so 1GB was still left free. I understand that in Linux applications can allocate more memory than is physically available. I don't know if this is the problem; it's the first time that it has started to happen. Obviously, the MySQL server tries to restart, but there's no memory for it apparently and it won't restart. Here is its error log: Plugin 'FEDERATED' is disabled. The InnoDB memory heap is disabled Mutexes and rw_locks use GCC atomic builtins Compressed tables use zlib 1.2.3.4 Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 Completed initialization of buffer pool Fatal error: cannot allocate memory for the buffer pool Plugin 'InnoDB' init function returned error. Plugin 'InnoDB' registration as a STORAGE ENGINE failed. Unknown/unsupported storage engine: InnoDB [ERROR] Aborting [Note] /usr/sbin/mysqld: Shutdown complete I simply restarted the MySQL service. About two hours later it happened again. I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf, so I went to check it out. It was set at 150. I decided to drop it down to 60. As so: <IfModule mpm_prefork_module> ... MaxClients 60 </IfModule> <IfModule mpm_worker_module> ... MaxClients 60 </IfModule> <IfModule mpm_event_module> ... MaxClients 60 </IfModule> Once I did that, I restarted the apache2 service and it all went smoothly for 3/4 of a day, since at night the MySQL service shut down once again; but this time it wasn't killed by the Apache2 service. Instead it called the OOM-Killer with the following message: kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0 kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal Now I'm out of ideas. Some articles state that the ideal thing to do is to change the kernel behaviour with the following (include it in the file /etc/sysctl.conf): vm.overcommit_memory = 2 vm.overcommit_ratio = 80 So no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator; I have basic knowledge. Thanks a bunch in advance.
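    For what it's worth, if I do go the overcommit route, this is how I would apply the settings quoted above (a sketch only; whether those values actually suit this workload is exactly what I'm unsure about, so please say so if they don't):

    # Append the overcommit settings mentioned above and reload them without a reboot.
    cat <<'EOF' | sudo tee -a /etc/sysctl.conf
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 80
    EOF
    sudo sysctl -p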

    Read the article

  • VMware vSphere cluster design for site redundancy

    - by Stefan Radovanovici
    I have a question about the best design for site redundancy when using vSphere clusters. A bit of background info about our situation first though. We are a medium-sized company with two main offices, located in different countries. Our networks are linked by a Layer2 150Mbps leased line which is currently underused. We have a variety of services running for internal use within the company, some on physical servers and some on existing vSphere clusters. In our department we also run several services (almost all running under various forms of Linux) like NTP, Syslog, jump servers, monitoring servers and so on. We now have the requirement that those servers need to be redundant within each location (which they are not at the moment) and also site redundant (which they are to some extent; the servers are duplicated in the 2nd location with configurations kept in sync via various methods at the application layer). There is no SAN available for us, at least not something that we can use at the moment. Cost is also an issue. While we do have some budget available for this, we can't afford to buy SANs for both locations for example. I looked at the VSA feature and it seems that this could be something for us, but I am unsure how to solve the site-redundancy requirement. At the moment, for testing purposes, I am setting up a vSphere 5 lab with VSA on two ESXi hosts. I am currently using the Essentials Plus kit with VSA license, which allows me to build a VSA cluster on up to 3 hosts, together with a vCenter license to manage them. The hosts each have two dual-port network cards and two 600GB drives, running in RAID1. Hardware-wise this will be enough for us to run all the services we need as VMs and will provide redundancy within the site. At the moment I see only two options to have site redundancy: 1) build an identical VSA cluster in the second location and keep the various services sync'ed at the application layer (database sync, rsync and so on); 2) simply move one of the hosts from the existing cluster to the second location, basically having the VSA cluster span the 150Mbps link between the sites. I would very much prefer the second option but I am unsure how well it'll work, if it can work at all. Technically it should: we can span the needed VLANs across the leased line and have them available in the second location. The advantage would be that we don't need to worry at all about sync'ing databases and the like. But I have the feeling that the bandwidth will not be enough, and I have no way of knowing how much traffic the VSA cluster will generate between the hosts. I realize that this will most likely depend on the individual usage of the VMs but still, I have no idea how VSA replicates data between the ESXi hosts. Are these my only options or can my goals be achieved in some other way? Is there perhaps a way to have some sort of "cold standby" cluster in the second location where the VMs would be sync'ed once per night from the main location? The idea is that in case the first site becomes unavailable, we would be able to bring all those VMs online there. We would be ok with the data being 1 day old. Any answers are appreciated. Best regards, Stefan

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series. Right-Time Revolution Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn’t appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, but it’s also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That’s what “right-time retail” means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24hr society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day as the dial-up modem could transfer the data. That was soon replaced with trickle-polling, when sale transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed the constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data. 
Nordstrom has 150 million SKU/Store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos does 1.78 billion calculations executed each day for replenishment planning (AIP). These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400rpm with PCs stepping up to 7,200rpm and servers hitting 15,000rpm. But the platters can only spin so fast, so to squeeze more performance we’ve had to rely on things like disk striping. Then solid state drives (SSDs) were introduced and prices continue to drop. (Augmenting your harddrive with a SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you’d go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston’s Fenway would hold 370,000 fans. The Exa-family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won’t be able to take advantage of the faster networks and storage. Enter what Harvard calls “The Sexiest Job of the 21st Century” – the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad-hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I’ll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • Vista 64-bit, DISK BOOT FAILURE

    - by weka
    So I have this Acer Aspire AX3200-U3600A with Windows Vista (64-bit). Every night I turn it off and turn it back on in the morning. Around three weeks ago, I did a fresh factory reimage. Good as new. Then around two days ago, when I turned it on, I noticed it was running extremely slow. As in, it would often freeze up while I had multiple applications open when it usually never froze up. So I decided to restart my computer. Big mistake. My computer froze right after I clicked shut-down. I waited a while. Nothing. Waited some minutes. Nope. I decided to shut it down by pressing the power button. Here is where the problems begin. When I turned it back on, I saw the Windows logo and loading bar and then it loaded to black. I turned it off again forcefully by power button and then once more... then I got: AMD Data Change... Update New Data to DMI! then later the screen clears and I get: AHCI Option ROM BIOS Revision: 01.05.92 Date: 02-19-2008 Copyright (c) 2006-2008 Phoenix Technologies, LTD Port 01: Reset Port Error!! Port 02: then the screen clears again but this time, this loads from the bottom: Nvidia Boot Agent 249.0542 (copyright stuff... blah blah) PXE-E61: Media test failure, check cable. PXE-M0F: Exiting Nvidia Boot Agent DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER. So I try to go into Safe Mode. Well, first of all it doesn't load as fast. After it loads disk.sys from windows/drivers, it will wait a while (2-3 mins) THEN load. However it loads the Acer eRecovery Management Tool. I have three options: Reset computer to factory default, Restore computer from user's backup, or Exit. However, the top two options are gray and disabled whereas the Exit is in blue and definitely clickable. So obviously safe mode is not there... A strong thing to note: In the beginning when all of this started, I did a Boot Windows Normal from pressing f8 and I got to my desktop! It logged me in. I could see the icons on my files. However my desktop was extremely slow, as in when I clicked on the Start menu, it would wait a while, then load up the menu with JUST the gradient, no text or icons... so as you can see... it saw my HDD? Also, before anyone says, I have NO USB plugged in. My mouse and keyboard are not USB inputs, I assure you. And this came without a recovery CD, AND when I went into BIOS to change the BOOT ORDER, I did NOT see a CD-ROM option. And when I tried pressing ALT+F10 to get into Acer eRecovery Management, the top two options were disabled as well. But sometimes on start-up, I get: Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removeable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer. Status: 0xc00000e9 Info: An unexpected I/O error has occured. Then I tried Last Known Good Configuration Settings, that gives me a BSOD. What should I do?

    Read the article

  • Trying to prevent Windows from hibernating/sleeping automatically

    - by user328821
    My Dell XPS 8700 (Win 7) suddenly began putting itself to sleep at 6pm daily, even if I'm typing. I don't know what caused this to occur, except possibly a windows update that took place in the middle of the night. I initially went into settings for power and saw 2 plans set up, one from Dell and the other window's Power saver plan. I set both to never for sleep and hibernate yet it still occurred. I have current drivers and a fairly new UPS that has software to set to shutdown only after power loss. Dell is of little help, can anyone point me in the right direction? I did do the powerdfg -energy program and came up with this: Power Efficiency Diagnostics Report Scan Time 2014-05-08T19:21:48Z Scan Duration 60 seconds System Manufacturer Dell Inc. System Product Name XPS 8700 BIOS Date 08/23/2013 BIOS Version A04 OS Build 7601 Platform Role PlatformRoleDesktop Plugged In true Process Count 115 Thread Count 1631 Report GUID {097caf99-039b-44c3-b154-d797bfbfdfcc} Analysis Results Errors Power Policy:Sleep timeout is disabled (Plugged In) The computer is not configured to automatically sleep after a period of inactivity. System Availability Requests:System Required Request The device or driver has made a request to prevent the system from automatically entering sleep. Requesting Driver Instance HDAUDIO\FUNC_01&VEN_10EC&DEV_0899&SUBSYS_102805B7&REV_1000\4&220b1bbc&0&0001 Requesting Driver Device Realtek High Definition Audio CPU Utilization:Processor utilization is high The average processor utilization during the trace was high. The system will consume less power when the average processor utilization is very low. Review processor utilization for individual processes to determine which applications and services contribute the most to total processor utilization. Average Utilization (%) 9.48 Warnings Platform Timer Resolution:Platform Timer Resolution The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations. Current Timer Resolution (100ns units) 10000 Maximum Timer Period (100ns units) 156001 Platform Timer Resolution:Outstanding Kernel Timer Request A kernel component or device driver has requested a timer resolution smaller than the platform maximum timer resolution. Requested Period 10000 Request Count 2 Platform Timer Resolution:Outstanding Timer Request A program or service has requested a timer resolution smaller than the platform maximum timer resolution. Requested Period 10000 Requesting Process ID 8672 Requesting Process Path \Device\HarddiskVolume3\Program Files (x86)\Mozilla Firefox\firefox.exe Platform Timer Resolution:Outstanding Timer Request A program or service has requested a timer resolution smaller than the platform maximum timer resolution. Requested Period 100000 Requesting Process ID 1212 Requesting Process Path \Device\HarddiskVolume3\Windows\System32\svchost.exe Power Policy:802.11 Radio Power Policy is Maximum Performance (Plugged In) The current power policy for 802.11-compatible wireless network adapters is not configured to use low-power modes. CPU Utilization:Individual process with significant processor utilization. This process is responsible for a significant portion of the total processor utilization recorded during the trace. 
Process Name audiodg.exe PID 1304 Average Utilization (%) 4.73 Module Average Module Utilization (%) \Device\HarddiskVolume3\Windows\System32\msvcrt.dll 1.88 \Device\HarddiskVolume3\Windows\System32\MaxxAudioAPO5064.dll 1.77 \Device\HarddiskVolume3\Windows\System32\AudioEng.dll 0.80 CPU Utilization:Individual process with significant processor utilization. This process is responsible for a significant portion of the total processor utilization recorded during the trace. Process Name thunderbird.exe PID 6036 Average Utilization (%) 0.35 Module Average Module Utilization (%) \Device\HarddiskVolume3\Program Files (x86)\Mozilla Thunderbird\xul.dll 0.16 \Device\HarddiskVolume3\Program Files (x86)\Mozilla Thunderbird\mozjs.dll 0.05 \SystemRoot\System32\win32k.sys 0.03 CPU Utilization:Individual process with significant processor utilization. This process is responsible for a significant portion of the total processor utilization recorded during the trace. Process Name dwm.exe PID 1340 Average Utilization (%) 0.25 Module Average Module Utilization (%) \Device\HarddiskVolume3\Windows\System32\dwmcore.dll 0.08 \Device\HarddiskVolume3\Windows\System32\nvwgf2umx.dll 0.05 \SystemRoot\system32\ntoskrnl.exe 0.03 CPU Utilization:Individual process with significant processor utilization. This process is responsible for a significant portion of the total processor utilization recorded during the trace.

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they had realised this the nightly backup was old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However DBCC CHECKDB fails. Also the SQL Server backup task fails, Copy Database fails, Export Data Wizard fails. However it seems all the data can be read from the tables (i.e. using bcp etc.) Another observation I have made is that the Transaction Log is nearly double the size of the Database. (Does that mean all the changes aren't being written to the MDF?) What would be the best plan of attack to get the database to a state where backups are working and the data is safe? Take the database offline and use the MDF/LDF to somehow create the database on another SQL server? Export the data from the database using bcp. Create the database (use the Generate Scripts function on the corrupt db to create the schema on the new db) on another SQL server and use bcp again to import the data. Some other option that is the right course of action in this situation? The IT manager says the data is safe because if the server fails, the data can be restored from the mdf/ldf. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal raid array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions on how SQL Server operates. I'm the application developer and have been called to help (as it seems IT know less about SQL Server than I do). Many Thanks, Amit Results of DBCC CHECKDB: Msg 1823, Level 16, State 2, Line 1 A database snapshot cannot be created because it failed to start. Msg 7928, Level 16, State 1, Line 1 The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline. Msg 5030, Level 16, State 12, Line 1 The database could not be exclusively locked to perform the operation. Msg 7926, Level 16, State 1, Line 1 Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details. Msg 823, Level 24, State 3, Line 1 The operating system returned error 1(error not found) to SQL Server during a write at offset 0x00000674706000 in file 'G:\AX40_Dynamics_Live.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
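    For the nightly bcp failsafe I have in mind, this is roughly the shape of it. It is only a sketch: the server name, the dbo schema and the output path are placeholders, and while the bcp/sqlcmd invocations themselves are the portable part, the surrounding loop is written as a shell script purely for brevity (the equivalent would be a batch file on the Windows box).

    #!/bin/bash
    # Sketch: dump every user table in native format each night.
    SERVER="PRODSQL01"                     # placeholder
    DB="AX40_Dynamics_Live"
    OUT="/backup/bcp/$(date +%Y%m%d)"      # placeholder path
    mkdir -p "$OUT"
    sqlcmd -S "$SERVER" -d "$DB" -E -h -1 -W -Q "SET NOCOUNT ON; SELECT name FROM sys.tables" |
    while read -r table; do
      [ -z "$table" ] && continue
      bcp "$DB.dbo.$table" out "$OUT/$table.dat" -S "$SERVER" -T -n   # assumes the dbo schema
    done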

    Read the article

  • My 3D Games crashing in these scenarios

    - by desaivv
    I have a situation here which I am unable to solve. I bought a PC in March last year; here are my specs: Intel Core i3 550 @ 3GHz 4 GB RAM @400 MHz XFX GeForce 9500 GT graphics card 1GB @550MHz 500 GB HDD Lately as soon as I load my save game of Skyrim, it crashes. I have been playing Skyrim since I joined the Gaming.SE site. Crashes as in the entire scene gets red lines. I can not ALT + TAB back or even CTRL + ALT + DEL either. My only recourse is a hard reset via the power button. Can not take a screen shot either. I have the latest Forceware 296.10 drivers also. This has been happening for the last 2 weeks. I always use Driver Sweeper to clean my old drivers, since that is what XFXForce recommends before installing new drivers. I installed MSI Afterburner lately to see my GPU temperature. My GPU is default, never over-clocked it. In MSI's Afterburner, I can not adjust fan speed. It is greyed out. Also in settings there is no fan tab. With normal Internet browsing it stays at 51 C. Ran Memtest86 overnight with level 11. Took about 13 hours, but no errors in my RAM. I even re-installed my OS, with just the 296 drivers. The fan for the GPU does come on. I can play Diablo 2. I can not get past Warcraft 3's menu selection. There WAS some dust in my machine, but I always try and keep everything clean, since in my home town dust is an issue. I always keep my entire PC cabinet cool. My friend came over with his functioning graphics card; we bought our PCs at the same time with the exact same specifications. His card did not work either. Same problem with the scene freezing with red lines. I did do my research before posting here. That is how I was able to learn about MSI Afterburner, Driver Sweeper, SpeedFan etc. I followed posts on Tom's Hardware too regarding people that had similar problems. One person suggested baking the card in an oven, and it reportedly worked for those who followed the suggestion. Since I bought it, played Diablo 2 for months, Starcraft 2 campaign for months and Skyrim recently for months. Bought ME3 also. I am at my wits' end. I do not know what else to do. I can go out and buy a card, but my friend's card did not work either. I can use the machine for Eclipse or VS2010 development just fine. Just not with 3D gaming. I originally posted this question on Gaming.SE, but I was directed here. I have browsed the SU database for my problem and found this, this, and this. But none of these cover my question. My machine is only one year old. Can some experienced superuser(s) shed some light on this scenario? Is it a 3D graphics card problem? Will a brand new card work? What else can I try to pinpoint the problem? Can it be the Motherboard? Thanks.

    Read the article

  • What is causing the unusual high load average and IOwait?

    - by James
    I noticed on Tuesday night of last week, the load average went up sharply and it seemed abnormal since the traffic is small. Usually, the numbers usually average around .40 or lower and my server stuff (mysql, php and apache) are optimized. I noticed that the IOWait is unusually high even though the processes is barely using any CPU. top - 01:44:39 up 1 day, 21:13, 1 user, load average: 1.41, 1.09, 0.86 Tasks: 60 total, 1 running, 59 sleeping, 0 stopped, 0 zombie Cpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu4 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu6 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu7 : 0.0%us, 0.0%sy, 0.0%ni, 91.5%id, 8.5%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1048576k total, 331944k used, 716632k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 2468 1376 1140 S 0 0.1 0:00.92 init 1656 root 15 0 13652 5212 664 S 0 0.5 0:00.00 apache2 9323 root 18 0 13652 5212 664 S 0 0.5 0:00.00 apache2 10079 root 18 0 3972 1248 972 S 0 0.1 0:00.00 su 10080 root 15 0 4612 1956 1448 S 0 0.2 0:00.01 bash 11298 root 15 0 13652 5212 664 S 0 0.5 0:00.00 apache2 11778 chikorit 15 0 2344 1092 884 S 0 0.1 0:00.05 top 15384 root 18 0 17544 13m 1568 S 0 1.3 0:02.28 miniserv.pl 15585 root 15 0 8280 2736 2168 S 0 0.3 0:00.02 sshd 15608 chikorit 15 0 8280 1436 860 S 0 0.1 0:00.02 sshd Here is the VMStat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 1 0 0 768644 0 0 0 0 14 23 0 10 1 0 99 0 IOStat - Nothing unusal Total DISK READ: 67.13 K/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND 19496 be/4 chikorit 11.85 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19501 be/4 mysql 3.95 K/s 0.00 B/s 0.00 % 0.00 % mysqld 19568 be/4 chikorit 11.85 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19569 be/4 chikorit 11.85 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19570 be/4 chikorit 11.85 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19571 be/4 chikorit 7.90 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 19573 be/4 chikorit 7.90 K/s 0.00 B/s 0.00 % 0.00 % apache2 -k start 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init 11778 be/4 chikorit 0.00 B/s 0.00 B/s 0.00 % 0.00 % top 19470 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 0.00 % mysqld Load Average Chart - http://i.stack.imgur.com/kYsD0.png I want to be sure if this is not a MySQL problem before making sure. Also, this is a Ubuntu 10.04 LTS Server on OpenVZ. 
Edit: This will probably give a good picture on the IO Wait top - 22:12:22 up 17:41, 1 user, load average: 1.10, 1.09, 0.93 Tasks: 33 total, 1 running, 32 sleeping, 0 stopped, 0 zombie Cpu(s): 0.6%us, 0.2%sy, 0.0%ni, 89.0%id, 10.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1048576k total, 260708k used, 787868k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 2468 1376 1140 S 0 0.1 0:00.88 init 5849 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 8063 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 9732 root 16 0 8280 2728 2168 S 0 0.3 0:00.02 sshd 9746 chikorit 18 0 8412 1444 864 S 0 0.1 0:01.10 sshd 9747 chikorit 18 0 4576 1960 1488 S 0 0.2 0:00.24 bash 13706 chikorit 15 0 2344 1088 884 R 0 0.1 0:00.03 top 15745 chikorit 15 0 12968 5108 1280 S 0 0.5 0:00.00 apache2 15751 chikorit 15 0 72184 25m 18m S 0 2.5 0:00.37 php5-fpm 15790 chikorit 18 0 12472 4640 1192 S 0 0.4 0:00.00 apache2 15797 chikorit 15 0 72888 23m 16m S 0 2.3 0:00.06 php5-fpm 16038 root 15 0 67772 2848 592 D 0 0.3 0:00.00 php5-fpm 16309 syslog 18 0 24084 1316 992 S 0 0.1 0:00.07 rsyslogd 16316 root 15 0 5472 908 500 S 0 0.1 0:00.00 sshd 16326 root 15 0 2304 908 712 S 0 0.1 0:00.02 cron 17464 root 15 0 10252 7560 856 D 0 0.7 0:01.88 psad 17466 root 18 0 1684 276 208 S 0 0.0 0:00.31 psadwatchd 17559 root 18 0 11444 2020 732 S 0 0.2 0:00.47 sendmail-mta 17688 root 15 0 10252 5388 1136 S 0 0.5 0:03.81 python 17752 teamspea 19 0 44648 7308 4676 S 0 0.7 1:09.70 ts3server_linux 18098 root 15 0 12336 6380 3032 S 0 0.6 0:00.47 apache2 18099 chikorit 18 0 10368 2536 464 S 0 0.2 0:00.00 apache2 18120 ntp 15 0 4336 1316 984 S 0 0.1 0:00.87 ntpd 18379 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 18387 mysql 15 0 62796 36m 5864 S 0 3.6 1:43.26 mysqld 19584 root 15 0 12336 4028 668 S 0 0.4 0:00.02 apache2 22498 root 16 0 12336 4028 668 S 0 0.4 0:00.00 apache2 24260 root 15 0 67772 3612 1356 S 0 0.3 0:00.22 php5-fpm 27712 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 27730 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 30343 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2 30366 root 15 0 12336 4028 668 S 0 0.4 0:00.00 apache2
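    In case it helps narrow this down, this is the small watcher I plan to leave running so the next spike can be tied to a specific process (a sketch only; pidstat comes from the sysstat package, and on an OpenVZ container some device-level counters may not be visible):

    #!/bin/bash
    # Log per-process and per-device I/O once a minute.
    LOG=/var/log/iowait-watch.log
    while true; do
      date >> "$LOG"
      pidstat -d 1 5 >> "$LOG"     # per-process KB read/written per second
      iostat -dx 1 5 >> "$LOG"     # per-device utilisation and await
      sleep 60
    done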

    Read the article

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. Then last night. Today it has happened three times so far. The timeline is fairly predictable when it happens: Trans log backups. Login failure for "user2". Minidump. Another minidump for the scheduler. Repeated 17883 events. Server fails little by little until it won't accept any requests. Reboot is all that gets us going again (a band-aid). Interestingly, though, the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management Studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of a reboot. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883's again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883's. Our entire tech team here is scratching their heads... some ideas we're kicking around: An MS Cumulative Update hit around the same time as when we first had a problem. Since then, we've rolled it back. Maybe it didn't roll back all the way. The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. The problem with this is that there isn't significant CPU usage. At any rate, we're not ruling a SQL 2005 bug out at all. Maybe we added one too many import processes and have reached our limit on this box (hard to believe). Looking at SQLDUMP0151.log at the time of one of the crashes, there are some "login failures" and then there are two stack dumps: 1st a normal stack dump, 2nd for a scheduler dump. Here's a snippet: (sorry for the lack of line breaks) 2009-11-10 11:59:14.95 spid63 Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required. 2009-11-10 11:59:15.09 spid63 Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required. 2009-11-10 12:02:33.24 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:02:33.24 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:08:21.12 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:08:21.12 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:13:49.38 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:13:49.38 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:15:16.88 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:15:16.88 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:18:24.41 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:18:24.41 Logon Login failed for user 'standard_user2'. 
[CLIENT: 50.36.172.101] 2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5' 2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt 2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process. 2009-11-10 12:18:39.02 spid111 * ***************************************************************************** 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP: 2009-11-10 12:18:39.02 spid111 * 11/10/09 12:18:39 spid 111 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * Exception Address = 0159D56F Module(sqlservr+0059D56F) 2009-11-10 12:18:39.02 spid111 * Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION 2009-11-10 12:18:39.02 spid111 * Access Violation occurred writing address 00000000 2009-11-10 12:18:39.02 spid111 * Input Buffer 138 bytes - 2009-11-10 12:18:39.02 spid111 * " N R S C _ P T A 22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00 2009-11-10 12:18:39.02 spid111 * C _ Q A . d b o . 43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00 2009-11-10 12:18:39.02 spid111 * U s p S e l N e x 55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00 2009-11-10 12:18:39.02 spid111 * t A c c o u n t 74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00 2009-11-10 12:18:39.02 spid111 * @ i n t F o r m I 0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49 2009-11-10 12:18:39.02 spid111 * D & 8 @ t x 00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00 2009-11-10 12:18:39.02 spid111 * t A l i a s § 74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04 2009-11-10 12:18:39.02 spid111 * Ð GQE9732 d0 00 00 07 00 47 51 45 39 37 33 32 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * MODULE BASE END SIZE 2009-11-10 12:18:39.02 spid111 * sqlservr 01000000 02C09FFF 01c0a000 2009-11-10 12:18:39.02 spid111 * ntdll 7C800000 7C8C1FFF 000c2000 2009-11-10 12:18:39.02 spid111 * kernel32 77E40000 77F41FFF 00102000

    Read the article

  • Linux networking crash: best steps to find out the cause?

    - by Aron Rotteveel
    One of our Linux (CentOS) servers was unreachable last night. The server was not reachable in any way except for the remote console. After logging in with the remote console, it turned out I could not ping any outside hosts either. A simple service network restart solved the issue, but I am still wondering what could have caused this. My log files seem to indicate no error at all (except for the various daemons that need a network connection and failed after the network failure). Are there any additional steps I can take to find out the cause of this problem? EDIT: this just happened again. The server was completely unresponsive until I issued a networking service restart. Any advice is welcome. Could this be caused by a faulty hardware component? As per Madhatter's request, here are some excerpts from the log at the time (the network crashed at 20:13): /var/log/messages: Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=100 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:13:34 graviton junglediskserver: Connection to gateway failed: xGatewayTransport - Connection to gateway failed. The first three messages are simple responses to iptables rules I have set up through the LFD firewall. The last message indicates that JungleDisk, which I use for backups, can no longer connect to the gateway. Apart from this, there are no interesting messages around this time. EDIT 4 Dec: as per Mattdm's request, here is the output of ethtool eth0. (Please note that these are the settings that currently work. If things go wrong again, I will be sure to post this again if necessary.) 
Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on Supports Wake-on: g Wake-on: d Link detected: yes As per Joris' request, here is also the output of route -n: aron@graviton [~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface xx.xx.xx.58 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.42 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.43 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.41 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.46 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.47 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.44 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.45 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.50 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.51 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.48 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.49 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.54 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.52 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.53 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0 xx.xx.xx.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 xx.xx.xx.62 0.0.0.0 UG 0 0 0 eth0 The bottom xx.62 is my gateway. EDIT december 28th: the problem occurred again and I got the chance to compare some of the outputs of the above tests. What I found out is that arp -an returns an incomplete MAC address for my gateway (which is not under my control; the server is in a shared rack): During failure: ? (xx.xx.xx.62) at <incomplete> on eth0 After service network restart: ? (xx.xx.xx.62) at 00:00:0C:9F:F0:30 [ether] on eth0 Is this something I can fix or is it time for me to contact the data centre?
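    Since the failures are intermittent, I am also planning to leave a small watcher in cron so that the next outage leaves some evidence to compare against this working baseline. This is only a sketch; the gateway address stands in for the xx.xx.xx.62 masked above:

    #!/bin/bash
    # Run every minute from cron: record link state, the gateway ARP entry and
    # whether the gateway answers a ping.
    GW="xx.xx.xx.62"                 # masked gateway from the route table above
    LOG=/var/log/net-watch.log
    {
      date
      ethtool eth0 | grep -E 'Speed|Duplex|Link detected'
      arp -an | grep "$GW"           # watch for the <incomplete> entry reappearing
      ping -c1 -W2 "$GW" >/dev/null && echo "gateway ping OK" || echo "gateway ping FAILED"
    } >> "$LOG" 2>&1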

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
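    One check I am adding while narrowing this down: comparing the kernel's handle count with the file descriptors actually held by processes, since a large gap would point at handles held outside normal process fds (for example by the NFS client rather than by any userspace program). A rough sketch:

    #!/bin/bash
    # Compare /proc/sys/fs/file-nr with the sum of per-process fd counts.
    echo "kernel file-nr: $(cat /proc/sys/fs/file-nr)"
    total=0
    for d in /proc/[0-9]*/fd; do
      n=$(ls "$d" 2>/dev/null | wc -l)
      total=$((total + n))
    done
    echo "sum of per-process fds: $total"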

    Read the article

  • Web browsing is fast, but downloads are slow

    - by Ricket
    I work for a company on my university's campus, helping with general IT problems and some web development. Lately there has been a problem that has me and my boss completely stumped. We, plus one contractor, make up the entire IT department, so I'm reaching out to you for help.

    All around the office we have wall jacks. These collect in a closet down the hall and all plug into a switch. This switch, along with our individual server jacks, plugs into another switch, and that switch plugs into our firewall hardware. The firewall is then connected out to our campus network.

    Our campus internet is, well, very fast. I don't know exactly the terms, tiers, etc., but we have thousands of students and downloads can run as fast as 10 MB/s at night; uploads are sometimes even faster. I think we're practically ISP level. In short, I have a lot of faith that it is not the campus side of things that is causing the problem, combined with other evidence I'll mention in a moment.

    So, our symptoms: web browsing is fast. Web pages, images, etc. load instantly. No problems there. But when I go to download something, the download starts fast and then very quickly (in a matter of seconds) drops to nearly 0. Often it will actually drop to 0 and time out. This happens even with very small files, 1 MB or less. It smells to me like a QoS sort of thing. I'm not entirely sure, and I wanted to get your opinions first. My boss is hesitant to touch our firewall, much less let me touch it, and it was set up and is managed remotely by a consultant.

    These problems don't seem tied to a time of day. I've tried downloads after 5:00 and the same thing happens. From my desk, I can turn on my wireless adapter and pick up the campus wireless access point. If I unplug Ethernet and connect to it instead, downloads are fast. This adds to my suspicion that the problem is limited to our company network.

    Also, a number of weeks ago the consultant upgraded our firewall firmware. Suddenly everything was very fast. I tested with downloads from Sun and speedtest.net and things were blazing fast, as they should be with our campus internet! It was wonderful, and I figured the slow speeds had been an old firmware bug. In a matter of days, things steadily declined until they were back to the old symptoms.

    Oh, and we have antivirus installed on every computer, and we keep it up to date. Though I suppose the possibility is still there that someone has spyware which is bogging down our internet, in which case what is the easiest/best way to find that out? (Maybe this should go in a separate question.)

    Thank you for your patience in reading all of this. Do you have any ideas as to what I can try? Is this something that you've experienced before? What sort of tools or methods can I use to try and diagnose the problem? P.S. Everything here is Windows: Windows Server 2003 and 2008 on our servers, and Windows XP on employees' machines.

    Update: We are submitting a ticket to the university to have them take a look and see if they spot anything unusual and/or can suggest methods for us to try to pinpoint our problem. Hopefully they'll be helpful! I'll update this to let you know what goes on.

    Update again: We found a hub (yes, a HUB) right between our campus connection and our firewall. It had only those two Ethernet cables plugged into it, nothing else. After removing the hub, our speeds have jumped up to several Mbps. However, in talking with the campus, we got them to run a gigabit line to our firewall in place of the 100 Mbps line.
    As of Friday, we are at about 65 Mbps up and down (according to speedtest.net at 8 am)!! Go NC State!!
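
    One cheap way to pin down this kind of "starts fast, then collapses" behaviour is to log the transfer rate continuously while a download runs, rather than relying on a single speedtest figure. A rough Ruby sketch along those lines is below; the URL is only a placeholder, so point it at any large file you are allowed to download.

        #!/usr/bin/env ruby
        # Rough throughput probe: streams a file and prints the transfer rate
        # once per second, so a download that starts fast and then collapses
        # shows up as a falling column of numbers.
        require 'net/http'
        require 'uri'

        url = URI(ARGV[0] || 'http://example.com/largefile.bin') # placeholder test file

        Net::HTTP.start(url.host, url.port, use_ssl: url.scheme == 'https') do |http|
          http.request(Net::HTTP::Get.new(url)) do |response|
            bytes  = 0
            window = Time.now
            response.read_body do |chunk|
              bytes += chunk.bytesize
              if Time.now - window >= 1
                puts format('%8.1f KB/s', bytes / 1024.0)
                bytes  = 0
                window = Time.now
              end
            end
          end
        end

    Running it once over the wired connection and once over the campus wireless from the same desk should show whether the rate collapse follows the company network or the individual machine.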

    Read the article

  • git | error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied [SOLVED]

    - by Corbin Tarrant
    I am having a strange issue that I can't seem to resolve. Here is what happened: I had some log files in a GitHub repository that I didn't want there. I found this script that removes files completely from git history, like so:

        #!/bin/bash
        set -o errexit

        # Author: David Underhill
        # Script to permanently delete files/folders from your git repository. To use
        # it, cd to your repository's root and then run the script with a list of paths
        # you want to delete, e.g., git-delete-history path1 path2

        if [ $# -eq 0 ]; then
            exit 0
        fi

        # make sure we're at the root of git repo
        if [ ! -d .git ]; then
            echo "Error: must run this script from the root of a git repository"
            exit 1
        fi

        # remove all paths passed as arguments from the history of the repo
        files=$@
        git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch $files" HEAD

        # remove the temporary history git-filter-branch otherwise leaves behind for a long time
        rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune

    I, of course, made a backup first and then tried it. It seemed to work fine. I then did a git push -f and was greeted with the following messages:

        error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
        error: Cannot update the ref 'refs/remotes/origin/master'.

    Everything seems to have pushed fine, though, because the files seem to be gone from the GitHub repository. If I try to push again I get the same thing:

        error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
        error: Cannot update the ref 'refs/remotes/origin/master'.
        Everything up-to-date

    EDIT

        $ sudo chgrp {user} .git/logs/refs/remotes/origin/master
        $ sudo chown {user} .git/logs/refs/remotes/origin/master
        $ git push
        Everything up-to-date

    Thanks!

    EDIT Uh oh. Problem. I've been working on this project all night and just went to commit my changes:

        error: Unable to append to .git/logs/refs/heads/master: Permission denied
        fatal: cannot update HEAD ref

    So I:

        sudo chown {user} .git/logs/refs/heads/master
        sudo chgrp {user} .git/logs/refs/heads/master

    I try the commit again and I get:

        error: Unable to append to .git/logs/HEAD: Permission denied
        fatal: cannot update HEAD ref

    So I:

        sudo chown {user} .git/logs/HEAD
        sudo chgrp {user} .git/logs/HEAD

    And then I try the commit again:

        16 files changed, 499 insertions(+), 284 deletions(-)
        create mode 100644 logs/DBerrors.xsl
        delete mode 100644 logs/emptyPHPerrors.php
        create mode 100644 logs/trimXMLerrors.php
        rewrite public/codeCore/Classes/php/DatabaseConnection.php (77%)
        create mode 100644 public/codeSite/php/init.php

        $ git push
        Counting objects: 49, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (27/27), done.
        Writing objects: 100% (27/27), 7.72 KiB, done.
        Total 27 (delta 15), reused 0 (delta 0)
        To [email protected]:IAmCorbin/MooKit.git
           59da24e..68b6397  master -> master

    Hooray. I jump on http://GitHub.com and check out the repository, and my latest commit is nowhere to be found. ::scratch head:: So I push again:

        Everything up-to-date

    Umm... it doesn't look like it. I've never had this issue before. Could this be a problem with GitHub, or did I mess something up with my git project?

    EDIT Never mind, I did a simple:

        git push origin master

    and it pushed fine.
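
    As a side note, repeated "Permission denied" errors under .git/logs usually mean that some step (the history-rewriting script, or a git command run with sudo) left files under .git owned by root. Rather than fixing each path as it complains, one option is to hand the whole .git tree back to your own login in one pass; the shell one-liner is sudo chown -R "$USER" .git, and a minimal Ruby sketch of the same idea (an assumption about a typical single-user setup; run with sudo from the repository root) would be:

        #!/usr/bin/env ruby
        # Minimal sketch: give the whole .git directory back to the real user
        # behind sudo, instead of chown/chgrp-ing files one at a time.
        require 'fileutils'
        require 'etc'

        user  = ENV['SUDO_USER'] || Etc.getlogin          # the real login, not root
        group = Etc.getgrgid(Etc.getpwnam(user).gid).name # that user's primary group

        FileUtils.chown_R(user, group, '.git', verbose: true)

    After that, plain git commit and git push should work without any further per-file chown calls.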

    Read the article

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out

    I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered:

    - How can you get subdomain-fu to work with domains as well? Here's someone that asked the same question, which leads you to this blog.
    - What database, and how will it be structured? Here's an excellent talk by Guy Naor, and a good question about PostgreSQL and schemas. I already know my schemas will all have the same structure. They will differ only in the data they hold.
    - So, how can you run migrations for all schemas? Here's an answer.

    Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way.

    Finally, to my question: when a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea.

    Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back into my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm going to have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable... which also brings me to the next question.

    Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema.

    Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas).

    Thanks, and I hope that wasn't too long!

    UPDATE May 11, 2010 11:26 GMT+8

    Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (it seems to work fine, so far) but it's a step closer at least. If there's a better way please let me know.

        module SchemaUtils
          def self.add_schema_to_path(schema)
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}"
          end

          def self.reset_search_path
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{conn.schema_search_path}"
          end

          def self.create_and_migrate_schema(schema_name)
            conn = ActiveRecord::Base.connection
            schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'")

            if schemas.include?(schema_name)
              tables = conn.tables
              Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}"
            else
              Rails.logger.info "About to create #{schema_name}"
              conn.execute "create schema #{schema_name}"
            end

            # Save the old search path so we can set it back at the end of this method
            old_search_path = conn.schema_search_path

            # Tried to set the search path like in the methods above (from Guy Naor)
            #   conn.execute "SET search_path TO #{schema_name}"
            # But the connection itself seems to remember the old search path.
            # If set this way, it works.
            conn.schema_search_path = schema_name

            # Directly from databases.rake.
            # In Rails 2.3.5 databases.rake can be found in railties/lib/tasks/databases.rake
            file = "#{Rails.root}/db/schema.rb"
            if File.exists?(file)
              Rails.logger.info "About to load the schema #{file}"
              load(file)
            else
              abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!}
            end

            Rails.logger.info "About to set search path back to #{old_search_path}."
            conn.schema_search_path = old_search_path
          end
        end
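
    For what it's worth, here is a hedged usage sketch of how the method above might be wired into signup; the Account class and its subdomain column are assumptions for illustration only, not part of the original code.

        # Hypothetical usage: create and populate the tenant schema right after
        # signup. Account and subdomain stand in for whatever model and column
        # actually represent a tenant in the application.
        class Account < ActiveRecord::Base
          after_create :prepare_tenant_schema

          private

          # Creates the tenant's PostgreSQL schema and loads db/schema.rb into it
          # via the SchemaUtils module defined above.
          def prepare_tenant_schema
            SchemaUtils.create_and_migrate_schema(subdomain)
          end
        end

    Keeping the call inside an ActiveRecord callback keeps the whole flow in Rails, which was the stated goal, though pushing it into a background job would avoid slowing down the signup request itself.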

    Read the article

  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
    ORACLE FOCUS: Oracle PartnerNetwork Exchange @ Oracle OpenWorld
    Sylvie Michou, Senior Director, Partner Marketing & Communications and Strategic Programs

    Resources: Oracle OpenWorld 2012 | Oracle PartnerNetwork Exchange @ OpenWorld | Oracle PartnerNetwork Exchange @ OpenWorld Registration | Oracle PartnerNetwork Exchange Specialization Test Fest | Oracle OpenWorld Schedule Builder | Oracle OpenWorld Promotional Toolkit for Partners | Oracle Partner Events | Oracle Partner Webcasts | Oracle EMEA Partner News

    If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team.

    Oracle remains fully focused on building the industry's most admired partner ecosystem, which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon.

    In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you.

    At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions.
    There are numerous highlights that you will definitely want to include in your personal schedule. On Sunday morning, 30 September, we will start the week with partner-dedicated OPN Exchange sessions, following our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales, and senior executives, giving insight into Oracle's partner vision, strategy and resources, all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions and the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle, and will conclude with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon.

    From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on the critical success factors for developing your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe.

    Oracle will even be helping over 1,000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will provide all the study materials and exams required to drive Specialization, free of charge at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees.

    And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100, is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions.

    "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there."
    Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales

    So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today, or add OPN Exchange to your existing registration for just $100 through My Account.
    And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected].

    Oracle experts will be on hand to share the very latest information about:

    - Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation
    - Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single-threaded performance and better overall system throughput for expanded application versatility
    - Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium-size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades
    - Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost
    - Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant
    - Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks
    - Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution
    - Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual applications with no overhead, co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance
    - Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution
    - Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management

    Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city you'd be interested in attending an event in, and then click on the 'Register Now' button for that city to secure your place. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers.

    -- Oracle OpenWorld 2010

    Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010.
    The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers.

    Register Now for Oracle OpenWorld 2010!

    Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date.

    How to Exhibit at Oracle OpenWorld 2010 | Sponsorship Opportunities at Oracle OpenWorld 2010 | Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

< Previous Page | 56 57 58 59 60 61 62 63 64  | Next Page >