Search Results

Search found 8923 results on 357 pages for 'core dumped'.


  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single-core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic; I don't monitor it closely, but Google AdSense usually records close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment. The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all. I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "static" has also been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. And I can't leave the server unattended, because PHP-FPM dies and never comes back on its own. I tried adjusting pm.max_children from as low as 3 to as high as 50; it doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare-servers values. I have also set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference. According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like:

        WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating

    and:

        WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start

    I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to work around this issue, so that's what I did: "/etc/init.d/php-fpm reload" every 5 minutes. So far it's keeping the lights on, but it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
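    For context, a pool configured along the lines the poster describes would look roughly like the sketch below. The path and values are illustrative assumptions, not a recommendation; the slowlog directives are an extra that can help pinpoint what is blocking the workers:

        ; /etc/php-fpm.d/www.conf -- illustrative sketch only
        [www]
        pm = ondemand
        pm.max_children = 10             ; cap on concurrent PHP workers
        pm.process_idle_timeout = 10s    ; recycle workers that sit idle
        pm.max_requests = 500            ; restart each worker periodically to curb leaks
        request_terminate_timeout = 60s  ; kill runaway requests instead of hanging the pool
        slowlog = /var/log/php-fpm/www-slow.log
        request_slowlog_timeout = 10s    ; dump a trace of requests slower than this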

    Read the article

  • The best computer ever

    - by Jeff
    (This is a repost from my personal blog… wow… I need to write more technical stuff!) About three years and three months ago, I bought a 17" MacBook Pro, and it turned out to be the best computer I've ever owned. You might think that every computer with better specs is automatically better than the last, but that hasn't been my experience. My first one was a Sony, back in the Pentium III days, and it cost an astonishing $2,500. That was even more ridiculous in 1999 dollars. It had a dial-up modem and a CD-ROM, built in! It may have even played DVDs. A few years later I bought an HP, and it ended up being a pile of shit. The power connector inside came loose from the board, and on occasion would even short. In 2005, I bought a Dell, and it wasn't bad. It had a really high-resolution screen (complete with dead pixels, a problem in those days), and it was the first laptop I felt I could do real work on. When 2006 rolled around, Apple started making computers with Intel CPUs, and I bought the very first one the week it came out. I used Boot Camp to run Windows. I still have it in its box somewhere, and I used it for three years. The current 17" was new in 2009. The goodness was largely rooted in having a big screen with lots of dots. This computer has been the source of hundreds of blog posts, tens of thousands of lines of code, video and photo editing, and of course, a whole lot of Web surfing. It connected to corpnet at Microsoft and WiFi in Hawaii, and it has presented many a deck. It has traveled with me tens of thousands of miles. Last year, I put a solid state drive in it, and it was like getting a new computer. I can boot up a Windows 7 VM in about 19 seconds. Having 8 gigs of RAM has always been fantastic. Everything about it has been fast and fun. When new, the battery (when not using VMs) could get as much as 10 hours. I can still do 7 without much trouble. After 460 charge cycles, the battery health is still between 85 and 90%. The only real negative has been the size and weight. It's only an inch thick, but naturally it's pretty big with a 17" screen. You don't get battery life like that without a huge battery, either, so it's heavy. It was never a deal breaker, but on a long haul across a large airport, you know you're carrying it. Today, Apple announced a new, thinner and lighter 15" laptop, with twice the RAM and CPU cores, and four times the screen resolution. It basically handles my size and weight issues while retaining the resolution, and it still costs less than my 17" did. So I ordered one. Three years is an excellent run, but I kind of budgeted for a new workhorse this year anyway. So if you're interested in a 17" MacBook Pro with a Core 2 Duo 2.66 GHz CPU, 8 gigs of RAM and a 320 gig hard drive (sorry, I'm keeping the SSD), I have one to sell. They've apparently discontinued the 17", which is going to piss off the video community. It's in excellent condition, with a few minor scratches, but I take care of my stuff.

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do the task and how it works. As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if the light fails to shine forth? Most would immediately tell you to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of bulbs, with their different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight, so they don't have to ask for help the next time. Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Unfortunately, many resort to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way. If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query. My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an integer over decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint to make sure the values stay within the desired range, and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!
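    To make that example concrete, here is a minimal sketch of the column choice and CHECK constraint being described (the table and column names are invented for illustration; int stores the 0..99999999 range in 4 bytes, where decimal(8,0) takes 5 and costs more to process):

        -- Illustrative only: int comfortably covers 0..99999999
        CREATE TABLE dbo.Measurement
        (
            MeasurementId int IDENTITY(1,1) PRIMARY KEY,
            ReadingValue  int NOT NULL
                CONSTRAINT CK_Measurement_ReadingValue
                CHECK (ReadingValue BETWEEN 0 AND 99999999)
        );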

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting 'his' new website was, I knew I was in for a busy night. I'd designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem. Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they'd borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We'd specified that 'big pipe' when designing the system. The Big Boss had then railed at the cost and so we'd subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: "What really ****** me off is the fact that, for ten minutes, we couldn't take people's money." At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn't what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, which had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm. I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said made a lot of sense: sacrifice whatever isn't essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the 'skeleton service', maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it's much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn't just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a 'skeletal' site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. All of this is what the Big Boss scathingly called 'a mere technicality'. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and reacts accordingly when things go awry.
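    One way to prototype that 'skeletal site' idea with off-the-shelf parts is to rate-limit the dynamic backend and fall back to a pre-built, image-free static page when it is saturated. A sketch, assuming an Nginx front end; the zone size, rate and paths are invented:

        limit_req_zone $binary_remote_addr zone=busy:10m rate=10r/s;

        server {
            location / {
                limit_req zone=busy burst=20;
                proxy_pass http://backend;
                # If the backend is saturated or down, serve the skeleton instead
                proxy_intercept_errors on;
                error_page 502 503 504 = /skeleton.html;
            }
            location = /skeleton.html {
                # Pre-built page with the commercial essentials and no razzmatazz
                root /var/www/skeleton;
            }
        }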

    Read the article

  • Customer Loyalty vs. Customer Engagement: Who Cares?

    - by Jeb Dasteel-Oracle
    Have you read the recent Forbes OracleVoice blog titled Customer Loyalty is Dead. Long Live Engagement!? If you haven't, take a look. This article prompted lots of conversation in the social realm. Many who read the article voiced their reactions to the headline, and now I'm jumping in to add my view. Customer loyalty is still key. It's the effect, and engagement is the cause. We at least know that to be true for our customers. We are in an age where customers are demanding to be heard. We need them to be actively involved – or engaged – as well. Greater levels of customer engagement, properly targeted, positively correlate with satisfaction. Our data has shown us this over and over. Satisfied customers are more loyal and more willing to vocalize their satisfaction through referencing, and are more likely to purchase again, all of which in turn drives incremental revenue – from the customer doing the referencing AND the customer on the receiving end of that reference. Turning this around completely, if we begin to see the level of a customer's engagement start to wane, this is an indicator that their satisfaction, loyalty, and future revenue are likely at risk. At Oracle, we've put in place many programs to target, encourage, and then track engagement, allowing us to measure engagement as a determinant of loyalty. Some of these programs include our Key Accounts, solution design and architecture, and Executive Sponsorship programs, as well as executive advisory boards. Specific programs allow us to engage specific contacts within specific customer organizations (based on role) and then systematically track their engagement activities over time, alongside tracking customer satisfaction, loyalty, referenceability, and incremental revenue contribution. Continuous measurement of engagement allows us to better understand customer views of what it means to partner with a provider, and to adjust program participation to better meet the needs of the partnership. We can also track across customer segments, and design new programs that are even more effective than the ones we have in place today. In case you missed any of my previous Forbes articles, I've included links below for easy access.
    • Award-Winning Companies Put Customers First
    • The Power of Peer Networks: 5 Reasons to Get (and Stay) Involved
    • Technology At Work: Traveling In Style
    • Customer Central: 8 Strategies for Putting Customers at the Core of Your Business
    • Technology at Work: Five Companies Doing IT Right

    Read the article

  • Thinking Local, Regional and Global

    - by Apeksha Singh-Oracle
    The FIFA World Cup tournament is the biggest single-sport competition: it's watched by about 1 billion people around the world. Every four years each national team's manager is challenged to pull together a group of players who ply their trade across the globe. For example, of the 23 members of Brazil's national team, only four actually play for Brazilian teams; the rest play in England, France, Germany, Spain, Italy and Ukraine. Each country's national league, each team and each coach has a unique style. Getting all these "localized" players to work together successfully as one unit is no easy feat. In addition to $35 million in prize money, much is at stake – not least national pride and global bragging rights until the next World Cup in four years' time. Achieving economic integration in the ASEAN region by 2015 is a bit like trying to create the next World Cup champion by 2018. The team comprises Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam. All have different languages, currencies, cultures and customs, rules and regulations. But if they can pull together as one unit, the opportunity is not only great for business and the economy, but it's also a source of regional pride. BCG expects that by 2020 the number of firms headquartered in Asia with revenue exceeding $1 billion will double to more than 5,000. Their trade in the region and with the world is forecast to increase to 37% of an estimated $37 trillion of global commerce by 2020, from 30% in 2010. Banks offering transactional banking services to this emerging marketplace need to prepare to respond to customer needs across the spectrum – MSMEs, SMEs, corporates and multinational corporations. Customers want innovative, differentiated, value-added products and services that provide:
    • Pan-regional operational independence while enabling a single source of truth at a regional level
    • Regional connectivity and cash & liquidity optimization
    • A consistent experience for their customers, by offering standardized products & services across all ASEAN countries
    • Multi-channel & self-service capabilities, with access to real-time information on liquidity and cash flows
    • Convergence of cash management with supply chain and trade finance
    While enabling the above to meet customer demands, a comprehensive and robust credit management solution is also a must for managing risk across regional banking operations. According to BCG, Asia-Pacific wholesale transaction-banking revenues are expected to triple to $139 billion by 2022 from $46 billion in 2012. To take advantage of the trend, banks will have to manage and maximize their own growth opportunities, compete on a broader scale, manage the complexity within the region and increase efficiency. They'll also have to choose the right operating model and regional IT platform to offer:
    • Account services
    • Cash & liquidity management
    • Trade services & supply chain financing
    • Payments
    • Securities services
    • Credit and lending
    • Treasury services
    The core platform should be able to balance global needs and local nuances. Certain functions need to be performed at a regional level, while others need to be performed at a country level. Financial reporting and regulatory compliance are a case in point. The ASEAN Economic Community is in the final lap of its preparations for the ultimate challenge: becoming a formidable team in the global league. Meanwhile, transaction banks are designing their own hat trick: implementing a world-class IT platform, positioning themselves to respond to customer needs, and establishing a foundation for revenue generation for years to come.

    Anand Ramachandran
    Senior Director, Global Banking Solutions Practice, Oracle Financial Services Global Business Unit

    Read the article

  • Does *every* project benefit from written specifications?

    - by nikie
    I know this is holy war territory, so please read the question to the end before answering. There are many cases where written specifications make a lot of sense. For example, if you're a contractor and you want to get paid, you need written specs. If you're working in a team of 20 people, you need written specs. If you're writing a programming language compiler or interpreter (and it's not perl), you'll usually write a formal specification. I don't doubt that there are many more cases where written specifications are a really good idea. I just think that there are cases where there's so little benefit in written specs that it doesn't outweigh the costs of writing and maintaining them. EDIT: The close votes say that "it is difficult to say what is asked here", so let me clarify: the usefulness of written, detailed specifications is often claimed like a dogma. (If you want examples, look at the comments.) But I don't see the use of them for the kind of development I'm doing. So what is asked here is: how would written specifications help me? Background information: I work for a small company that's developing vertical market software. If our product is easier to use and has better performance than the competition, it sells. If it's harder to use, even if it behaves 100% as the specification says, it doesn't sell. So there are no "external forces" for having written specs. The advantage would have to be somewhere in the development process. Now, I can see how frozen specifications would make a developer's life easier. But we'll never have frozen specs. If we see in the middle of development that feature X is not intuitive to use the way it's specified, then we can only choose between changing the specification or developing a product that won't sell. You'll probably ask by now: how do you know when you're done? Well, we're continually improving our product. The competition does the same. So (hopefully) we're never done. We keep improving the software, and when we reach a point where the benefits of the improvements we've added since the last release outweigh the costs of an update, we create a new release that is then tested, localized, documented and deployed. This also means that there's rarely any schedule pressure. Nobody has to do overtime to make a deadline. If a feature isn't done by the time we want to release the next version, it simply goes into the next version. The next question might be: how do your developers know what they're supposed to implement? The answer is: they have a lot of domain knowledge. They know the customers' business well enough that a high-level description of the feature (or even just the problem that the customer needs solved) is enough to implement it. If it's not clear, the developer creates a few fake screens to get feedback from marketing/management or customers, but this is nowhere near the level of detail of actual specifications. This might be inefficient for larger teams, but for a small team with low turnover it works quite well. It has the additional benefit that the developer in question often comes up with a better solution than the person writing the specs might have. This question is already getting very long, but let me address one last point: testing. Like I said in the beginning, if our software behaves 100% like the spec says, it can still be crap. In fact, if it's so unintuitive that you need a spec to know how to test it, it probably is crap. It makes sense to have fixed, written tests for some core functionality and for regression bugs, but again, this is nowhere near a full written spec of how the software should behave and when. The main test is: hand the software to users who don't know it yet and tell them to use the new feature X. If they can figure out how to use it and it works, it works.

    Read the article

  • Migrating VB6 to HTML5 is not fiction - Customer success story

    - by Webgui
    All of you VB developers, present or past, would probably find it hard to believe that old VB code can be migrated and modernized into the latest .NET-based HTML5 without having to rewrite the application. But we have been working on such tools for the past couple of years and already have several real-world applications that were fully 'transposed' from VB6. The solution is called Instant CloudMove, and its main tool is called the TranspositionStudio. It is a unique solution that relies on the concept of transposition. Transposition comes from mathematics and music and refers to exchanging elements while everything else remains the same, or to moving an element as-is from one environment to another. This means that we take the source code and put it in a modern technological environment with relatively few adjustments. The concept is based on a set of Mapping Expressions, which are basically links between an element in the source environment and one in the target environment that has the same functionality. About 95% of the code is usually mapped out of the box, and the rest is handled with easy-to-use mapping tools designed for Visual Studio developers, providing them with a familiar environment and concepts for completing the mapping and allowing them to extend and customize existing mapping expressions. The solution is also based on a circular workflow that enables developers to make any changes as required until the result is satisfying. As opposed to existing migration solutions that offer automation but are usually a "black box" to the user, the transposition concept enables full visibility, flexibility and control over the code and process at all times, and also allows adding or changing functionality or upgrading the UI within the process and tools. This is exactly the case with our customer's aging VB6 PMS (Property Management System), which needed a technological update as well as a design refresh. The decision was to move the VB6 application, which had about 1 million lines of code, onto the latest web technology. Since the application was initially written 13 years ago and has had many upgrades since, the code was bound to be patchy and include unused sections. As a result, the company, Mihshuv Group, considered rewriting the entire application in Java, since it already had the knowledge. A rewrite would allow starting with a clean slate and designing functionality, database architecture and UI without any constraints. On the other hand, a rewrite entails long and detailed specification work as well as thorough QA, and this translates into a long project with high risk and costs. So the company looked for a migration solution as an alternative; the research led to Gizmox, and after examining the technology it was decided to perform a hybrid project: an automatic transposition of the core of the VB6 application (200,000 lines of code), while the UI redesign, new functionality, deletion of unused code and rewriting of about 140 reports with Crystal Reports would be done manually using Visual WebGui development tools. The migration part of the project was completed in 65 days by 3 developers from Mihshuv Group guided by Gizmox migration experts, while the rewrite and UI upgrade tasks took about the same. So in only a few months Mihshuv Group generated an up-to-date product, written in the latest Web technology, with a modern, friendly UI and improved functionality.
    [Screenshots: the guest selection screen of the original VB6 PMS and of the new web-based PMS]
    Compared to the initial plan to rewrite the entire application in Java, the hybrid migration/rewrite approach taken by Mihshuv Group using Gizmox technology proved to be a great decision. In terms of time and cost there were substantial savings: a project that was priced at a year or more (without taking into account the huge risk and uncertainty) became a project of only a few months. More about this and other customer stories can be found here

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself: "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates." This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year colleges, to professional schools, to large public research institutions… the many walks of higher ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you would end up potentially tarnishing the reputation of certain institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst you could spend a lot of time and resources designing a system that would lose credibility with its "customers". A lot of institutions I work with already have in place systems like the one described above. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions there are several constants worth noting:
    • Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another, but that provides huge personal fulfillment and reward?
    • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT.
    • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students' achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success.
    • Data quality is a very big issue. So many disparate systems exist (some on premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data quality policy and tools. Good tools actually exist but are seldom leveraged.
    Don't misunderstand – I think it's a great idea to drive additional transparency and accountability into the system of higher education. And not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist not only to enable this kind of information to be shared, but to capture the very metrics stakeholders care most about, in a way that makes sense in the context of a given institution's "place" in the overall higher ed panoply.

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst? In the rush to cloud adoption, some companies have made the decision to redesign their core system with a cloud-first approach. However, a cloud-first approach means that the system may not work anymore on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investments for PaaS applications, or both.

    What's the Problem? Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I heard many companies claim "it's cheaper", or "it allows us to scale", without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst in the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability? I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system wouldn't scale. Another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime? Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more often) based on the current overall condition of the cloud service provider, most of which is outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and retry gracefully in order to provide service continuity to end users.

    About Herve Roggero: Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.
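    The "assume failure and retry gracefully" advice usually boils down to a wrapper of roughly this shape. An illustrative sketch only: a real implementation should catch the provider's transient exception types rather than Exception, and cap total elapsed time:

        // Minimal retry-with-exponential-backoff sketch (not production code)
        using System;
        using System.Threading;

        static class Resilience
        {
            public static T ExecuteWithRetry<T>(Func<T> operation, int maxAttempts = 4)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (Exception) when (attempt < maxAttempts)
                    {
                        // Back off 1s, 2s, 4s, ... before trying again
                        Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
                    }
                }
            }
        }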

    Read the article

  • Software Center seems to freeze system when installing, syslog has "blocked for more than 120 seconds" errors

    - by nbm
    12.04 (precise) 64-bit
    Kernel Linux 3.2.0-39
    3.6GB memory
    Intel Core 2 Duo CPU @ 2.40GHz x2
    WUBI-installed Ubuntu running on a MacBook Pro 7.1 with OSX, plus Vista via Boot Camp (hey, I like lots of OS's, m'kay?)

    When installing from Ubuntu Software Center my system very frequently freezes. This has happened on 4 of the last 5 installs. Most recently I was installing the Google Earth .deb from Google's website: clicking the .deb file automatically opens Software Center (otherwise I would have used Synaptic, as I've grown to expect Software Center to freeze my system and I'm rather tired of it.) By "freeze" I mean nothing works: no dash, no launcher, no mouse movement, no alt-tab, can't open terminal (keyboard does not work). Software Center does show the "installing" icon, but after that it greys out and I can't click anything. REISUB has no effect, but a cold power-down and restart is possible. Occasionally, after 5-10 minutes, I'll be able to move the mouse / use the keyboard and run a launcher command or two, although other open apps (Chrome and Software Center) will still be greyed-out/frozen. (I've never waited longer than that - if still unresponsive after 15 minutes I just power down and restart.) Most recently, which is why I am finally posting a question, I waited about 15 minutes and was finally able to open System Monitor while this was going on. Processes tells me that System Monitor is using about 20% of CPU, and nothing else is using much (zeros mostly). In fact I didn't even see Software Center listed? However at this point the system finally partially unfroze, the installation completed, and while I wasn't about to close Software Center, I was able to do a system shutdown and fresh restart, and I went and took a look at the syslog. In /var/log/syslog I see a lot of "blocked for more than 120 seconds" messages. Similar to ubuntu hang out with this message :blocked for more than 120 seconds Which has not been answered, and I'm not running a virtual machine. My full syslog with stack traces looks very, very similar to this: Why do tasks on Amazon Xen instance block for over 120 seconds causing server to hang? Note that that question was solved, but that's because the problem was being caused by Amazon and Amazon fixed the bug. I'm not running anything Amazon-related. My syslog does look very similar, however. My question is also similar to this: Troubleshooting server hang But the referenced "duplicate" in that question is about how to kill processes/restart when the system freezes. I know how to kill processes and restart. I want to figure out what is causing the problem so I can try to fix it. I realize that I could just use Synaptic instead of Ubuntu Software Center, but I'd like to try to solve the problem if possible. I'm thinking I should perhaps submit a bug report, but I wanted to first see if anyone else was having any similar problems, and if so what you all did to fix it. I see a number of questions about Software Center freezing and others, including those I linked, about the "blocked for more than 120 seconds" log error, but I didn't see any question that links the two. I did save a copy of the syslog report if anyone wants to see it, but as mentioned it's quite similar to the one posted in the Amazon-related question... and I didn't want to take up even more space unnecessarily as, my apologies - this question has already become extremely verbose!
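    For anyone digging into the same symptom, the relevant hung-task reports and their stack traces can be pulled straight from the log with something like this (the path is as in the question; the exact message text can vary by kernel):

        # Show each hung-task report with some surrounding stack trace
        grep -B 2 -A 20 "blocked for more than 120 seconds" /var/log/syslog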

    Read the article

  • Identity in .NET 4.5 – Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library because many features are now in the box. Along the way I found some other nice additions:
    • ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll(). ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!
    • ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.
    • SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.
    • A new session security token handler uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios.
    • No need for a custom service host factory or the federation behavior anymore: WCF can be switched into "WIF mode" with the useIdentityConfiguration switch (odd name though).
    • Tooling has become better, and the new test STS makes it very easy to get started.
    On the other hand – and that was kind of expected – to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:
    • Assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well.
    • No IClaimsPrincipal and IClaimsIdentity anymore.
    • The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.
    • Claim.ClaimType is now Claim.Type. ClaimCollection is now IEnumerable<Claim>.
    • IsSessionMode is now IsReferenceMode.
    • Bootstrap token handling is different now.
    • ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here).
    • Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).
    • SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.
    • Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).
    • The WCF WS-Trust bindings are gone. I think this is a pity – they were *really* useful when doing work with WSTrustChannelFactory.
    Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you want to make the move.
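    As a quick illustration of the additions above, typical checks collapse to something like this (a small sketch; the role value is invented):

        using System.Security.Claims;

        // No casting from Thread.CurrentPrincipal anymore
        ClaimsPrincipal principal = ClaimsPrincipal.Current;

        // HasClaim()/FindFirst() search across all contained identities
        if (principal.HasClaim(ClaimTypes.Role, "admin"))
        {
            Claim email = principal.FindFirst(ClaimTypes.Email);
            // ...
        }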

    Read the article

  • Modern OpenGL context failure [migrated]

    - by user209347
    OK, I managed to create an OpenGL context with wglCreateContextAttribsARB, with version 3.2 in my attrib struct (so I have initialized a 3.2 OpenGL context). It works, but the strange thing is, when I use e.g. glBindBuffer I still get an unresolved-reference linker error; shouldn't a newer context prevent this? I'm on Windows BTW. Linux doesn't have to deal with older and newer contexts (it directly supports the core of its version). The code:

        PIXELFORMATDESCRIPTOR pfd;
        HGLRC tmpRC;
        int iFormat;

        if (!(hDC = GetDC(hWnd)))
        {
            CMsgBox("Unable to create a device context. Program will now close.", "Error");
            return false;
        }

        ZeroMemory(&pfd, sizeof(pfd));
        pfd.nSize = sizeof(pfd);
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = attribs->colorbits;
        pfd.cDepthBits = attribs->depthbits;
        pfd.iLayerType = PFD_MAIN_PLANE;

        if (!(iFormat = ChoosePixelFormat(hDC, &pfd)))
        {
            CMsgBox("Unable to find a suitable pixel format. Program will now close.", "Error");
            return false;
        }

        if (!SetPixelFormat(hDC, iFormat, &pfd))
        {
            CMsgBox("Unable to initialize the pixel formats. Program will now close.", "Error");
            return false;
        }

        if (!(tmpRC = wglCreateContext(hDC)))
        {
            CMsgBox("Unable to create a rendering context. Program will now close.", "Error");
            return false;
        }

        if (!wglMakeCurrent(hDC, tmpRC))
        {
            CMsgBox("Unable to activate the rendering context. Program will now close.", "Error");
            return false;
        }

        strncpy(vers, (char*)glGetString(GL_VERSION), 3);
        vers[3] = '\0';
        if (sscanf(vers, "%i.%i", &glv, &glsubv) != 2)
        {
            CMsgBox("Unable to retrieve the OpenGL version. Program will now close.", "Error");
            return false;
        }

        hRC = NULL;
        if (glv > 2) // Have OpenGL 3.+ support
        {
            if ((wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB")))
            {
                int attribs[] = {
                    WGL_CONTEXT_MAJOR_VERSION_ARB, glv,
                    WGL_CONTEXT_MINOR_VERSION_ARB, glsubv,
                    WGL_CONTEXT_FLAGS_ARB, 0,
                    0
                };
                hRC = wglCreateContextAttribsARB(hDC, 0, attribs);
                wglMakeCurrent(NULL, NULL);
                wglDeleteContext(tmpRC);
                if (!wglMakeCurrent(hDC, hRC))
                {
                    CMsgBox("Unable to activate the rendering context. Program will now close.", "Error");
                    return false;
                }
                moderncontext = true;
            }
        }

        if (hRC == NULL)
        {
            hRC = tmpRC;
            moderncontext = false;
        }
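    (For what it's worth, on Windows this linker error is expected regardless of the context version: opengl32.lib only exports functions up to OpenGL 1.1, so newer entry points such as glBindBuffer must be loaded at runtime once a context is current. A sketch, assuming glext.h is available for the function-pointer typedef; loader libraries such as GLEW automate exactly this.)

        #include <windows.h>
        #include <GL/gl.h>
        #include "glext.h"  /* provides PFNGLBINDBUFFERPROC */

        static PFNGLBINDBUFFERPROC glBindBuffer = NULL;

        /* Call once a GL context is current; NULL means the entry point is unsupported. */
        static void LoadBufferFunctions(void)
        {
            glBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer");
        }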

    Read the article

  • installer can't find partitions, but fdisk can find them

    - by pxd
    I'm installing Ubuntu 12.04. My system already has two systems installed -- WinXP and Ubuntu 10.10 -- and now I want to update Ubuntu to 12.04, using a USB disk to install it. But the installer cannot find the partitions on my hard disk, while fdisk can find them. Can you help me? What should I do?

        ubuntu@ubuntu:~$ sudo lshw -short
        H/W path           Device      Class       Description
                                       system      HP 2230s (NN868PA#AB2)
        /0                             bus         3037
        /0/9                           memory      64KiB BIOS
        /0/0                           processor   Intel(R) Core(TM)2 Duo CPU T6570 @ 2.10GHz
        /0/0/1                         memory      2MiB L2 cache
        /0/0/3                         memory      32KiB L1 cache
        /0/0/0.1                       processor   Logical CPU
        /0/0/0.2                       processor   Logical CPU
        /0/2                           memory      32KiB L1 cache
        /0/4                           memory      2GiB System Memory
        /0/4/0                         memory      SODIMM [empty]
        /0/4/1                         memory      2GiB SODIMM DDR2 Synchronous 800 MHz (1.2 ns)
        /0/100                         bridge      Mobile 4 Series Chipset Memory Controller Hub
        /0/100/2                       display     Mobile 4 Series Chipset Integrated Graphics Controller
        /0/100/2.1                     display     Mobile 4 Series Chipset Integrated Graphics Controller
        /0/100/1a                      bus         82801I (ICH9 Family) USB UHCI Controller #4
        /0/100/1a.1                    bus         82801I (ICH9 Family) USB UHCI Controller #5
        /0/100/1a.2                    bus         82801I (ICH9 Family) USB UHCI Controller #6
        /0/100/1a.7                    bus         82801I (ICH9 Family) USB2 EHCI Controller #2
        /0/100/1b                      multimedia  82801I (ICH9 Family) HD Audio Controller
        /0/100/1c                      bridge      82801I (ICH9 Family) PCI Express Port 1
        /0/100/1c.1                    bridge      82801I (ICH9 Family) PCI Express Port 2
        /0/100/1c.1/0      wlan1       network     PRO/Wireless 5100 AGN [Shiloh] Network Connection
        /0/100/1c.2                    bridge      82801I (ICH9 Family) PCI Express Port 3
        /0/100/1c.4                    bridge      82801I (ICH9 Family) PCI Express Port 5
        /0/100/1c.5                    bridge      82801I (ICH9 Family) PCI Express Port 6
        /0/100/1c.5/0      eth1        network     88E8072 PCI-E Gigabit Ethernet Controller
        /0/100/1d                      bus         82801I (ICH9 Family) USB UHCI Controller #1
        /0/100/1d.1                    bus         82801I (ICH9 Family) USB UHCI Controller #2
        /0/100/1d.2                    bus         82801I (ICH9 Family) USB UHCI Controller #3
        /0/100/1d.7                    bus         82801I (ICH9 Family) USB2 EHCI Controller #1
        /0/100/1e                      bridge      82801 Mobile PCI Bridge
        /0/100/1f                      bridge      ICH9M LPC Interface Controller
        /0/100/1f.2        scsi0       storage     82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode]
        /0/100/1f.2/0      /dev/sda    disk        500GB WDC WD5000BEVT-0
        /0/100/1f.2/0/1    /dev/sda1   volume      48GiB Windows NTFS volume
        /0/100/1f.2/0/2    /dev/sda2   volume      416GiB Extended partition
        /0/100/1f.2/0/2/5  /dev/sda5   volume      97GiB HPFS/NTFS partition
        /0/100/1f.2/0/2/6  /dev/sda6   volume      198GiB HPFS/NTFS partition
        /0/100/1f.2/0/2/7  /dev/sda7   volume      27GiB Linux filesystem partition
        /0/100/1f.2/0/2/8  /dev/sda8   volume      93GiB Linux filesystem partition
        /0/100/1f.2/1      /dev/cdrom  disk        CDDVDW TS-L633M
        /0/1               scsi6       storage
        /0/1/0.0.0         /dev/sdb    disk        15GB STORAGE DEVICE
        /0/1/0.0.0/0       /dev/sdb    disk        15GB
        /0/1/0.0.0/0/1     /dev/sdb1   volume      14GiB Windows FAT volume
        /1                             power       HZ04037

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x31263125

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63   102277727    51138832+   7  HPFS/NTFS/exFAT
        /dev/sda2       102277728   976784129   437253201    f  W95 Ext'd (LBA)
        /dev/sda5       102277791   307078127   102400168+   7  HPFS/NTFS/exFAT
        /dev/sda6       307078191   724141151   208531480+   7  HPFS/NTFS/exFAT
        /dev/sda7       724142080   781459455    28658688   83  Linux
        /dev/sda8       781461504   976771071    97654784   83  Linux

        Disk /dev/sdb: 15.9 GB, 15931539456 bytes
        64 heads, 32 sectors/track, 15193 cylinders, total 31116288 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009eb92

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          32    31115263    15557616    c  W95 FAT32 (LBA)

    The Ubuntu 12.04 installer can't find the partitions on my hard disk; it only finds the device /dev/sda. (Sorry, I'm a new user, so I can't post an image.)

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, and hopefully have you advise me on the matter: let's say you had 30 years of learning and practicing software development in front of you. How would you dedicate your time so that you'd get the biggest bang for your buck? What would you both learn and work on to be a world-class software developer that would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness:
    • Operating Systems: completely internalize the core concepts of an OS, perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature.
    • Programming Languages: this is one of those topics that imho has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books actually; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in.
    • Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day: embedded, web development, mobile development, UX development, distributed, cloud computing and so on.
    • Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter will be slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on.
    • Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a prerequisite these days.
    • Distributed systems: once again, everything's distributed these days; it's getting progressively harder to ignore this reality. Slightly related, perhaps add a solid understanding of how browsers work, since the world seems to be moving so much toward interfacing with everything through a browser.
    • Tools: have a really broad toolset that you're familiar with, one that continuously expands throughout the years.
    • Communication: I think being a great writer, effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated.
    • Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the daylight.
    This is really just a starting list; I'm confident that there's a ton of other material that you need to master. As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!

    Read the article

  • Asus X202e VivoBook, dual boot. How to get around UEFI and have Win8 & Ubuntu?

    - by Nukeface
    I've gotten my hands on an Asus VivoBook X202e. I like it: handy to use, small, etc. etc. Oh, it's the i3 core version. For school I still need Windows * sigh * for the .NET development. (I know, possible in Ubuntu, this n that, but for ease atm wanting to keep it with Win8). So. How to install both on this little thing? I've found a way into the BIOS (before splash screen, mash F2. Works only after reboot, not cold boot). But the whole boot loading setup is different from what I know, and I must've messed up something, because it's been "Attempting Repairs", "Analyzing hard disk", and a bunch of other things for the past 15 minutes. (All I've done is selected "disabled" on Secure Boot; picky as ** Microsoft.) Keeping the original Windows installation is of no concern. Found the product key already and have a clean install waiting. BTW, not trying to leech knowledge, even though this is my first question with no answers yet; I'm more and more active on Stackoverflow. But, especially due to Secure Boot and Windows 8, I'm going over to Ubuntu. Well, more and more anyway, I like my Windows-based games as well ;)

    UPDATE: Managed to do a clean install of Windows 8 Pro. After disabling Secure Boot, I also had to disable fast boot and enable Launch CSM, leaving the option which appeared (Launch PXE OpROM) disabled. Then I rebooted with the USB boot drive I created using the Windows 7 USB DVD Download Tool (scroll down for download link) provided by Microsoft. During the installation, I chose to install a clean version, and therefore deleted the partitions containing the current Windows files. I left the Recovery partition (you never know...). Of course, the new Windows installation did not like this. Apparently Windows cannot be installed on a GPT hard disk. Remember I hadn't changed the partition table; it was still factory default! Minus a few partitions, granted. So I deleted ALL partitions, did a format of the disk, and created a new partition. Et voila, the Windows installation started. FINALLY!

    WONDROUS: After the installation, Windows still had background images located in C:/Users/ ME /AppData/Local/Microsoft/Themes/RoamedThemeFiles/DesktopBackground/ that I had in the previous installation. That's after doing: format, delete partition, cascade partitions, create new partition of different size, format partition, install Windows. It managed to keep the images through all that. Anyone got an idea on that one? It also remembered the settings for the Windows Aero theme...

    UPDATED QUESTION: After all this you'd think I'd have the rest figured out. Wrong. The Ubuntu 12.10 64-bit installer can't read the partitioning of the hdd during the installation. Any ideas on how to fix this so the install for a dual-boot system can proceed? (Preferably without starting anew with Windows as well ;) )

    Read the article

  • ZFS Storage Appliance and LDAP configuration

    - by user13138569
    Let's connect a ZFS Storage Appliance to an OpenLDAP server. First, build the LDAP server itself; here OpenLDAP runs on Solaris 11. Below are the slapd.conf I used and an ldif that registers a user called user01.

    slapd.conf:

        #
        # See slapd.conf(5) for details on configuration options.
        # This file should NOT be world readable.
        #
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/nis.schema

        # Define global ACLs to disable default read access.

        # Do not enable referrals until AFTER you have a working directory
        # service AND an understanding of referrals.
        #referral ldap://root.openldap.org

        pidfile /var/openldap/run/slapd.pid
        argsfile /var/openldap/run/slapd.args

        # Load dynamic backend modules:
        modulepath /usr/lib/openldap
        moduleload back_bdb.la
        # moduleload back_hdb.la
        # moduleload back_ldap.la

        # Sample security restrictions
        # Require integrity protection (prevent hijacking)
        # Require 112-bit (3DES or better) encryption for updates
        # Require 63-bit encryption for simple bind
        # security ssf=1 update_ssf=112 simple_bind=64

        # Sample access control policy:
        # Root DSE: allow anyone to read it
        # Subschema (sub)entry DSE: allow anyone to read it
        # Other DSEs:
        # Allow self write access
        # Allow authenticated users read access
        # Allow anonymous users to authenticate
        # Directives needed to implement policy:
        # access to dn.base="" by * read
        # access to dn.base="cn=Subschema" by * read
        # access to *
        # by self write
        # by users read
        # by anonymous auth
        #
        # if no access controls are present, the default policy
        # allows anyone and everyone to read anything but restricts
        # updates to rootdn. (e.g., "access to * by * read")
        #
        # rootdn can always read and write EVERYTHING!

        #######################################################################
        # BDB database definitions
        #######################################################################

        database bdb
        suffix "dc=oracle,dc=com"
        rootdn "cn=Manager,dc=oracle,dc=com"
        # Cleartext passwords, especially for the rootdn, should
        # be avoid. See slappasswd(8) and slapd.conf(5) for details.
        # Use of strong authentication encouraged.
        rootpw secret
        # The database directory MUST exist prior to running slapd AND
        # should only be accessible by the slapd and slap tools.
        # Mode 700 recommended.
        directory /var/openldap/openldap-data
        # Indices to maintain
        index objectClass eq

    The ldif used for the initial data:

        dn: dc=oracle,dc=com
        objectClass: dcObject
        objectClass: organization
        dc: oracle
        o: oracle

        dn: cn=Manager,dc=oracle,dc=com
        objectClass: organizationalRole
        cn: Manager

        dn: ou=People,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: People

        dn: ou=Group,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: Group

        dn: uid=user01,ou=People,dc=oracle,dc=com
        uid: user01
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        cn: user01
        uidNumber: 10001
        gidNumber: 10000
        homeDirectory: /home/user01
        userPassword: secret
        loginShell: /bin/bash
        shadowLastChange: 10000
        shadowMin: 0
        shadowMax: 99999
        shadowWarning: 14
        shadowInactive: 99999
        shadowExpire: -1

    With the LDAP server up, point the ZFS Storage Appliance at it: under Configuration > SERVICES > LDAP, register the LDAP server and set the Base search DN to the directory's suffix. Entering user01, which is registered in the directory, the appliance recognizes it correctly; entering a user that is not registered produces "Unknown or invalid user". To double-check from a plain client, I configured a Solaris 11 machine as an LDAP client in the same way and verified name resolution with getent:
        # svcadm enable svc:/network/nis/domain:default
        # svcadm enable ldap/client
        # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
        System successfully configured
        # getent passwd user01
        user01:x:10001:10000::/home/user01:/bin/bash

    Name resolution works. Next, check that user01 is mapped correctly on a share exported by the appliance:

        # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
        # su user01
        bash-4.1$ cd /mnt
        bash-4.1$ touch aaa
        bash-4.1$ ls -l
        total 1
        -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa

    The new file is owned by user01, so the LDAP integration with the ZFS Storage Appliance is working!
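    If the appliance reports "Unknown or invalid user" for an account you expected to resolve, it helps to confirm that the directory itself serves the entry before suspecting the appliance's settings. A minimal check (a sketch, assuming the OpenLDAP server at 192.168.56.201 and the anonymous read access configured above):

        $ ldapsearch -x -H ldap://192.168.56.201 -b "dc=oracle,dc=com" "(uid=user01)" uid uidNumber gidNumber homeDirectory

    If this returns the posixAccount attributes, the directory side is fine, and the Base search DN configured on the appliance is the next thing to double-check.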

    Read the article

  • Power cycles on/off 3 times before booting properly from cold start, no other issues (New System)

    - by James
    Relevant specs: Sapphire 5850, Core i7 920, Seasonic X750 power supply, ECS X58B-A2 mobo. From a cold boot, meaning all power totally disconnected at the wall, the system will power on for less than a second and then power off completely. After two seconds of being powered off this will repeat, and on the third "attempt" the computer will boot. To be very specific, here is what happens:

    1. The power is turned on at the wall and on the PSU; the orange standby LED on the mobo is illuminated but the system is off.
    2. I hit the power button on the case or on the mobo itself.
    3. I hear the relay (?) in the PSU closing.
    4. The case light and the mobo power light come on, and the fans start rotating.
    5. Immediately after this I hear some relay click; the power lights extinguish, the fans stop, and the standby light remains on.
    6. Less than 2 seconds pass and the cycle repeats without any intervention from me.
    7. On the third attempt it boots normally and the machine runs perfectly.

    If I do a soft reboot or a full shutdown, the computer starts normally the next time. It's only if I pull the power cord or flick the switch off on the PSU that I get the cycling again; basically any time the standby light on the mobo goes out. I have removed the graphics card and I get the same problem. I have removed the PSU, hotwired it to the ON position, and verified voltages on all lines. The relay does not cycle when I do this. If I connect only the 24-pin ATX connector to the mobo and not the 8-pin ATX12V / CPU connector, then I don't get the cycling: the fans run and the power light stays on, but obviously the system can't boot. Disconnecting all fans has no effect on the problem. My feeling is that it's something to do with the motherboard, like a capacitor that's taking a long time to charge because it's leaking, or something along those lines. But I can't imagine what could be 'wrong' with it and only manifest itself as a problem under these very specific circumstances. Any ideas? Thanks.

    Read the article

  • Wubi won't install on Vista 64-bit

    - by Daok
    First of all, I have already posted this issue at the Ubuntu Forums, without success so far. I downloaded "kubuntu-9.10-desktop-amd64.iso" and mounted it on my Windows Vista 64-bit Ultimate machine, and I downloaded wubi 9.10. The problem is that the installer crashes partway through. Here is the log file: 11-26 21:07 INFO root: === wubi 9.10ubuntu1 rev160 === 11-26 21:07 DEBUG root: Logfile is c:\users\patrick\appdata\local\temp\wubi-9.10ubuntu1-rev160.log 11-26 21:07 DEBUG root: sys.argv = ['main.pyo', '--exefile="Z:\\wubi.exe"'] 11-26 21:07 DEBUG CommonBackend: data_dir=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\data 11-26 21:07 DEBUG WindowsBackend: 7z=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\bin\7z.exe 11-26 21:07 DEBUG CommonBackend: Fetching basic info... 11-26 21:07 DEBUG CommonBackend: original_exe=Z:\wubi.exe 11-26 21:07 DEBUG CommonBackend: platform=win32 11-26 21:07 DEBUG CommonBackend: osname=nt 11-26 21:07 DEBUG CommonBackend: language=fr_CA 11-26 21:07 DEBUG CommonBackend: encoding=cp1252 11-26 21:07 DEBUG WindowsBackend: arch=amd64 11-26 21:07 DEBUG CommonBackend: Parsing isolist=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\data\isolist.ini 11-26 21:07 DEBUG CommonBackend: Adding distro Xubuntu-i386 11-26 21:07 DEBUG CommonBackend: Adding distro Xubuntu-amd64 11-26 21:07 DEBUG CommonBackend: Adding distro Kubuntu-amd64 11-26 21:07 DEBUG CommonBackend: Adding distro Mythbuntu-i386 11-26 21:07 DEBUG CommonBackend: Adding distro Ubuntu-amd64 11-26 21:07 DEBUG CommonBackend: Adding distro Ubuntu-i386 11-26 21:07 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 11-26 21:07 DEBUG CommonBackend: Adding distro Kubuntu-i386 11-26 21:07 DEBUG CommonBackend: Adding distro KubuntuNetbook-i386 11-26 21:07 DEBUG CommonBackend: Adding distro UbuntuNetbookRemix-i386 11-26 21:07 DEBUG WindowsBackend: Fetching host info...
11-26 21:07 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 11-26 21:07 DEBUG WindowsBackend: windows version=vista 11-26 21:07 DEBUG WindowsBackend: windows_version2=Windows (TM) Vista Ultimate 11-26 21:07 DEBUG WindowsBackend: windows_sp=None 11-26 21:07 DEBUG WindowsBackend: windows_build=6002 11-26 21:07 DEBUG WindowsBackend: gmt=-5 11-26 21:07 DEBUG WindowsBackend: country=CA 11-26 21:07 DEBUG WindowsBackend: timezone=America/Montreal 11-26 21:07 DEBUG WindowsBackend: windows_username=Patrick 11-26 21:07 DEBUG WindowsBackend: user_full_name=Patrick 11-26 21:07 DEBUG WindowsBackend: user_directory=C:\Users\Patrick 11-26 21:07 DEBUG WindowsBackend: windows_language_code=1036 11-26 21:07 DEBUG WindowsBackend: windows_language=French 11-26 21:07 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz 11-26 21:07 DEBUG WindowsBackend: bootloader=vista 11-26 21:07 DEBUG WindowsBackend: system_drive=Drive(C: hd 239816.335938 mb free ntfs) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(C: hd 239816.335938 mb free ntfs) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(D: cd 0.0 mb free ) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(E: hd 483619.367188 mb free ntfs) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(G: hd 84606.9375 mb free fat32) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(Z: cd 0.0 mb free cdfs) 11-26 21:07 DEBUG WindowsBackend: uninstaller_path=C:\ubuntu\uninstall-wubi.exe 11-26 21:07 DEBUG WindowsBackend: previous_target_dir=C:\ubuntu 11-26 21:07 DEBUG WindowsBackend: previous_distro_name=Kubuntu 11-26 21:07 DEBUG WindowsBackend: keyboard_id=269029385 11-26 21:07 DEBUG WindowsBackend: keyboard_layout=ca 11-26 21:07 DEBUG WindowsBackend: keyboard_variant= 11-26 21:07 DEBUG CommonBackend: python locale=('fr_CA', 'cp1252') 11-26 21:07 DEBUG CommonBackend: locale=fr_CA.UTF-8 11-26 21:07 DEBUG WindowsBackend: total_memory_mb=4095.99999905 11-26 21:07 DEBUG CommonBackend: Searching ISOs on USB devices 11-26 21:07 DEBUG CommonBackend: Searching for local CDs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Kubuntu Netbook CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether 
C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Kubuntu Netbook CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Kubuntu Netbook CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 
11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Kubuntu Netbook CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether Z:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: parsing info from str=Kubuntu 9.10 "Karmic Koala" - Release amd64 (20091027) 11-26 21:07 DEBUG Distro: parsed info={'name': 'Kubuntu', 'subversion': 'Release', 'version': '9.10', 'build': '20091027', 'codename': 'Karmic Koala', 'arch': 'amd64'} 11-26 21:07 DEBUG Distro: wrong name: Kubuntu != Ubuntu 11-26 21:07 DEBUG Distro: checking whether Z:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: wrong name: Kubuntu != Ubuntu 11-26 21:07 DEBUG Distro: checking whether Z:\ is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: wrong name: Kubuntu != Ubuntu Netbook Remix 11-26 21:07 DEBUG Distro: checking whether Z:\ is a valid Kubuntu CD 11-26 21:07 INFO Distro: Found a valid CD for Kubuntu: Z:\ 11-26 21:07 INFO root: Running the CD menu... 11-26 21:07 DEBUG WindowsFrontend: __init__... 11-26 21:07 DEBUG WindowsFrontend: on_init... 11-26 21:07 INFO WinuiPage: appname=wubi, localedir=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\translations, languages=['fr_CA', 'fr'] 11-26 21:07 INFO WinuiPage: appname=wubi, localedir=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\translations, languages=['fr_CA', 'fr'] 11-26 21:07 INFO root: CD menu finished 11-26 21:07 INFO root: Already installed, running the uninstaller... 11-26 21:07 INFO root: Running the uninstaller... 11-26 21:07 INFO CommonBackend: This is the uninstaller running 11-26 21:07 INFO WinuiPage: appname=wubi, localedir=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\translations, languages=['fr_CA', 'fr'] 11-26 21:07 INFO root: Received settings 11-26 21:07 INFO WinuiPage: appname=wubi, localedir=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\translations, languages=['fr_CA', 'fr'] 11-26 21:07 DEBUG TaskList: # Running tasklist... 11-26 21:07 DEBUG TaskList: ## Running Sauvegarder l'ISO... 
11-26 21:07 DEBUG TaskList: ## Finished Sauvegarder l'ISO 11-26 21:07 DEBUG TaskList: ## Running Supprimer l'entrée pour le programme d'amorçage... 11-26 21:07 DEBUG WindowsBackend: Could not find bcd id 11-26 21:07 DEBUG WindowsBackend: undo_bootini C: 11-26 21:07 DEBUG WindowsBackend: undo_configsys Drive(C: hd 239816.335938 mb free ntfs) 11-26 21:07 DEBUG WindowsBackend: undo_bootini E: 11-26 21:07 DEBUG WindowsBackend: undo_configsys Drive(E: hd 483619.367188 mb free ntfs) 11-26 21:07 DEBUG WindowsBackend: undo_bootini G: 11-26 21:07 DEBUG WindowsBackend: undo_configsys Drive(G: hd 84606.9375 mb free fat32) 11-26 21:07 DEBUG TaskList: ## Finished Supprimer l'entrée pour le programme d'amorçage 11-26 21:07 DEBUG TaskList: ## Running Supprimer le répertoire cible... 11-26 21:07 DEBUG CommonBackend: Deleting C:\ubuntu 11-26 21:07 DEBUG TaskList: ## Finished Supprimer le répertoire cible 11-26 21:07 DEBUG TaskList: ## Running Supprimer la clé du registre... 11-26 21:07 DEBUG TaskList: ## Finished Supprimer la clé du registre 11-26 21:07 DEBUG TaskList: # Finished tasklist 11-26 21:07 INFO root: Almost finished uninstalling 11-26 21:07 INFO root: Finished uninstallation 11-26 21:07 DEBUG CommonBackend: Fetching basic info... 11-26 21:07 DEBUG CommonBackend: original_exe=Z:\wubi.exe 11-26 21:07 DEBUG CommonBackend: platform=win32 11-26 21:07 DEBUG CommonBackend: osname=nt 11-26 21:07 DEBUG WindowsBackend: arch=amd64 11-26 21:07 DEBUG CommonBackend: Parsing isolist=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\data\isolist.ini 11-26 21:07 DEBUG CommonBackend: Adding distro Xubuntu-i386 11-26 21:07 DEBUG CommonBackend: Adding distro Xubuntu-amd64 11-26 21:07 DEBUG CommonBackend: Adding distro Kubuntu-amd64 11-26 21:07 DEBUG CommonBackend: Adding distro Mythbuntu-i386 11-26 21:07 DEBUG CommonBackend: Adding distro Ubuntu-amd64 11-26 21:07 DEBUG CommonBackend: Adding distro Ubuntu-i386 11-26 21:07 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 11-26 21:07 DEBUG CommonBackend: Adding distro Kubuntu-i386 11-26 21:07 DEBUG CommonBackend: Adding distro KubuntuNetbook-i386 11-26 21:07 DEBUG CommonBackend: Adding distro UbuntuNetbookRemix-i386 11-26 21:07 DEBUG WindowsBackend: Fetching host info... 
11-26 21:07 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 11-26 21:07 DEBUG WindowsBackend: windows version=vista 11-26 21:07 DEBUG WindowsBackend: windows_version2=Windows (TM) Vista Ultimate 11-26 21:07 DEBUG WindowsBackend: windows_sp=None 11-26 21:07 DEBUG WindowsBackend: windows_build=6002 11-26 21:07 DEBUG WindowsBackend: gmt=-5 11-26 21:07 DEBUG WindowsBackend: country=CA 11-26 21:07 DEBUG WindowsBackend: timezone=America/Montreal 11-26 21:07 DEBUG WindowsBackend: windows_username=Patrick 11-26 21:07 DEBUG WindowsBackend: user_full_name=Patrick 11-26 21:07 DEBUG WindowsBackend: user_directory=C:\Users\Patrick 11-26 21:07 DEBUG WindowsBackend: windows_language_code=1036 11-26 21:07 DEBUG WindowsBackend: windows_language=French 11-26 21:07 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz 11-26 21:07 DEBUG WindowsBackend: bootloader=vista 11-26 21:07 DEBUG WindowsBackend: system_drive=Drive(C: hd 240512.851563 mb free ntfs) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(C: hd 240512.851563 mb free ntfs) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(D: cd 0.0 mb free ) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(E: hd 483523.867188 mb free ntfs) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(G: hd 84445.65625 mb free fat32) 11-26 21:07 DEBUG WindowsBackend: drive=Drive(Z: cd 0.0 mb free cdfs) 11-26 21:07 DEBUG WindowsBackend: uninstaller_path=None 11-26 21:07 DEBUG WindowsBackend: previous_target_dir=None 11-26 21:07 DEBUG WindowsBackend: previous_distro_name=None 11-26 21:07 DEBUG WindowsBackend: keyboard_id=269029385 11-26 21:07 DEBUG WindowsBackend: keyboard_layout=ca 11-26 21:07 DEBUG WindowsBackend: keyboard_variant= 11-26 21:07 DEBUG WindowsBackend: total_memory_mb=4095.99999905 11-26 21:07 DEBUG CommonBackend: Searching ISOs on USB devices 11-26 21:07 DEBUG CommonBackend: Searching for local CDs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Kubuntu Netbook CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 
11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Kubuntu Netbook CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Kubuntu Netbook CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain 
E:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Kubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Kubuntu Netbook CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Xubuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether G:\ is a valid Mythbuntu CD 11-26 21:07 DEBUG Distro: does not contain G:\casper\filesystem.squashfs 11-26 21:07 DEBUG Distro: checking whether Z:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: wrong name: Kubuntu != Ubuntu 11-26 21:07 DEBUG Distro: checking whether Z:\ is a valid Ubuntu CD 11-26 21:07 DEBUG Distro: wrong name: Kubuntu != Ubuntu 11-26 21:07 DEBUG Distro: checking whether Z:\ is a valid Ubuntu Netbook Remix CD 11-26 21:07 DEBUG Distro: wrong name: Kubuntu != Ubuntu Netbook Remix 11-26 21:07 DEBUG Distro: checking whether Z:\ is a valid Kubuntu CD 11-26 21:07 INFO Distro: Found a valid CD for Kubuntu: Z:\ 11-26 21:07 INFO root: Running the installer... 11-26 21:07 INFO WinuiPage: appname=wubi, localedir=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\translations, languages=['fr_CA', 'fr'] 11-26 21:07 INFO WinuiPage: appname=wubi, localedir=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\translations, languages=['fr_CA', 'fr'] 11-26 21:07 DEBUG WinuiInstallationPage: target_drive=C:, installation_size=17000MB, distro_name=Kubuntu, language=en_US, locale=en_US.UTF-8, username=patrick 11-26 21:07 INFO root: Received settings 11-26 21:07 INFO WinuiPage: appname=wubi, localedir=C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\translations, languages=['en_US', 'en'] 11-26 21:07 DEBUG TaskList: # Running tasklist... 11-26 21:07 DEBUG TaskList: ## Running select_target_dir... 11-26 21:07 INFO WindowsBackend: Installing into C:\ubuntu 11-26 21:07 DEBUG TaskList: ## Finished select_target_dir 11-26 21:07 DEBUG TaskList: ## Running create_dir_structure... 11-26 21:07 DEBUG CommonBackend: Creating dir C:\ubuntu 11-26 21:07 DEBUG CommonBackend: Creating dir C:\ubuntu\disks 11-26 21:07 DEBUG CommonBackend: Creating dir C:\ubuntu\install 11-26 21:07 DEBUG CommonBackend: Creating dir C:\ubuntu\install\boot 11-26 21:07 DEBUG CommonBackend: Creating dir C:\ubuntu\disks\boot 11-26 21:07 DEBUG CommonBackend: Creating dir C:\ubuntu\disks\boot\grub 11-26 21:07 DEBUG CommonBackend: Creating dir C:\ubuntu\install\boot\grub 11-26 21:07 DEBUG TaskList: ## Finished create_dir_structure 11-26 21:07 DEBUG TaskList: ## Running uncompress_target_dir... 
11-26 21:07 DEBUG TaskList: ## Finished uncompress_target_dir 11-26 21:07 DEBUG TaskList: ## Running create_uninstaller... 11-26 21:07 DEBUG WindowsBackend: Copying uninstaller Z:\wubi.exe -> C:\ubuntu\uninstall-wubi.exe 11-26 21:07 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString C:\ubuntu\uninstall-wubi.exe 11-26 21:07 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir C:\ubuntu 11-26 21:07 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Kubuntu 11-26 21:07 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon C:\ubuntu\Kubuntu.ico 11-26 21:07 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 9.10ubuntu1-rev160 11-26 21:07 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Kubuntu 11-26 21:07 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.kubuntu.org 11-26 21:07 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 11-26 21:07 DEBUG TaskList: ## Finished create_uninstaller 11-26 21:07 DEBUG TaskList: ## Running copy_installation_files... 11-26 21:07 DEBUG WindowsBackend: Copying C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\data\custom-installation -> C:\ubuntu\install\custom-installation 11-26 21:07 DEBUG WindowsBackend: Copying C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\winboot -> C:\ubuntu\winboot 11-26 21:07 DEBUG WindowsBackend: Copying C:\Users\Patrick\AppData\Local\Temp\pyl5A09.tmp\data\images\Kubuntu.ico -> C:\ubuntu\Kubuntu.ico 11-26 21:07 DEBUG TaskList: ## Finished copy_installation_files 11-26 21:07 DEBUG TaskList: ## Running get_iso... 11-26 21:07 DEBUG TaskList: New task copy_file 11-26 21:07 DEBUG TaskList: ### Running copy_file... 11-26 21:09 DEBUG TaskList: ### Finished copy_file 11-26 21:09 ERROR TaskList: [Errno 22] Invalid argument Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 209, in copy_file IOError: [Errno 22] Invalid argument 11-26 21:09 DEBUG TaskList: # Cancelling tasklist 11-26 21:09 DEBUG TaskList: New task check_iso 11-26 21:09 ERROR root: [Errno 22]
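    For what it's worth, the traceback above shows the failure in get_iso / copy_file, i.e. while Wubi copies the ISO contents off the mounted virtual drive Z:. A workaround that is often suggested for this (an assumption here, not verified on this exact machine) is to let Wubi pick up the ISO locally instead of copying it from the virtual drive: put wubi.exe and the ISO in the same folder on a real NTFS disk, unmount the virtual drive, and run Wubi from there:

        mkdir C:\wubi
        copy Z:\wubi.exe C:\wubi\
        copy <path-to>\kubuntu-9.10-desktop-amd64.iso C:\wubi\
        rem unmount the virtual Z: drive, then run:
        C:\wubi\wubi.exe

    The <path-to> placeholder stands for wherever the downloaded ISO actually lives; Wubi's CD/ISO search shown in the log also scans the folder it is launched from.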

    Read the article

  • Where to find new Micro-BTX (uBTX) motherboards? Or should I just replace the box?

    - by John Rudy
    OK, so I'm guessing that it's dead. It's not my machine, and the owner is on a very fixed (IE, none) income. I'm generous, but I'm not that generous, since I already gave him what (at the time) was a fully functional and fairly well-equipped machine. (Aside from the mobo and proc, almost nothing else in it was stock. I'd taken it up to 3GB of RAM, upgraded the hard drive, added a decent video card, installed a wireless adapter, running Vista, etc.) According to further research, the machine uses a Micro-BTX (uBTX) motherboard, and since it's an AMD Athlon64, the AM2 socket. So I'm looking at a few options, and am wondering what's the best route to take? Find an AM2 socket uBTX mobo. I can't find them new online anywhere, leading me to believe that this is an obsolete form factor/chip combination. I don't want a refurb or a system pull because, quite honestly, once I deal with this mess, I don't want to go through it again in another year or two. Find an Intel uBTX mobo and a (relatively -- hah, I still want at least a dual-core) inexpensive Intel CPU. At this point, the only things stock in the machine would be the case and the PSU. :) Buy a bare-bones kit (mobo/proc/PSU/case, sometimes even RAM) from somewhere like CompUSA/TigerDirect or Fry's and move all of the other hardware over. This makes life difficult because the copy of Vista is an upgrade, tied to the copy of XP which shipped on the Gateway, which is OEM and won't install on the new box. :) If I change the CPU brand (AMD to Intel), will I need to reinstall Windows, or can it just be reactivated? Where can I actually find a new, in-box, not system pull, not refurb AM2 uBTX mobo? Do they even exist anymore? What kind of money are we talking (US dollars)? The end goal is to get the machine functional again as cheaply as humanly possible. If it were my own machine, I wouldn't even be asking this, I'd be custom-building a new one. However, it's not mine, I'm shelling out of pocket for the fix (plus the work), and thus want to keep that end price low-low-low.

    Read the article

  • uWSGI starts when run as root but not as a service

    - by vittore
    I have an nginx + uwsgi setup for a Flask website. That's my nginx config:

        server {
            listen 80;
            server_name _;
            location /static/ {
                alias /var/www/site/app/static/;
            }
            location / {
                uwsgi_pass 127.0.0.1:5080;
                include uwsgi_params;
            }
        }

    And here is my uwsgi config.xml:

        <uwsgi>
            <socket>127.0.0.1:5080</socket>
            <autoload/>
            <daemonize>/var/log/uwsgi_webapp.log</daemonize>
            <pythonpath>/var/www/site/</pythonpath>
            <module>run:app</module>
            <plugins>python27</plugins>
            <virtualenv>/var/www/venv/</virtualenv>
            <processes>1</processes>
            <enable-threads/>
            <master />
            <harakiri>60</harakiri>
            <max-requests>2000</max-requests>
            <limit-as>512</limit-as>
            <reload-on-as>256</reload-on-as>
            <reload-on-rss>192</reload-on-rss>
            <no-orphans/>
            <vacuum/>
        </uwsgi>

    When I try to start the uwsgi service (service uwsgi start), it says OK, but there is no uwsgi process and I see the following in the log:

        *** Starting uWSGI 1.0.3-debian (64bit) on [Fri Oct 25 00:43:13 2013] ***
        compiled with version: 4.6.3 on 17 July 2012 02:26:54
        current working directory: /
        writing pidfile to /run/uwsgi/app/gsk/pid
        detected binary path: /usr/bin/uwsgi-core
        setgid() to 33
        setuid() to 33
        limiting address space of processes...
        your process address space limit is 536870912 bytes (512 MB)
        your memory page size is 4096 bytes
        *** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers ***
        uwsgi socket 0 bound to TCP address 127.0.0.1:5080 fd 6
        bind(): Permission denied [socket.c line 107]

    However, when I start uwsgi as root:

        uwsgi --socket 127.0.0.1:5080 --module run --callab app --harakiri 15 --harakiri-verbose --logto2 tmp/uwsgi.log

    it starts just fine, and after restarting nginx I can access the website. What could the issue be?
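    Not a diagnosis, but one common way to sidestep bind permission problems with the Debian packaging is to have the app listen on a unix socket that the unprivileged (www-data) worker owns, and point nginx at it. A minimal sketch; the /run/uwsgi/app/site path is an assumption, so match it to whatever directory the init script actually creates for the app:

        In config.xml:

        <socket>/run/uwsgi/app/site/socket</socket>
        <chmod-socket>660</chmod-socket>

        And in the nginx location block:

        uwsgi_pass unix:/run/uwsgi/app/site/socket;

    Since the socket is created by the uwsgi worker itself, there is no privileged bind step left to fail after the setuid() to 33.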

    Read the article

  • XRDP: window manager not starting

    - by niboshi
    I have set up my Ubuntu server so that I can connect and log in to XRDP from Windows Remote Desktop. My problem is that after logging in, no window manager is started. It only displays a single gnome-terminal with no border and a gray meshed background. It seems that /usr/sbin/xrdp-sesman itself is running (from observation of ps and /var/run/xrdp/xrdp-sesman.pid). I put a debugging line like touch /home/myname/aaaaa into ~/startwm.sh and /etc/xrdp/startwm.sh, but the file aaaaa was not generated after logging in, so these scripts have not been executed. (Both of them have chmod +x permission.) Am I missing some configuration file, or is there any way to inspect this further? Any help is appreciated. Thanks.

    Contents of /etc/xrdp/sesman.ini:

        [Globals]
        ListenAddress=127.0.0.1
        ListenPort=3350
        EnableUserWindowManager=0  # or 1
        UserWindowManager=startwm.sh
        DefaultWindowManager=startwm.sh  # or commented out

        [Security]
        AllowRootLogin=1
        MaxLoginRetry=4
        TerminalServerUsers=tsusers
        TerminalServerAdmins=tsadmins

        [Sessions]
        MaxSessions=10
        KillDisconnected=0
        IdleTimeLimit=0
        DisconnectedTimeLimit=0

        [Logging]
        LogFile=/var/log/xrdp-sesman.log
        LogLevel=DEBUG
        EnableSyslog=0
        SyslogLevel=DEBUG

        [X11rdp]
        param1=-bs
        param2=-ac
        param3=-nolisten
        param4=tcp

        [Xvnc]
        param1=-bs
        param2=-ac
        param3=-nolisten
        param4=tcp

    Contents of /var/log/xrdp-sesman.log after logging in:

        [20120402-21:29:34] [CORE ] starting sesman with pid 11064
        [20120402-21:29:34] [INFO ] listening...
        [20120402-21:29:39] [INFO ] scp thread on sck 7 started successfully
        [20120402-21:29:39] [INFO ] granted TS access to user myname
        [20120402-21:29:39] [INFO ] starting Xvnc session...
        [20120402-21:29:40] [INFO ] starting xrdp-sessvc - xpid=11074 - wmpid=11073
        [20120402-21:29:49] [INFO ] session 11072 - user myname - terminated

    Process tree: below is a part of ps aufx output during an xrdp session:

        xrdp     12344 0.0 0.4 22856  8732 ?      Sl  Apr02 0:01 /usr/sbin/xrdp
        root     12346 0.0 0.0 15672  2000 ?      S   Apr02 0:00 /usr/sbin/xrdp-sesman
        root     24346 0.0 0.0  3780   872 ?      S   00:00 0:00  \_ /usr/sbin/xrdp-sessvc 24348 24347
        myname   24347 0.4 0.6 76468 13700 ?      Sl  00:00 0:14  \_ gnome-terminal
        myname   24362 0.0 0.0  2220   716 ?      S   00:00 0:00  |   \_ gnome-pty-helper
        myname   24363 0.0 0.2  6912  5268 pts/13 Ss  00:00 0:00  |   \_ bash
        myname   27902 0.0 0.0  2824  1096 pts/13 R+  00:53 0:00  |       \_ ps aufx
        myname   24348 0.0 0.9 24984 19216 ?      S   00:00 0:01  \_ Xvnc :18 -geometry 1920x1080 -depth 24 -rfbauth /home/myname/.vnc/sesman_myname_passwd -bs -ac -nolisten tcp
        root     24349 0.0 0.0 16596  1304 ?      Sl  00:00 0:00  \_ xrdp-chansrv

    Environment: Ubuntu 11.10 Oneiric; xrdp version: 0.5.0~20100303cvs-6ubuntu2
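    One thing worth trying, given that neither startwm.sh is ever executed: explicitly enable the per-user script and make it exec a full session. A minimal sketch (gnome-session as the session command is an assumption; substitute your preferred window manager):

        In /etc/xrdp/sesman.ini:

        [Globals]
        EnableUserWindowManager=1
        UserWindowManager=startwm.sh

        And a minimal ~/startwm.sh (remember chmod +x):

        #!/bin/sh
        # hypothetical minimal session script; the marker file proves it ran
        touch /home/myname/startwm-ran
        exec gnome-session

    If the marker file still never appears after a restart of xrdp and sesman, the session is dying before sesman reaches the window-manager step, and /var/log/xrdp-sesman.log at LogLevel=DEBUG plus ~/.xsession-errors are the places to look next.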

    Read the article

  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16G of RAM. I use it primarily for compiling and video encoding (and occasional web/db work). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached in tmpfs but writes safely go through to the SSD. I then want files (or blocks) that haven't been read lately on the SSD to be written back to a HDD using a compressed FS or block layer. So basically:

    Reads: check tmpfs, then the SSD, then the HDD.
    Writes: straight to the SSD (for safety), then tmpfs (for speed).
    Periodically, or when space gets low: move the least frequently accessed files down one layer.

    I've seen a few projects of interest. CacheFS, cachefsd and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption); cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than a FS. I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling. I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process, and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached. I've read that tmpfs can use virtual memory. In that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem; I can load grub, kernel and initrd from elsewhere if needed. So that's the background. The question has several components, I guess:

    Recommended FS and/or block layer for the SSD and compressed HDD.
    Recommended mkfs parameters (block size, options, etc.)
    Recommended cache/mount technology to bind the layers transparently
    Required mount parameters
    Required kernel options / patches, etc.
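    On the "giant tmpfs with swap on the SSD" idea specifically: tmpfs pages are swappable, so mechanically this does work, and cold pages will spill to the SSD while hot source files stay in RAM. A minimal sketch, assuming a dedicated SSD partition at /dev/sdb1 (a hypothetical device name) and a build directory on the oversized tmpfs:

        # swap on the SSD; high priority so it is preferred over any HDD swap
        mkswap /dev/sdb1
        swapon -p 10 /dev/sdb1

        # a tmpfs allowed to grow past physical RAM; cold pages go to the SSD swap
        mkdir -p /mnt/build
        mount -t tmpfs -o size=32g tmpfs /mnt/build

    The trade-off is that tmpfs contents do not survive a crash or reboot, so this only covers the read-cache half of the layering; anything that must survive a power loss still has to be written to the SSD/HDD layers explicitly.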

    Read the article

  • Double Layer DVD+R burning problem - I/O Error

    - by Mehper C. Palavuzlar
    I have a modern PC (quad-core CPU, 4 GB RAM, Win7 Home Premium 64-bit), but I have a problem burning .dvd images to double-layer (8.5 GB) DVDs. I've wasted too many DVD+R DL discs, to no avail. Here is a short explanation of what I did. I'm using ImgBurn v2.5.0.0 (latest version). I'm trying to burn an image file (.dvd) which sits together with the related .iso file in the same folder. In ImgBurn, I select the file with the .dvd extension and set the writing speed to 2.4x. The burning process starts normally, but at around 7% it gives an I/O Write Error, which is as follows: I wasted 3 discs (Magic, made in Taiwan, DVD+R DL, 8.5 GB) trying the same thing. My DVD writer is an LG GH22NP20 with an IDE connection. I updated its firmware from 1.04 to 2.00, but again no success in burning. Then my cousin brought his LG (an older model) which, he claims, was successful in writing DL discs of the same brand (Magic). I plugged my LG out, plugged the older one in, and tried to burn the image again. It also gave an I/O error, without even reaching 7%. I tried another burning program (CloneCD) but failed again. Then I bought other brands (TDK and Verbatim) and tried to burn the image. The burning process started successfully but failed again, at around 14% (for Verbatim) and 25% (for TDK). Here is a screeny from ImgBurn: I've burned lots of 4.7 GB DVD+Rs and DVD-Rs with this LG writer successfully, without even a single error, so this case is very bothersome for me. What should I do? Should I buy a new DVD writer other than LG? Could this be related to Windows or my hardware configuration? Thanks for your help.

    Edit: My burner works on my cousin's machine, so the problem must be related to my system. What could the reason be?

    Latest news: I borrowed an external USB DVD writer from a friend, a PHILIPS SPD3000CC (an old model). Guess what! It's burning DVD+R DLs successfully! How come the internal DVD writer of a brand new computer system cannot burn DL DVDs? Now I'm considering buying a new internal DVD writer with a SATA connection instead of IDE...

    Read the article

  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it covers both the distribution of a domain-name-to-many-IP mapping across many DNS servers and replying to clients with the geographically closest (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations), these sound like the two key features one would need. DNS services like Amazon's Route 53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. Therefore my assumption was that each of these DNS services transparently offers me those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seems to separate out the two functionalities, referring to the second one (routing clients to the closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director", and charging extra for it. If a core tenet of an Anycast-capable system is to do this already, why is this functionality being earmarked as an extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia; I understand what is being advertised, just not why it isn't implied already)? I get extra confused when a DNS service like Route 53, which doesn't support this nebulous "GeoDNS" feature, lists functionality like: "Fast – Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs." ...which sounds exactly like what GeoDNS is intended to do, although geographically directing clients is something they explicitly don't support yet. Ultimately I am looking for the two following features from a DNS provider:

    1. Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. do).
    2. Utilize a DNS service that will respond to client requests for that domain with the IP address of the server nearest to the requester.

    As mentioned, it seems like this should all be part of an "Anycast" DNS service (which all of these services are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.
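    As a side note, feature 1 is easy to observe from any shell; this is just an illustration, and the actual answers will vary by domain, resolver and vantage point:

        $ dig +short amazon.com A              # often returns several A records for one name
        $ dig +short google.com A @8.8.8.8     # ask a specific resolver; answers differ by location

    Running the same query from two different networks and comparing the answers is a rough way to see the "closest node" behavior of feature 2 in action as well.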

    Read the article

< Previous Page | 307 308 309 310 311 312 313 314 315 316 317 318  | Next Page >