Search Results



  • Does *every* project benefit from written specifications?

    - by nikie
    I know this is holy war territory, so please read the question to the end before answering. There are many cases where written specifications make a lot of sense. For example, if you're a contractor and you want to get paid, you need written specs. If you're working in a team of 20 people, you need written specs. If you're writing a programming language compiler or interpreter (and it's not Perl), you'll usually write a formal specification. I don't doubt that there are many more cases where written specifications are a really good idea. I just think that there are cases where there's so little benefit in written specs that it doesn't outweigh the costs of writing and maintaining them.
    EDIT: The close votes say that "it is difficult to say what is asked here", so let me clarify: the usefulness of written, detailed specifications is often claimed like a dogma. (If you want examples, look at the comments.) But I don't see the use of them for the kind of development I'm doing. So what is asked here is: how would written specifications help me?
    Background information: I work for a small company that's developing vertical market software. If our product is easier to use and has better performance than the competition, it sells. If it's harder to use, even if it behaves 100% as the specification says, it doesn't sell. So there are no "external forces" for having written specs. The advantage would have to be somewhere in the development process.
    Now, I can see how frozen specifications would make a developer's life easier. But we'll never have frozen specs. If we see in the middle of development that feature X is not intuitive to use the way it's specified, then we can only choose between changing the specification or developing a product that won't sell.
    You'll probably ask by now: how do you know when you're done? Well, we're continually improving our product. The competition does the same. So (hopefully) we're never done. We keep improving the software, and when we reach a point where the benefits of the improvements we've added since the last release outweigh the costs of an update, we create a new release that is then tested, localized, documented and deployed. This also means that there's rarely any schedule pressure; nobody has to do overtime to make a deadline. If a feature isn't done by the time we want to release the next version, it simply goes into the version after that.
    The next question might be: how do your developers know what they're supposed to implement? The answer is: they have a lot of domain knowledge. They know the customers' business well enough that a high-level description of the feature (or even just the problem the customer needs solved) is enough to implement it. If it's not clear, the developer creates a few fake screens to get feedback from marketing/management or customers, but this is nowhere near the level of detail of actual specifications. This might be inefficient for larger teams, but for a small team with low turnover it works quite well. It has the additional benefit that the developer in question often comes up with a better solution than the person writing the specs might have.
    This question is already getting very long, but let me address one last point: testing. Like I said in the beginning, if our software behaves 100% like the spec says, it can still be crap. In fact, if it's so unintuitive that you need a spec to know how to test it, it probably is crap. It makes sense to have fixed, written tests for some core functionality and for regression bugs, but again, this is nowhere near a full written spec of when and how the software should behave. The main test is: hand the software to a user who doesn't know it yet and tell her to use the new feature X. If she can figure out how to use it and it works, it works.

    Read the article

  • Thinking Local, Regional and Global

    - by Apeksha Singh-Oracle
    The FIFA World Cup tournament is the biggest single-sport competition: it’s watched by about 1 billion people around the world. Every four years each national team’s manager is challenged to pull together a group of players who ply their trade across the globe. For example, of the 23 members of Brazil’s national team, only four actually play for Brazilian teams; the rest play in England, France, Germany, Spain, Italy and Ukraine. Each country’s national league, each team and each coach has a unique style. Getting all these “localized” players to work together successfully as one unit is no easy feat. In addition to $35 million in prize money, much is at stake – not least national pride and global bragging rights until the next World Cup in four years’ time.
    Achieving economic integration in the ASEAN region by 2015 is a bit like trying to create the next World Cup champion by 2018. The team comprises Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, the Philippines, Singapore, Thailand and Vietnam. All have different languages, currencies, cultures and customs, rules and regulations. But if they can pull together as one unit, the opportunity is not only great for business and the economy, it’s also a source of regional pride.
    BCG expects that by 2020 the number of firms headquartered in Asia with revenue exceeding $1 billion will double to more than 5,000. Their trade in the region and with the world is forecast to increase to 37% of an estimated $37 trillion of global commerce by 2020, from 30% in 2010. Banks offering transactional banking services to this emerging marketplace need to prepare to respond to customer needs across the spectrum – MSMEs, SMEs, corporates and multinational corporations. Customers want innovative, differentiated, value-added products and services that provide:
    • Pan-regional operational independence while enabling a single source of truth at the regional level
    • Regional connectivity and cash & liquidity optimization
    • A consistent experience for their customers, by offering standardized products & services across all ASEAN countries
    • Multi-channel & self-service capabilities, with access to real-time information on liquidity and cash flows
    • Convergence of cash management with supply chain and trade finance
    While enabling the above to meet customer demands, a comprehensive and robust credit management solution for effective regional banking operations is a must in order to manage risk.
    According to BCG, Asia-Pacific wholesale transaction-banking revenues are expected to triple to $139 billion by 2022, from $46 billion in 2012. To take advantage of the trend, banks will have to manage and maximize their own growth opportunities, compete on a broader scale, manage the complexity within the region and increase efficiency. They’ll also have to choose the right operating model and regional IT platform to offer:
    • Account services
    • Cash & liquidity management
    • Trade services & supply chain financing
    • Payments
    • Securities services
    • Credit and lending
    • Treasury services
    The core platform should be able to balance global needs and local nuances. Certain functions need to be performed at a regional level, while others need to be performed at a country level; financial reporting and regulatory compliance are a case in point. The ASEAN Economic Community is in the final lap of its preparations for the ultimate challenge: becoming a formidable team in the global league.
    Meanwhile, transaction banks are designing their own hat trick: implementing a world-class IT platform, positioning themselves to respond to customer needs and establishing a foundation for revenue generation for years to come.
    Anand Ramachandran, Senior Director, Global Banking Solutions Practice, Oracle Financial Services Global Business Unit

    Read the article

  • Migrating VB6 to HTML5 is not fiction - Customer success story

    - by Webgui
    All of you VB developers, present or past, would probably find it hard to believe that old VB code can be migrated and modernized into the latest .NET-based HTML5 without having to rewrite the application. But we have been working on such tools for the past couple of years and already have several real-world applications that were fully 'transposed' from VB6.
    The solution is called Instant CloudMove and its main tool is called the TranspositionStudio. It is a unique solution that relies on the concept of transposition. Transposition comes from mathematics and music, and refers to exchanging elements while everything else remains the same, or moving an element as-is from one environment to another. This means that we take the source code and put it in a modern technological environment with relatively few adjustments. The concept is based on a set of Mapping Expressions, which are basically links between an element in the source environment and one in the target environment that has the same functionality. About 95% of the code is usually mapped out-of-the-box, and the rest is handled with easy-to-use mapping tools designed for Visual Studio developers, providing them with a familiar environment and concepts for completing the mapping and allowing them to extend and customize existing mapping expressions. The solution is also based on a circular workflow that enables developers to make any changes required until the result is satisfying.
    As opposed to existing migration solutions that offer automation but are usually a “black box” to the user, the transposition concept enables full visibility, flexibility and control over the code and process at all times, while also allowing you to add or change functionality or upgrade the UI within the process and tools.
    This is exactly the case with our customer’s aging VB6 PMS (Property Management System), which needed a technological update as well as a design refresh. The decision was to move the VB6 application, which had about 1 million lines of code, to the latest web technology. Since the application was initially written 13 years ago and has had many upgrades since, the code was bound to be patchy and include unused sections. As a result, the company, Mihshuv Group, considered rewriting the entire application in Java, since it already had the knowledge. A rewrite would allow starting with a clean slate and designing functionality, database architecture and UI without any constraints. On the other hand, a rewrite entails long and detailed specification work as well as thorough QA, and this translates into a long project with high risk and costs.
    So the company looked for a migration solution as an alternative; the research led to Gizmox, and after examining the technology it was decided to perform a hybrid project: an automatic transposition of the core of the VB6 application (200,000 lines of code), while redesigning the UI, adding new functionality, deleting unused code and rewriting about 140 reports with Crystal Reports would be done manually using Visual WebGui development tools.
    The migration part of the project was completed in 65 days by 3 developers from Mihshuv Group guided by Gizmox migration experts, while the rewrite and UI upgrade tasks took about the same time. So in a period of only a few months, Mihshuv Group generated an up-to-date product, written in the latest web technology, with a modern, friendly UI and improved functionality.
    [Screenshots: guest selection screen of the original VB6 PMS vs. the new web-based PMS]
    Compared to the initial plan to rewrite the entire application in Java, the hybrid migration/rewrite approach taken by Mihshuv Group using Gizmox technology proved to be a great decision. In terms of time and cost there were substantial savings: a project priced at a year or more (without taking into account the huge risk and uncertainty) became a project of only a few months. More about this and other customer stories can be found here.

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself: "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates.”
    This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year colleges, to professional schools, to large public research institutions, the many walks of higher ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you could end up tarnishing the reputation of institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst you could spend a lot of time and resources designing a system that would lose credibility with its "customers".
    A lot of institutions I work with already have systems like the one described above in place. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions, several constants are worth noting:
    • Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another, but that provides huge personal fulfillment and reward?
    • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT.
    • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students’ achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success.
    • Data quality is a very big issue. So many disparate systems exist (some on premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data quality policy and tools. Good tools actually exist but are seldom leveraged.
    Don't misunderstand - I think it's a great idea to drive additional transparency and accountability into the system of higher education. And not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist not only to enable this kind of information to be shared, but to capture the very metrics stakeholders care most about, in a way that makes sense in the context of a given institution's "place" in the overall higher ed panoply.

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx
    While cloud computing makes sense for most organizations and countless projects, I have seen customers struggle significantly with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather, it is an account of personal experiences in the field, some of which may or may not apply to your organization.
    Cloud First, Burst? In the rush to cloud adoption, some companies have decided to redesign their core systems with a cloud-first approach. However, a cloud-first approach means that the system may no longer work on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investment in PaaS applications, or both.
    What’s the Problem? Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I have heard many companies claim “it’s cheaper” or “it allows us to scale” without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as “we have a 500% increase in traffic twice a year, so we need to burst into the cloud to avoid doubling our network and server capacity”. Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimal path and timeline to adoption.
    Performance or Scalability? I stopped counting the number of times I heard “the cloud doesn’t scale; our database runs faster on a laptop”. While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system doesn’t scale. Another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system does scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.
    Uptime? Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven’t done so. If you are expecting 99.99% uptime annually, you may be in for a surprise: depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more often) based on the current overall condition of the cloud service provider, most of which is outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and retry gracefully in order to provide service continuity to end users.
    About Herve Roggero: Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.
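    A minimal sketch of the "assume failure and retry gracefully" advice above, in Python. TransientError and the retried operation are hypothetical stand-ins for whatever failure type and SDK call your cloud provider actually exposes:

    import random
    import time

    class TransientError(Exception):
        """Stand-in for a provider-specific transient failure (timeout, throttling, failover)."""

    def call_with_retries(operation, max_attempts=5, base_delay=0.5):
        """Run a zero-argument callable, retrying transient failures with exponential backoff."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except TransientError:
                if attempt == max_attempts:
                    raise  # out of attempts: surface the failure to the caller
                # Back off 0.5s, 1s, 2s, ... plus jitter so clients don't retry in lockstep.
                time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))

    The same pattern (bounded attempts, exponential backoff, jitter) applies whatever the storage or queuing service involved; the point is that the retry is designed in rather than bolted on after the first outage.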

    Read the article

  • Identity in .NET 4.5 – Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library, because many features are now in the box. Along the way I found some other nice additions:
    • ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll().
    • ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!
    • ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.
    • SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.
    • A new session security token handler uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios.
    • No need for a custom service host factory or the federation behavior anymore. WCF can be switched into “WIF mode” with the useIdentityConfiguration switch (odd name though).
    • Tooling has become better, and the new test STS makes it very easy to get started.
    On the other hand – and that was kind of expected – to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:
    • The assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well.
    • No IClaimsPrincipal and IClaimsIdentity anymore.
    • The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.
    • Claim.ClaimType is now Claim.Type.
    • ClaimCollection is now IEnumerable<Claim>.
    • IsSessionMode is now IsReferenceMode.
    • Bootstrap token handling is different now.
    • ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here).
    • Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).
    • SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.
    • Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).
    • The WCF WS-Trust bindings are gone. I think this is a pity; they were *really* useful when doing work with WSTrustChannelFactory.
    Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you'll want to make the move.

    Read the article

  • Modern OpenGL context failure [migrated]

    - by user209347
    OK, I managed to create an OpenGL context with wglCreateContextAttribsARB, with version 3.2 in my attrib struct (so I have initialized a 3.2 OpenGL context). It works, but the strange thing is: when I use e.g. glBindBuffer, I still get an unresolved-symbol linker error. Shouldn't a newer context prevent this? I'm on Windows, BTW; Linux doesn't have to deal with older and newer contexts (it directly supports the core of its version). The code:

    PIXELFORMATDESCRIPTOR pfd;
    HGLRC tmpRC;
    int iFormat;

    if (!(hDC = GetDC(hWnd)))
    {
        CMsgBox("Unable to create a device context. Program will now close.", "Error");
        return false;
    }

    /* Describe the pixel format we want. */
    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = attribs->colorbits;
    pfd.cDepthBits = attribs->depthbits;
    pfd.iLayerType = PFD_MAIN_PLANE;

    if (!(iFormat = ChoosePixelFormat(hDC, &pfd)))
    {
        CMsgBox("Unable to find a suitable pixel format. Program will now close.", "Error");
        return false;
    }
    if (!SetPixelFormat(hDC, iFormat, &pfd))
    {
        CMsgBox("Unable to initialize the pixel formats. Program will now close.", "Error");
        return false;
    }

    /* Create a temporary legacy context so we can query the GL version. */
    if (!(tmpRC = wglCreateContext(hDC)))
    {
        CMsgBox("Unable to create a rendering context. Program will now close.", "Error");
        return false;
    }
    if (!wglMakeCurrent(hDC, tmpRC))
    {
        CMsgBox("Unable to activate the rendering context. Program will now close.", "Error");
        return false;
    }

    strncpy(vers, (char*)glGetString(GL_VERSION), 3);
    vers[3] = '\0';
    if (sscanf(vers, "%i.%i", &glv, &glsubv) != 2)
    {
        CMsgBox("Unable to retrieve the OpenGL version. Program will now close.", "Error");
        return false;
    }

    hRC = NULL;
    if (glv > 2) /* Have OpenGL 3.+ support */
    {
        if ((wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB")))
        {
            int attribs[] = {
                WGL_CONTEXT_MAJOR_VERSION_ARB, glv,
                WGL_CONTEXT_MINOR_VERSION_ARB, glsubv,
                WGL_CONTEXT_FLAGS_ARB, 0,
                0
            };
            /* Replace the temporary context with the modern one. */
            hRC = wglCreateContextAttribsARB(hDC, 0, attribs);
            wglMakeCurrent(NULL, NULL);
            wglDeleteContext(tmpRC);
            if (!wglMakeCurrent(hDC, hRC))
            {
                CMsgBox("Unable to activate the rendering context. Program will now close.", "Error");
                return false;
            }
            moderncontext = true;
        }
    }
    if (hRC == NULL)
    {
        /* Fall back to the legacy context. */
        hRC = tmpRC;
        moderncontext = false;
    }

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, and hopefully get your advice on the matter: let's say you had 30 years of learning and practicing software development in front of you. How would you dedicate your time so that you'd get the biggest bang for your buck? What would you both learn and work on to be a world-class software developer who makes a large impact on the industry and leaves behind a legacy?
    I think that most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by category and establish a base minimum of "software development" greatness:
    • Operating systems: completely internalize the core concepts of an OS; perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature.
    • Programming languages: this is one of those topics that imho has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books actually; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in.
    • Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day: embedded, web development, mobile development, UX development, distributed, cloud computing and so on.
    • Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter are slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on.
    • Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a prerequisite these days.
    • Distributed systems: once again, everything's distributed these days, and it's getting progressively harder to ignore this reality. Slightly related: perhaps add a solid understanding of how browsers work, since the world seems to be moving so much toward interfacing with everything through a browser.
    • Tools: have a really broad toolset that you're familiar with, one that continuously expands throughout the years.
    • Communication: I think being a great writer, an effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated.
    • Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the light of day.
    This is really just a starting list; I'm confident that there's a ton of other material that you need to master. As I mentioned, you'll most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their experiences? I'd really love to know what you think!

    Read the article

  • ZFS Storage Appliance and LDAP integration

    - by user13138569
    This is an example of connecting the ZFS Storage Appliance to OpenLDAP. First, prepare the LDAP server: install OpenLDAP on Solaris 11 and create a user, user01, with the following slapd.conf and ldif.

    slapd.conf:

    #
    # See slapd.conf(5) for details on configuration options.
    # This file should NOT be world readable.
    #
    include /etc/openldap/schema/core.schema
    include /etc/openldap/schema/cosine.schema
    include /etc/openldap/schema/nis.schema

    # Define global ACLs to disable default read access.

    # Do not enable referrals until AFTER you have a working directory
    # service AND an understanding of referrals.
    #referral ldap://root.openldap.org

    pidfile /var/openldap/run/slapd.pid
    argsfile /var/openldap/run/slapd.args

    # Load dynamic backend modules:
    modulepath /usr/lib/openldap
    moduleload back_bdb.la
    # moduleload back_hdb.la
    # moduleload back_ldap.la

    # Sample security restrictions
    #   Require integrity protection (prevent hijacking)
    #   Require 112-bit (3DES or better) encryption for updates
    #   Require 63-bit encryption for simple bind
    # security ssf=1 update_ssf=112 simple_bind=64

    # Sample access control policy:
    #   Root DSE: allow anyone to read it
    #   Subschema (sub)entry DSE: allow anyone to read it
    #   Other DSEs:
    #       Allow self write access
    #       Allow authenticated users read access
    #       Allow anonymous users to authenticate
    #   Directives needed to implement policy:
    # access to dn.base="" by * read
    # access to dn.base="cn=Subschema" by * read
    # access to *
    #   by self write
    #   by users read
    #   by anonymous auth
    #
    # if no access controls are present, the default policy
    # allows anyone and everyone to read anything but restricts
    # updates to rootdn. (e.g., "access to * by * read")
    #
    # rootdn can always read and write EVERYTHING!

    #######################################################################
    # BDB database definitions
    #######################################################################
    database bdb
    suffix "dc=oracle,dc=com"
    rootdn "cn=Manager,dc=oracle,dc=com"
    # Cleartext passwords, especially for the rootdn, should
    # be avoided. See slappasswd(8) and slapd.conf(5) for details.
    # Use of strong authentication encouraged.
    rootpw secret
    # The database directory MUST exist prior to running slapd AND
    # should only be accessible by the slapd and slap tools.
    # Mode 700 recommended.
    directory /var/openldap/openldap-data
    # Indices to maintain
    index objectClass eq

    The ldif to load:

    dn: dc=oracle,dc=com
    objectClass: dcObject
    objectClass: organization
    dc: oracle
    o: oracle

    dn: cn=Manager,dc=oracle,dc=com
    objectClass: organizationalRole
    cn: Manager

    dn: ou=People,dc=oracle,dc=com
    objectClass: organizationalUnit
    ou: People

    dn: ou=Group,dc=oracle,dc=com
    objectClass: organizationalUnit
    ou: Group

    dn: uid=user01,ou=People,dc=oracle,dc=com
    uid: user01
    objectClass: top
    objectClass: account
    objectClass: posixAccount
    objectClass: shadowAccount
    cn: user01
    uidNumber: 10001
    gidNumber: 10000
    homeDirectory: /home/user01
    userPassword: secret
    loginShell: /bin/bash
    shadowLastChange: 10000
    shadowMin: 0
    shadowMax: 99999
    shadowWarning: 14
    shadowInactive: 99999
    shadowExpire: -1

    With the LDAP server ready, point the ZFS Storage Appliance at it: under Configuration > SERVICES > LDAP, set the Base search DN to the base DN of the LDAP server. Then try mapping the LDAP-registered user01 on the appliance. At first the lookup failed and the user was shown as "Unknown or invalid user". To verify things from the OS side, a Solaris 11 machine was configured as an LDAP client and the account checked with getent:

    # svcadm enable svc:/network/nis/domain:default
    # svcadm enable ldap/client
    # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
    System successfully configured
    # getent passwd user01
    user01:x:10001:10000::/home/user01:/bin/bash

    After this, user01 resolves correctly. Mounting a share from the appliance and writing to it as user01 also works:

    # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
    # su user01
    bash-4.1$ cd /mnt
    bash-4.1$ touch aaa
    bash-4.1$ ls -l
    total 1
    -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa

    The LDAP-registered user can now be used with the ZFS Storage Appliance!

    Read the article

  • Power cycles on/off 3 times before booting properly from cold start, no other issues (New System)

    - by James
    Relevant specs: Sapphire 5850, Core i7 920, Seasonic X750 power supply, ECS X58B-A2 mobo.
    From a cold boot, meaning all power totally disconnected at the wall, the system will power on for less than a second and then power off completely. After two seconds of being powered off this will repeat, and on the third "attempt" the computer will boot. To be very specific, here is what happens:
    • The power is turned on at the wall and on the PSU; the orange standby LED on the mobo is illuminated but the system is off.
    • I hit the power button on the case or on the mobo itself.
    • I hear the relay (?) in the PSU closing.
    • The case light comes on and the mobo power light comes on. The fans start rotating.
    • Immediately after this I hear some relay click: the power lights extinguish, the fans stop, the standby light remains on.
    • Less than 2 seconds pass and the cycle repeats without any intervention from me.
    • On the third attempt it boots normally and the machine runs perfectly.
    If I do a soft reboot or a full shutdown, the computer starts normally the next time. It's only if I pull the power cord or flick the switch off on the PSU that I get the cycling again; basically any time the standby light on the mobo goes out.
    I have removed the graphics card and I get the same problem. I have removed the PSU, hotwired it to the ON position and verified voltages on all lines; the relay does not cycle when I do this. If I connect only the 24-pin ATX connector to the mobo and not the 8-pin ATX12V / CPU connector, then I do not get the cycling: the fans run, the power light stays on, but obviously the system can't boot. Disconnecting all fans has no effect on the problem.
    My feeling is that it's something to do with the motherboard, like a capacitor that's taking a long time to charge because it's leaking, or something along those lines. But I can't imagine what could be 'wrong' with it and only manifest itself as a problem under these very specific circumstances. Any ideas? Thanks.

    Read the article

  • Uwsgi starts as root but not as a service

    - by vittore
    I have an nginx + uwsgi setup for a Flask website. This is my nginx config:

    server {
        listen 80;
        server_name _;
        location /static/ {
            alias /var/www/site/app/static/;
        }
        location / {
            uwsgi_pass 127.0.0.1:5080;
            include uwsgi_params;
        }
    }

    And here is my uwsgi config.xml:

    <uwsgi>
        <socket>127.0.0.1:5080</socket>
        <autoload/>
        <daemonize>/var/log/uwsgi_webapp.log</daemonize>
        <pythonpath>/var/www/site/</pythonpath>
        <module>run:app</module>
        <plugins>python27</plugins>
        <virtualenv>/var/www/venv/</virtualenv>
        <processes>1</processes>
        <enable-threads/>
        <master />
        <harakiri>60</harakiri>
        <max-requests>2000</max-requests>
        <limit-as>512</limit-as>
        <reload-on-as>256</reload-on-as>
        <reload-on-rss>192</reload-on-rss>
        <no-orphans/>
        <vacuum/>
    </uwsgi>

    When I try to start the uwsgi service (service uwsgi start) it says OK, but there is no uwsgi process, and I see the following in the log:

    *** Starting uWSGI 1.0.3-debian (64bit) on [Fri Oct 25 00:43:13 2013] ***
    compiled with version: 4.6.3 on 17 July 2012 02:26:54
    current working directory: /
    writing pidfile to /run/uwsgi/app/gsk/pid
    detected binary path: /usr/bin/uwsgi-core
    setgid() to 33
    setuid() to 33
    limiting address space of processes...
    your process address space limit is 536870912 bytes (512 MB)
    your memory page size is 4096 bytes
    *** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers ***
    uwsgi socket 0 bound to TCP address 127.0.0.1:5080 fd 6
    bind(): Permission denied [socket.c line 107]

    However, when I start uwsgi as root:

    uwsgi --socket 127.0.0.1:5080 --module run --callab app --harakiri 15 --harakiri-verbose --logto2 tmp/uwsgi.log

    it starts just fine, and after restarting nginx I can access the website. What could be the issue?

    Read the article

  • XRDP: window manager not starting

    - by niboshi
    I have set up my Ubuntu server so that I can connect and log in to XRDP from Windows Remote Desktop. My problem is that after logging in, no window manager is started; it only displays a single gnome-terminal with no border and a gray meshed background. It seems that /usr/sbin/xrdp-sesman itself is running (from observation of ps and /var/run/xrdp/xrdp-sesman.pid). I put a debugging line like touch /home/myname/aaaaa into ~/startwm.sh and /etc/xrdp/startwm.sh, but the file aaaaa was not generated after logging in, so these scripts have not been executed. (Both of them have chmod +x permission.) Am I missing some configuration file, or is there any way of inspecting further? Any help is appreciated. Thanks.

    Contents of /etc/xrdp/sesman.ini:

    [Globals]
    ListenAddress=127.0.0.1
    ListenPort=3350
    EnableUserWindowManager=0   # or 1
    UserWindowManager=startwm.sh
    DefaultWindowManager=startwm.sh   # or commented out

    [Security]
    AllowRootLogin=1
    MaxLoginRetry=4
    TerminalServerUsers=tsusers
    TerminalServerAdmins=tsadmins

    [Sessions]
    MaxSessions=10
    KillDisconnected=0
    IdleTimeLimit=0
    DisconnectedTimeLimit=0

    [Logging]
    LogFile=/var/log/xrdp-sesman.log
    LogLevel=DEBUG
    EnableSyslog=0
    SyslogLevel=DEBUG

    [X11rdp]
    param1=-bs
    param2=-ac
    param3=-nolisten
    param4=tcp

    [Xvnc]
    param1=-bs
    param2=-ac
    param3=-nolisten
    param4=tcp

    Contents of /var/log/xrdp-sesman.log after logging in:

    [20120402-21:29:34] [CORE ] starting sesman with pid 11064
    [20120402-21:29:34] [INFO ] listening...
    [20120402-21:29:39] [INFO ] scp thread on sck 7 started successfully
    [20120402-21:29:39] [INFO ] granted TS access to user myname
    [20120402-21:29:39] [INFO ] starting Xvnc session...
    [20120402-21:29:40] [INFO ] starting xrdp-sessvc - xpid=11074 - wmpid=11073
    [20120402-21:29:49] [INFO ] session 11072 - user myname - terminated

    Process tree (part of ps aufx output during an xrdp session):

    xrdp     12344  0.0  0.4  22856  8732 ?      Sl   Apr02  0:01 /usr/sbin/xrdp
    root     12346  0.0  0.0  15672  2000 ?      S    Apr02  0:00 /usr/sbin/xrdp-sesman
    root     24346  0.0  0.0   3780   872 ?      S    00:00  0:00  \_ /usr/sbin/xrdp-sessvc 24348 24347
    myname   24347  0.4  0.6  76468 13700 ?      Sl   00:00  0:14  \_ gnome-terminal
    myname   24362  0.0  0.0   2220   716 ?      S    00:00  0:00  |   \_ gnome-pty-helper
    myname   24363  0.0  0.2   6912  5268 pts/13 Ss   00:00  0:00  |   \_ bash
    myname   27902  0.0  0.0   2824  1096 pts/13 R+   00:53  0:00  |       \_ ps aufx
    myname   24348  0.0  0.9  24984 19216 ?      S    00:00  0:01  \_ Xvnc :18 -geometry 1920x1080 -depth 24 -rfbauth /home/myname/.vnc/sesman_myname_passwd -bs -ac -nolisten tcp
    root     24349  0.0  0.0  16596  1304 ?      Sl   00:00  0:00  \_ xrdp-chansrv

    Environment: Ubuntu 11.10 Oneiric; xrdp version 0.5.0~20100303cvs-6ubuntu2

    Read the article

  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16G of RAM. I use it primarily for compiling and video encoding (and occasional web/db). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD raid to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached on tmpfs but writes safely go through to the SSD. I then want files (or blocks) that haven't been read lately on the SSD to be written back to a HDD using a compressed FS or block layer.
    So basically reads:
    • Check tmpfs
    • Check SSD
    • Check HD
    And writes:
    • Straight to SSD (for safety), then tmpfs (for speed)
    And periodically, or when space gets low:
    • Move least frequently accessed files down one layer.
    I've seen a few projects of interest: CacheFS, cachefsd and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption), and cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than a FS.
    I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling. I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process, and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached.
    I've read that tmpfs can use virtual memory. In that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem; I can load grub, kernel and initrd from elsewhere if needed.
    So that's the background. The question has several components, I guess:
    • Recommended FS and/or block layer for the SSD and compressed HDD
    • Recommended mkfs parameters (block size, options etc...)
    • Recommended cache/mount technology to bind the layers transparently
    • Required mount parameters
    • Required kernel options / patches, etc.
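    For what it's worth, the read/write policy described above can be modeled in a few lines of Python. This is purely illustrative (in-memory dicts stand in for the real tmpfs/SSD/HDD layers), not a suggested implementation:

    class TieredStore:
        def __init__(self):
            # Fastest to slowest; each "layer" is just a dict of path -> data here.
            self.tmpfs, self.ssd, self.hdd = {}, {}, {}

        def read(self, path):
            # Reads: check tmpfs, then SSD, then HDD; promote hits into tmpfs.
            for layer in (self.tmpfs, self.ssd, self.hdd):
                if path in layer:
                    self.tmpfs[path] = layer[path]
                    return layer[path]
            raise FileNotFoundError(path)

        def write(self, path, data):
            # Writes: straight to SSD (for safety), then tmpfs (for speed).
            self.ssd[path] = data
            self.tmpfs[path] = data

        def demote_cold(self, cold_paths):
            # Periodically move least-used entries down one layer.
            for path in cold_paths:
                if path in self.tmpfs:
                    del self.tmpfs[path]  # a copy remains on SSD or HDD
                elif path in self.ssd:
                    self.hdd[path] = self.ssd.pop(path)

    The model makes one design point explicit: promotion copies data upward while demotion moves it downward, so the authoritative copy always lives in a persistent layer.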

    Read the article

  • Setting up Edimax EW-7206APg as Universal Repeater

    - by Ondra Žižka
    Hi, I'm having trouble setting up the Edimax EW-7206APg as a Universal Repeater. I've read a few manuals, but they are unclear on certain points. I've managed to get the repeater into a "connected" state. I've set the same WPA passphrase as the router has, because I haven't seen any other place to set it. These are my settings:

    System Uptime: 0day:1h:33m:11s
    Hardware Version: Rev. A
    Runtime Code Version: 1.32

    Wireless Configuration:
    Mode: Universal Repeater
    ESSID: edimax
    Channel Number: 6
    Security: WPA-shared key
    BSSID: 00:c0:9f:40:bd:38
    Associated Clients: 0

    Wireless Repeater Interface Configuration:
    ESSID: Dusan
    Security: WPA
    BSSID: 00:4f:62:23:8f:7e
    State: Connected

    LAN Configuration:
    IP Address: 192.168.0.10
    Subnet Mask: 255.255.255.0
    Default Gateway: 192.168.0.1
    MAC Address: 00:c0:9f:40:bd:37

    This is ipconfig /all (field labels translated from Czech):

    Connection-specific DNS Suffix: riomail.cz
    Description: Intel(R) PRO/Wireless 2200BG Network Connection
    Physical Address: 00-0E-35-3D-77-68
    DHCP Enabled: Yes
    Autoconfiguration Enabled: Yes
    IP Address: 192.168.0.5
    Subnet Mask: 255.255.255.0
    Default Gateway: 192.168.0.1
    DHCP Server: 192.168.0.1
    DNS Servers: 94.74.192.252, 94.74.192.244

    I can ping the repeater, and I can ping the root AP, but not a DNS server or any other IP beyond the root AP. Does anyone have an idea what's wrong? Thanks, Ondra

    Read the article

  • Double Layer DVD+R burning problem - I/O Error

    - by Mehper C. Palavuzlar
    I have a modern PC (quad-core CPU, 4 GB RAM, Win7 Home Premium 64-bit), but I have a problem burning .dvd images to double-layer (8.5 GB) DVDs. I have wasted many DVD+R DL discs, to no avail. Here is a short explanation of what I did.
    I'm using ImgBurn v2.5.0.0 (the latest version). I'm trying to burn an image file (.dvd) which sits together with the related .iso file in the same folder. In ImgBurn, I select the file with the .dvd extension and set the writing speed to 2.4x. The burning process starts normally, but at around 7% it gives an I/O write error [screenshot: I/O write error]. I wasted 3 discs (Magic, made in Taiwan, DVD+R DL, 8.5 GB) trying the same thing.
    My DVD writer is an LG GH22NP20 with an IDE connection. I updated its firmware from 1.04 to 2.00, but burning still failed. Then my cousin brought his LG (an older model) which, he claims, successfully wrote DL discs of the same brand (Magic). I unplugged my LG, plugged the older one in, and tried to burn the image again. It also gave an I/O error, without even reaching 7%. I tried another burning program (CloneCD) but failed again. Then I bought other brands (TDK and Verbatim) and tried to burn the image. The burning process started successfully but failed again at around 14% (for Verbatim) and 25% (for TDK). Here is a screeny from ImgBurn: [screenshot: I/O write error]
    I've burned lots of 4.7 GB DVD+Rs and DVD-Rs successfully with this LG writer, without a single error, but this case is very bothersome. What should I do? Should I buy a new DVD writer other than LG? Could this be related to Windows or my hardware configuration? Thanks for your help.
    Edit: My burner works on my cousin's machine, so the problem must be related to my system. What could be the reason?
    Latest news: I borrowed an external USB DVD writer from a friend, a PHILIPS SPD3000CC (an old model). Guess what! It's burning DVD+R DLs successfully! How come an internal DVD writer in a brand-new computer system cannot burn DL DVDs? Now I'm considering buying a new internal DVD writer with a SATA connection instead of IDE...

    Read the article

  • Ubuntu: Failure to login with multiple video adapters

    - by tsilb
    Forgive my ignorance, for I am a complete Linux noob. I have a computer with three video cards and six monitors. Works great on Windows. Trying to get it to run Ubuntu as well. It loads fine when I have it configured to run on one adapter; it detects both screens and runs OK. But I want to turn the other 4 monitors on and run the whole thing as one extended desktop (one session, etc.). So I downloaded and installed the newest ATI driver for Linux, which seems to work, kinda. I ran this to set up the screens:
    aticonfig --adapter=all --initial -f
    Now when I boot, Ubuntu seems to turn on all the screens (3 viewports, each with two cloned displays, from what I can tell). When I enter my login info OR move the mouse off the main screen, the screens freeze and the kbd/ms become unresponsive. The aticonfig-generated xorg.conf is included below.
    I have tried the following:
    • aticonfig -initial -f: works, but only detects the primary adapter and 2 screens
    • aticccle: tells me I have to reboot after enabling the other cards, then goes into the above-described freezing state
    • aticonfig --adapter=all --initial -f: see above
    • Manually editing xorg.conf with my limited knowledge: was able to get two adapters running, but only the second adapter initialized while the primary stopped at the Ubuntu boot screen, so I was unable to see the login prompt; it froze after I logged in blindly (I was able to hear the login sound)
    • Using the generic "radeon" driver instead of the ATI proprietary driver with the above init attempts
    • Toggling xinerama
    • Various combinations of the above
    Hardware:
    • Intel Core 2 Quad Q6600
    • 8 GB DDR2
    • 3x ATI Radeon HD 4680
    • 5 monitors (21W, 21W, 22W portrait, 22W portrait, 19") and an HDTV (26"W, HDMI) in a horizontal arrangement
    I know next to nothing about Linux/Ubuntu aside from basic filesystem navigation, editing text files, and accessing my local and networked Windows stores and shares. Basically this is the most advanced thing I've had to do, and I installed today. Please advise how to make this configuration work.
    My xorg.conf:

    Section "ServerLayout"
        Identifier     "Layout0"
        Screen      0  "aticonfig-Screen[0]-0" 0 0
        Screen         "aticonfig-Screen[1]-0" RightOf "aticonfig-Screen[0]-0"
        Screen         "aticonfig-Screen[2]-0" RightOf "aticonfig-Screen[1]-0"
        Option         "RenderAccel" "true"
        Option         "AllowGLXWithComposite" "true"
    EndSection

    Section "Files"
    EndSection

    Section "Module"
    EndSection

    Section "ServerFlags"
        Option         "Xinerama" "0"
    EndSection

    Section "Monitor"
        Identifier     "aticonfig-Monitor[0]-0"
        Option         "VendorName" "ATI Proprietary Driver"
        Option         "ModelName" "Generic Autodetecting Monitor"
        Option         "DPMS" "true"
    EndSection

    Section "Monitor"
        Identifier     "aticonfig-Monitor[1]-0"
        Option         "VendorName" "ATI Proprietary Driver"
        Option         "ModelName" "Generic Autodetecting Monitor"
        Option         "DPMS" "true"
    EndSection

    Section "Monitor"
        Identifier     "aticonfig-Monitor[2]-0"
        Option         "VendorName" "ATI Proprietary Driver"
        Option         "ModelName" "Generic Autodetecting Monitor"
        Option         "DPMS" "true"
    EndSection

    Section "Device"
        Identifier     "aticonfig-Device[0]-0"
        Driver         "fglrx"
        BusID          "PCI:1:0:0"
    EndSection

    Section "Device"
        Identifier     "aticonfig-Device[1]-0"
        Driver         "fglrx"
        BusID          "PCI:3:0:0"
    EndSection

    Section "Device"
        Identifier     "aticonfig-Device[2]-0"
        Driver         "fglrx"
        BusID          "PCI:4:0:0"
    EndSection

    Section "Screen"
        Identifier     "aticonfig-Screen[0]-0"
        Device         "aticonfig-Device[0]-0"
        Monitor        "aticonfig-Monitor[0]-0"
        DefaultDepth   24
        SubSection "Display"
            Viewport   0 0
            Depth      24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier     "aticonfig-Screen[1]-0"
        Device         "aticonfig-Device[1]-0"
        Monitor        "aticonfig-Monitor[1]-0"
        DefaultDepth   24
        SubSection "Display"
            Viewport   0 0
            Depth      24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier     "aticonfig-Screen[2]-0"
        Device         "aticonfig-Device[2]-0"
        Monitor        "aticonfig-Monitor[2]-0"
        DefaultDepth   24
        SubSection "Display"
            Viewport   0 0
            Depth      24
        EndSubSection
    EndSection

    Read the article

  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP mapping across many DNS servers and replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations), this sounds like the two key features one would need. DNS services like Amazon's Route 53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. Therefore my assumption is that each of these DNS services transparently offers me those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node.
    However, each of these services seems to separate out these two functionalities, referring to the second one (routing clients to the closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director", and charging extra for it. If a core tenet of an Anycast-capable system is to already do this, why is this functionality being earmarked as an extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia; I understand what is being advertised, just not why it isn't implied already)?
    I get extra confused when a DNS service like Route 53 that doesn't support this nebulous "GeoDNS" feature lists functionality like: "Fast – Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs." ...which sounds exactly like what GeoDNS is intended to do, even though geographically directing clients is something they explicitly don't support yet.
    Ultimately I am looking for the following two features from a DNS provider:
    • Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. do)
    • Utilize a DNS service that will respond to client requests for that domain with the IP address of the server nearest the requester.
    As mentioned, it seems like this is all part of an "Anycast" DNS service (which all of these services are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.
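    One hedged way to poke at the difference yourself is to ask the same question of different resolvers and compare the A records that come back. The Python sketch below assumes the dnspython package (2.x) is installed; the resolver IPs and the queried name are just examples:

    import dns.resolver  # pip install dnspython

    def a_records(name, nameserver):
        # Ask one specific resolver for the A records of a name.
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        return sorted(rr.address for rr in resolver.resolve(name, "A"))

    # With a GeoDNS-backed name, resolvers in different regions may see
    # different record sets; a plain anycast DNS network can still hand
    # every client the same records, just answered by a nearby DNS server.
    for server in ["8.8.8.8", "1.1.1.1"]:
        print(server, a_records("www.google.com", server))

    This also illustrates the distinction in the question: anycast shortens the path to the DNS server that answers, while GeoDNS changes the answer itself.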

    Read the article

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    Hello. I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open-source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual-head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this.
    Hardware config #1 (dual monitors work for Windows XP Pro like this):
    • First display -> external VGA port
    • Second display -> DVI input on gfx card
    Hardware config #2 (dual monitors work for Ubuntu 10.04.1 like this):
    • First display -> VGA port on gfx card
    • Second display -> DVI input on gfx card
    I connected up the displays according to config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to log in, as the login box was not visible. I unplugged the VGA lead from the gfx card and plugged it into the external VGA port (config #1): Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual-head setup for Ubuntu using config #1, or am I missing something? I tried setting up dual monitors using config #1 this morning, which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using:

    $ sudo X :2 -configure

    X.Org X Server 1.7.6
    Release Date: 2010-03-17
    X Protocol Version 11, Revision 0
    Build Operating System: Linux 2.6.24-27-server i686 Ubuntu
    Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686
    Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash
    Build Date: 21 July 2010 12:47:34PM
    xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support)
    Current version of pixman: 0.16.4
    Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
    Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
    (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010
    List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa
    (++) Using config file: "/home/michael/xorg.conf.new"
    (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
    (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.
    Xorg detected your mouse at device /dev/input/mice. Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol.
    Xorg has configured a multihead system, please check your config.
    Your xorg.conf file is /home/michael/xorg.conf.new
    To test the server, run 'X -config /home/michael/xorg.conf.new'
    ddxSigGiveUp: Closing log

    $ sudo X -config /home/michael/xorg.conf.new

    Fatal server error:
    Server is already active for display 0
    If this server is no longer running, remove /tmp/.X0-lock and start again.
    Please consult the The X.Org Foundation support at http://wiki.x.org for help.
    ddxSigGiveUp: Closing log

    I then booted Ubuntu in failsafe mode, dropped into a root shell, and executed X -config /home/michael/xorg.conf.new again.
The screen went blank and turned off, so I reset the machine. There must be a way round this. Any help to set up a dual head config for Ubuntu using Config #1 would be hugely appreciated. TIA, Mike

    Read the article

  • How to solve "403 Forbidden" on CentOS 6 with SELinux disabled?

    - by André
    I have a machine on Linode that is driving me crazy. Linode does not enable SELinux on CentOS 6. I'm trying to configure Apache to serve my website from "/home/websites/public_html/mysite.com/public". As I don't have SELinux enabled, how can I get rid of the "403 Forbidden" I get when trying to access the webpage? Sorry for my English. Best regards.

    Update 1, error_log:

    [Mon Oct 17 14:04:16 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:08:07 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:10:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:10:41 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:32:35 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 14:34:45 2011] [error] [client 58.218.199.227] (13)Permission denied: access to /proxy-1.php denied
    [Mon Oct 17 15:32:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 15:37:26 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 15:37:43 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 15:38:32 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
    [Mon Oct 17 15:42:56 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
    [Mon Oct 17 15:43:12 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
    [Mon Oct 17 15:45:34 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
    [Mon Oct 17 15:51:25 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Update 2, /home/websites directory:

    drwx------  3 websites websites 4096 Oct 17 14:52 .
    drwxr-xr-x. 3 root     root     4096 Oct 17 13:42 ..
    -rw-------  1 websites websites  372 Oct 17 14:52 .bash_history
    -rw-r--r--  1 websites websites   18 May 30 11:46 .bash_logout
    -rw-r--r--  1 websites websites  176 May 30 11:46 .bash_profile
    -rw-r--r--  1 websites websites  124 May 30 11:46 .bashrc
    drwxrwxr-x  3 websites apache   4096 Oct 17 13:45 public_html

    Update 3, httpd.conf:

    ### Section 1: Global Environment
    ServerTokens OS
    ServerRoot "/etc/httpd"
    PidFile run/httpd.pid
    Timeout 60
    KeepAlive Off
    MaxKeepAliveRequests 100
    KeepAliveTimeout 15
    <IfModule prefork.c>
    StartServers 8
    MinSpareServers 5
    MaxSpareServers 20
    ServerLimit 256
    MaxClients 256
    MaxRequestsPerChild 4000
    </IfModule>
    <IfModule worker.c>
    StartServers 4
    MaxClients 300
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadsPerChild 25
    MaxRequestsPerChild 0
    </IfModule>
    #Listen 12.34.56.78:80
    Listen 80
    LoadModule auth_basic_module modules/mod_auth_basic.so
    LoadModule auth_digest_module modules/mod_auth_digest.so
    LoadModule authn_file_module modules/mod_authn_file.so
    LoadModule authn_alias_module modules/mod_authn_alias.so
    LoadModule authn_anon_module modules/mod_authn_anon.so
    LoadModule authn_dbm_module modules/mod_authn_dbm.so
    LoadModule authn_default_module modules/mod_authn_default.so
    LoadModule authz_host_module modules/mod_authz_host.so
    LoadModule authz_user_module modules/mod_authz_user.so
    LoadModule authz_owner_module modules/mod_authz_owner.so
    LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
    LoadModule authz_dbm_module modules/mod_authz_dbm.so
    LoadModule authz_default_module modules/mod_authz_default.so
    LoadModule ldap_module modules/mod_ldap.so
    LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
    LoadModule include_module modules/mod_include.so
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule logio_module modules/mod_logio.so
    LoadModule env_module modules/mod_env.so
    LoadModule ext_filter_module modules/mod_ext_filter.so
    LoadModule mime_magic_module modules/mod_mime_magic.so
    LoadModule expires_module modules/mod_expires.so
    LoadModule deflate_module modules/mod_deflate.so
    LoadModule headers_module modules/mod_headers.so
    LoadModule usertrack_module modules/mod_usertrack.so
    LoadModule setenvif_module modules/mod_setenvif.so
    LoadModule mime_module modules/mod_mime.so
    LoadModule dav_module modules/mod_dav.so
    LoadModule status_module modules/mod_status.so
    LoadModule autoindex_module modules/mod_autoindex.so
    LoadModule info_module modules/mod_info.so
    LoadModule dav_fs_module modules/mod_dav_fs.so
    LoadModule vhost_alias_module modules/mod_vhost_alias.so
    LoadModule negotiation_module modules/mod_negotiation.so
    LoadModule dir_module modules/mod_dir.so
    LoadModule actions_module modules/mod_actions.so
    LoadModule speling_module modules/mod_speling.so
    LoadModule userdir_module modules/mod_userdir.so
    LoadModule alias_module modules/mod_alias.so
    LoadModule substitute_module modules/mod_substitute.so
    LoadModule rewrite_module modules/mod_rewrite.so
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
    LoadModule proxy_connect_module modules/mod_proxy_connect.so
    LoadModule cache_module modules/mod_cache.so
    LoadModule suexec_module modules/mod_suexec.so
    LoadModule disk_cache_module modules/mod_disk_cache.so
    LoadModule cgi_module modules/mod_cgi.so
    LoadModule version_module modules/mod_version.so
    Include conf.d/*.conf
    #ExtendedStatus On
    User apache
    Group apache
ServerAdmin root@localhost #ServerName www.example.com:80 UseCanonicalName Off DocumentRoot "/var/www/html" # # Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # <Directory /> Options FollowSymLinks AllowOverride None </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # This should be changed to whatever you set DocumentRoot to. # <Directory "/home/websites/public_html"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#options # for more information. # Options Indexes FollowSymLinks # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. # Order allow,deny Allow from all </Directory> # # UserDir: The name of the directory that is appended onto a user's home # directory if a ~user request is received. # # The path to the end user account 'public_html' directory must be # accessible to the webserver userid. This usually means that ~userid # must have permissions of 711, ~userid/public_html must have permissions # of 755, and documents contained therein must be world-readable. # Otherwise, the client will only receive a "403 Forbidden" message. # # See also: http://httpd.apache.org/docs/misc/FAQ.html#forbidden # <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir disabled # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # #UserDir public_html </IfModule> # # Control access to UserDir directories. The following is an example # for a site where these directories are restricted to read-only. # #<Directory /home/*/public_html> # AllowOverride FileInfo AuthConfig Limit # Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec # <Limit GET POST OPTIONS> # Order allow,deny # Allow from all # </Limit> # <LimitExcept GET POST OPTIONS> # Order deny,allow # Deny from all # </LimitExcept> #</Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # # The index.html.var file (a type-map) is used to deliver content- # negotiated documents. The MultiViews Option can be used for the # same purpose, but it is much slower. # DirectoryIndex index.html index.html.var # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. 
# <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy All </Files> # # TypesConfig describes where the mime.types file (or equivalent) is # to be found. # TypesConfig /etc/mime.types # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # <IfModule mod_mime_magic.c> # MIMEMagicFile /usr/share/magic.mime MIMEMagicFile conf/magic </IfModule> # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off #EnableMMAP off #EnableSendfile off # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog logs/error_log LogLevel warn # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # "combinedio" includes actual counts of actual bytes received (%I) and sent (%O); this # requires the mod_logio module to be loaded. #LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # #CustomLog logs/access_log common # # If you would like to have separate agent and referer logfiles, uncomment # the following directives. # #CustomLog logs/referer_log referer #CustomLog logs/agent_log agent # # For a single logfile with access, agent, and referer information # (Combined Logfile Format), use the following directive: # CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" <Directory "/var/www/icons"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> # # WebDAV module configuration section. # <IfModule mod_dav_fs.c> # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb </IfModule> # # ScriptAlias: This controls which directories contain server scripts. 
# ScriptAliases are essentially the same as Aliases, except that # documents in the realname directory are treated as applications and # run by the server when requested rather than as documents sent to the client. # The same rules about trailing "/" apply to ScriptAlias directives as to # Alias. # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" # # "/var/www/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/var/www/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ # # DefaultIcon is which icon to show for files which do not have an icon # explicitly set. # DefaultIcon /icons/unknown.gif # # AddDescription allows you to place a short description after a file in # server-generated indexes. These are only displayed for FancyIndexed # directories. # Format: AddDescription "description" filename # #AddDescription "GZIP compressed document" .gz #AddDescription "tar archive" .tar #AddDescription "GZIP compressed tar archive" .tgz # # ReadmeName is the name of the README file the server will look for by # default, and append to directory listings. # # HeaderName is the name of a file which should be prepended to # directory indexes. ReadmeName README.html HeaderName HEADER.html # # IndexIgnore is a set of filenames which directory indexing should ignore # and not include in the listing. Shell-style wildcarding is permitted. # IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t # # DefaultLanguage and AddLanguage allows you to specify the language of # a document. You can then use content negotiation to give a browser a # file in a language the user can understand. # # Specify a default language. This means that all data # going out without a specific language tag (see below) will # be marked with this one. You probably do NOT want to set # this unless you are sure it is correct for all cases. # # * It is generally better to not mark a page as # * being a certain language than marking it with the wrong # * language! # # DefaultLanguage nl # # Note 1: The suffix does not have to be the same as the language # keyword --- those with documents in Polish (whose net-standard # language code is pl) may wish to use "AddLanguage pl .po" to # avoid the ambiguity with the common suffix for perl scripts. # # Note 2: The example entries below illustrate that in some cases # the two character 'Language' abbreviation is not identical to # the two character 'Country' code for its country, # E.g. 
'Danmark/dk' versus 'Danish/da'. # # Note 3: In the case of 'ltz' we violate the RFC by using a three char # specifier. There is 'work in progress' to fix this and get # the reference data for rfc1766 cleaned up. # # Catalan (ca) - Croatian (hr) - Czech (cs) - Danish (da) - Dutch (nl) # English (en) - Esperanto (eo) - Estonian (et) - French (fr) - German (de) # Greek-Modern (el) - Hebrew (he) - Italian (it) - Japanese (ja) # Korean (ko) - Luxembourgeois* (ltz) - Norwegian Nynorsk (nn) # Norwegian (no) - Polish (pl) - Portugese (pt) # Brazilian Portuguese (pt-BR) - Russian (ru) - Swedish (sv) # Simplified Chinese (zh-CN) - Spanish (es) - Traditional Chinese (zh-TW) # AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw # # LanguagePriority allows you to give precedence to some languages # in case of a tie during content negotiation. # # Just list the languages in decreasing order of preference. We have # more or less alphabetized them here. You probably want to change this. # LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW # # ForceLanguagePriority allows you to serve a result page rather than # MULTIPLE CHOICES (Prefer) [in case of a tie] or NOT ACCEPTABLE (Fallback) # [in case no accepted languages matched the available variants] # ForceLanguagePriority Prefer Fallback # # Specify a default charset for all content served; this enables # interpretation of all content as UTF-8 by default. To use the # default browser choice (ISO-8859-1), or to allow the META tags # in HTML content to override this choice, comment out this # directive: # AddDefaultCharset UTF-8 # # AddType allows you to add to or override the MIME configuration # file mime.types for specific file types. # #AddType application/x-tar .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # Despite the name similarity, the following Add* directives have nothing # to do with the FancyIndexing customization directives above. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # MIME-types for downloading Certificates and CRLs # AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # # For files that include their own HTTP headers: # #AddHandler send-as-is asis # # For type maps (negotiated resources): # (This is enabled by default to allow the Apache "It Worked" page # to be distributed in multiple languages.) 
# AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # AddType text/html .shtml AddOutputFilter INCLUDES .shtml # # Action lets you define media types that will execute a script whenever # a matching file is called. This eliminates the need for repeated URL # pathnames for oft-used CGI file processors. # Format: Action media/type /cgi-script/location # Format: Action handler-name /cgi-script/location # # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # Putting this all together, we can internationalize error responses. # # We use Alias to redirect any /error/HTTP_<error>.html.var response to # our collection of by-error message multi-language collections. We use # includes to substitute the appropriate text. # # You can modify the messages' appearance without changing any of the # default HTTP_<error>.html.var files by adding the line: # # Alias /error/include/ "/your/include/path/" # # which allows you to create your own set of files by starting with the # /var/www/error/include/ files and # copying them to /your/include/path/, even on a per-VirtualHost basis. # Alias /error/ "/var/www/error/" <IfModule mod_negotiation.c> <IfModule mod_include.c> <Directory "/var/www/error"> AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en es de fr ForceLanguagePriority Prefer Fallback </Directory> # ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var # ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var # ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var # ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var # ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var # ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var # ErrorDocument 410 /error/HTTP_GONE.html.var # ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var # ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var # ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var # ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var # ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var # ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var # ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var # ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var # ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var # ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var </IfModule> </IfModule> # # The following directives modify normal HTTP response behavior to # handle known problems with browser implementations. # BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4\.0" force-response-1.0 BrowserMatch "Java/1\.0" force-response-1.0 BrowserMatch "JDK/1\.0" force-response-1.0 # # The following directive disables redirects on non-GET requests for # a directory that does not include the trailing slash. This fixes a # problem with Microsoft WebFolders which does not appropriately handle # redirects for folders with DAV methods. 
# Same deal with Apple's DAV filesystem and Gnome VFS support for DAV. # BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully # # Allow server status reports generated by mod_status, # with the URL of http://servername/server-status # Change the ".example.com" to match your domain to enable. # #<Location /server-status> # SetHandler server-status # Order deny,allow # Deny from all # Allow from .example.com #</Location> # # Allow remote server configuration reports, with the URL of # http://servername/server-info (requires that mod_info.c be loaded). # Change the ".example.com" to match your domain to enable. # #<Location /server-info> # SetHandler server-info # Order deny,allow # Deny from all # Allow from .example.com #</Location> # # Proxy Server directives. Uncomment the following lines to # enable the proxy server: # #<IfModule mod_proxy.c> #ProxyRequests On # #<Proxy *> # Order deny,allow # Deny from all # Allow from .example.com #</Proxy> # # Enable/disable the handling of HTTP/1.1 "Via:" headers. # ("Full" adds the server version; "Block" removes all outgoing Via: headers) # Set to one of: Off | On | Full | Block # #ProxyVia On # # To enable a cache of proxied content, uncomment the following lines. # See http://httpd.apache.org/docs/2.2/mod/mod_cache.html for more details. # #<IfModule mod_disk_cache.c> # CacheEnable disk / # CacheRoot "/var/cache/mod_proxy" #</IfModule> # #</IfModule> # End of proxy directives. ### Section 3: Virtual Hosts # # VirtualHost: If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # NameVirtualHost *:80 # # NOTE: NameVirtualHost cannot be used without a port specifier # (e.g. :80) if mod_ssl is being used, due to the nature of the # SSL protocol. # # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for requests without a known # server name. # #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot /www/docs/dummy-host.example.com # ServerName dummy-host.example.com # ErrorLog logs/dummy-host.example.com-error_log # CustomLog logs/dummy-host.example.com-access_log common #</VirtualHost> # domain: mysite.com # public: /home/websites/public_html/mysite.com/ <VirtualHost *:80> # Admin email, Server Name (domain name) and any aliases ServerAdmin [email protected] ServerName mysite.com ServerAlias www.mysite.com # Index file and Document Root (where the public files are located) DirectoryIndex index.html DocumentRoot /home/websites/public_html/mysite.com/public # Custom log file locations LogLevel warn ErrorLog /home/websites/public_html/mysite.com/log/error.log CustomLog /home/websites/public_html/mysite.com/log/access.log combined </VirtualHost>
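
    Editor's note: the error log and the directory listing above point at plain filesystem permissions rather than SELinux. /home/websites is mode 700 (drwx------), so the apache user can neither traverse it to reach public_html nor read the .htaccess file the [crit] lines complain about. A minimal sketch of the usual fix, assuming the user and paths shown in the question (this follows the 711/755 guidance quoted in the UserDir comments of httpd.conf itself):

    $ chmod 711 /home/websites                                       # let apache traverse the home directory
    $ find /home/websites/public_html -type d -exec chmod 755 {} \;  # make directories listable/traversable
    $ find /home/websites/public_html -type f -exec chmod 644 {} \;  # make files world-readable
    $ ls -la /home/websites                                          # re-check, then retry the request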

    Read the article

  • High CPU from httpd process

    - by KHWeb
    I am currently getting high CPU on a server that is running just a couple of sites with very low traffic. One of the sites is still in development, going live soon. However, this site is very, very slow. When browsing through its pages I can see that the CPU goes from 30% to 100% for httpd (see the top output below). I have tuned httpd, MySQL, Apache Solr and Tomcat for high performance, and I am using APC. Not sure what to do from here or how to find the culprit, as I have a bunch of messages in the httpd log and have been chasing dead ends for some time. Any help is greatly appreciated.

    Server: AuthenticAMD, Quad-Core AMD Opteron(tm) Processor 2352, RAM 16GB
    Linux 2.6.27 64-bit, CentOS 5.5, Plesk 9.5.4, MySQL 5.1.48, PHP 5.2.17
    Apache/2.2.3 (CentOS) DAV/2 mod_jk/1.2.15 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5 PHP/5.2.17 mod_perl/2.0.4 Perl/v5.8.8
    Tomcat6-6.0.29-1.jpp5, Tomcat-native-1.1.20-1.el5, Apache Solr

    top:

    17595 apache  20  0 1825m 507m  10m R 100.4  3.2   0:17.50 httpd
    17596 apache  20  0 1565m 247m 9936 R  83.1  1.5   0:10.86 httpd
    17598 apache  20  0 1430m 110m 6472 S  54.5  0.7   0:08.66 httpd
    17599 apache  20  0 1438m 124m  12m S  37.2  0.8   0:11.20 httpd
    16197 mysql   20  0 13.0g 2.0g 5440 S   9.6 12.6 297:12.79 mysqld
    17617 root    20  0 12748 1172  812 R   0.7  0.0   0:00.88 top
     8169 tomcat  20  0 4613m 268m 6056 S   0.3  1.7   6:40.56 java

    httpd error_log:

    [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem)
    [info] mod_fcgid: Process manager 17593 started
    [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17594 for worker proxy:reverse
    [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 17594 for (*)
    [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17595 for worker proxy:reverse
    [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized
    [notice] child pid 22782 exit signal Segmentation fault (11)
    [error] (43)Identifier removed: apr_global_mutex_lock(jk_log_lock) failed
    [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0x7fd29a5478c0 rmm=0x7fd29a547918 for VHOST: example.com
    [info] APR LDAP: Built with OpenLDAP LDAP SDK
    [info] LDAP: SSL support available
    [info] Init: Seeding PRNG with 256 bytes of entropy
    [info] Init: Generating temporary RSA private keys (512/1024 bits)
    [info] Init: Generating temporary DH parameters (512/1024 bits)
    [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory
    [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory()
    [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4265 indexes
    [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow
    [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F
    [debug] ssl_scache_shmcb.c(623): division_offset = 96
    [debug] ssl_scache_shmcb.c(625): division_size = 15997
    [debug] ssl_scache_shmcb.c(627): queue_size = 2136
    [debug] ssl_scache_shmcb.c(629): index_num = 133
    [debug] ssl_scache_shmcb.c(631): index_offset = 8
    [debug] ssl_scache_shmcb.c(633): index_size = 16
    [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8
    [debug] ssl_scache_shmcb.c(637): cache_data_size = 13853
    [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory()
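
    Editor's note: with symptoms like these it is often faster to look at what the hot children are actually doing than to keep tuning. A hedged diagnostic sketch; the PIDs are taken from the top output above, and the segfault notice in the log is also worth chasing, since a crashing PHP extension can burn CPU in constant respawns:

    $ top -H -p 17595      # per-thread view of one busy httpd child
    $ strace -c -p 17595   # summarise its system calls; Ctrl-C to stop and print counts

    Enabling mod_status with ExtendedStatus On would additionally show which URLs the busy workers are serving, which usually narrows the problem to a specific page or query.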

    Read the article

  • Which package should I choose if I want to install virtualenv for Python?

    - by hugemeow
    pip search returns so many matches that I am confused about which package I should install. Should I only install virtualenv, or had I better also install virtualenv-commands etc.? I really don't know exactly what virtualenv-commands is...

    mirror0@lab:~$ pip search virtualenv
    virtualenvwrapper - Enhancements to virtualenv
    virtualenv - Virtual Python Environment builder
    veh - virtualenv for hg
    pyutilib.virtualenv - PyUtilib utility for building custom virtualenv bootstrap scripts.
    envbuilder - A package for automatic generation of virtualenvs
    virtstrap-core - A bootstrapping mechanism for virtualenv+pip and shell scripts
    tox - virtualenv-based automation of test activities
    virtualenvwrapper-win - Port of Doug Hellmann's virtualenvwrapper to Windows batch scripts
    everyapp.bootstrap - Enhanced virtualenv bootstrap script creation.
    orb - pip/virtualenv shell script wrapper
    monupco-virtualenv-python - monupco.com registration agent for stand-alone Python virtualenv applications
    virtualenvwrapper-powershell - Enhancements to virtualenv (for Windows). A clone of Doug Hellmann's virtualenvwrapper
    RVirtualEnv - relocatable python virtual environment
    virtualenv-clone - script to clone virtualenvs.
    virtualenvcontext - switch virtualenvs with a python context manager
    lessrb - Wrapper for ruby less so that it's in a virtualenv.
    carton - make self-extracting virtualenvs
    virtualenv5 - Virtual Python 3 Environment builder
    clever-alexis - Clever redhead girl that builds and packs Python project with Virtualenv into rpm, deb, etc.
    kforgeinstall - Virtualenv bootstrap script for KForge
    pypyenv - Install PyPy in virtualenv
    virtualenv-distribute - Virtual Python Environment builder
    virtualenvwrapper.project - virtualenvwrapper plugin to manage a project work directory
    virtualenv-commands - Additional commands for virtualenv.
    rjm.recipe.venv - zc.buildout recipe to turn the entire buildout tree into a virtualenv
    virtualenvwrapper.bitbucket - virtualenvwrapper plugin to manage a project work directory based on a BitBucket repository
    tg_bootstrap - Bootstrap a TurboGears app in a VirtualEnv
    django-env - Automaticly manages virtualenv for django project
    virtual-node - Install node.js into your virtualenv
    django-environment - A plugin for virtualenvwrapper that makes setting up and creating new Django environments easier.
    vip - vip is a simple library that makes your python aware of existing virtualenv underneath.
    virtualenvwrapper.django - virtualenvwrapper plugin to create a Django project work directory
    terrarium - Package and ship relocatable python virtualenvs
    venv_dependencies - Easy to install any dependencies in a virtualenviroment(without making symlinks by hand and etc...)
    virtualenv-sh - Convenient shell interface to virtualenv
    virtualenvwrapper.github - Plugin for virtualenvwrapper to automatically create projects based on github repositories.
    virtualenvwrapper.configvar - Plugin for virtualenvwrapper to automatically export config vars found in your project level .env file.
    virtualenvwrapper-emacs-desktop - virtualenvwrapper plugin to control emacs desktop mode
    bootstrapper - Bootstrap Python projects with virtualenv and pip.
    virtualenv3 - Obsolete fork of virtualenv
    isotoma.depends.zope2_13_8 - Running zope in a virtualenv
    virtual-less - Install lessc into your virtualenv
    virtualenvwrapper.tmpenv - Temporary virtualenvs are automatically deleted when deactivated
    isotoma.plone.heroku - Tooling for running Plone on heroku in a virtualenv
    gae-virtualenv - Using virtualenv with zipimport on Google App Engine
    pinvenv - VirtualEnv plugins for pin
    isotoma.depends.plone4_1 - Running plone in a virtualenv
    virtualenv-tools - A set of tools for virtualenv
    virtualenvwrapper.npm - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any npm installed globaly when the venv is activated
    d51.django.virtualenv.test_runner - Simple package for running isolated Django tests from within virtualenv
    difio-virtualenv-python - Difio registration agent for stand-alone Python virtualenv applications
    VirtualEnvManager - A package to manage various virtual environments.
    virtualenvwrapper.gem - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any gems installed when the venv is activated
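
    Editor's note: for the question as asked, plain virtualenv is the only package needed; the wrapper and helper packages in the list above are optional conveniences built on top of it. A minimal sketch of installing and using it (the environment name is arbitrary):

    mirror0@lab:~$ pip install virtualenv        # the core package
    mirror0@lab:~$ virtualenv myenv              # create an isolated environment
    mirror0@lab:~$ source myenv/bin/activate     # enter it; 'deactivate' leaves it
    (myenv)mirror0@lab:~$ pip install requests   # example: installs into myenv only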

    Read the article

  • Graphite SQLite3 DatabaseError: attempt to write a readonly database

    - by Anadi Misra
    Running graphite under Apache httpd, with an SQLite database, I have the correct folder permissions:

    [root@liaan55 httpd]# ls -ltr /var/lib | grep graphite
    drwxr-xr-x. 2 apache apache 4096 Aug 23 19:36 graphite-web

    and

    [root@liaan55 httpd]# ls -ltr /var/lib/graphite-web/
    total 68
    -rw-r--r--. 1 apache apache 65536 Aug 23 19:46 graphite.db

    syncdb also seems to have gone fine:

    [root@liaan55 httpd]# sudo -su apache
    bash-4.1$ whoami
    apache
    bash-4.1$ python /usr/lib/python2.6/site-packages/graphite/manage.py syncdb
    /usr/lib/python2.6/site-packages/graphite/settings.py:231: UserWarning: SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security
      warn('SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security')
    /usr/lib/python2.6/site-packages/django/conf/__init__.py:75: DeprecationWarning: The ADMIN_MEDIA_PREFIX setting has been removed; use STATIC_URL instead.
      "use STATIC_URL instead.", DeprecationWarning)
    /usr/lib/python2.6/site-packages/django/core/cache/__init__.py:82: DeprecationWarning: settings.CACHE_* is deprecated; use settings.CACHES instead.
      DeprecationWarning
    Creating tables ...
    Creating table account_profile
    Creating table account_variable
    Creating table account_view
    Creating table account_window
    Creating table account_mygraph
    Creating table dashboard_dashboard_owners
    Creating table dashboard_dashboard
    Creating table events_event
    Creating table auth_permission
    Creating table auth_group_permissions
    Creating table auth_group
    Creating table auth_user_user_permissions
    Creating table auth_user_groups
    Creating table auth_user
    Creating table django_session
    Creating table django_admin_log
    Creating table django_content_type
    Creating table tagging_tag
    Creating table tagging_taggeditem
    You just installed Django's auth system, which means you don't have any superusers defined.
    Would you like to create one now? (yes/no): yes
    Username (leave blank to use 'apache'): root
    E-mail address: [email protected]
    Password:
    Password (again):
    Superuser created successfully.
    Installing custom SQL ...
    Installing indexes ...
    Installed 0 object(s) from 0 fixture(s)
    bash-4.1$ exit

    and the local_settings.py file is as follows:

    STORAGE_DIR = '/var/lib/graphite-web'
    INDEX_FILE = '/var/lib/graphite-web/index'
    DATABASES = {
        'default': {
            'NAME': '/var/lib/graphite-web/graphite.db',
            'ENGINE': 'django.db.backends.sqlite3',
            'USER': '',
            'PASSWORD': '',
            'HOST': '',
            'PORT': ''
        }
    }

    I still get this error:

    [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] File "/usr/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 344, in execute
    [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238]     return Database.Cursor.execute(self, query, params)
    [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] DatabaseError: attempt to write a readonly database

    Not sure what is missing in this configuration.
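
    Editor's note: SQLite needs write access to the directory that contains the database, not just to graphite.db itself, because it creates a journal file next to the .db on every write; and the write is attempted by whatever user the httpd worker actually runs as. A hedged way to verify both, using the paths from the question (on CentOS/RHEL, an SELinux context that forbids writing is another common cause even when the Unix permissions look right):

    [root@liaan55 httpd]# ps aux | grep httpd | head -3           # confirm the worker user
    [root@liaan55 httpd]# sudo -u apache touch /var/lib/graphite-web/writetest && echo writable
    [root@liaan55 httpd]# ls -ldZ /var/lib/graphite-web           # -Z shows the SELinux context

    If the touch fails, fix ownership/mode on the directory; if it succeeds but Apache still cannot write, the SELinux context (e.g. httpd_sys_rw_content_t via chcon or semanage) is the next thing to check.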

    Read the article

  • Configuring OpenLDAP as an Active Directory Proxy

    - by vadensumbra
    We are trying to set up an Active Directory server for company-wide authentication. Some of the servers that should authenticate against the AD are placed in a DMZ, so we thought of using an LDAP server as a proxy, so that only one server in the DMZ has to connect to the LAN where the AD server is placed. With some googling it was no problem to configure slapd (see slapd.conf below), and it seemed to work when using the ldapsearch tool, so we tried to use it in an Apache htaccess file to authenticate users over the LDAP proxy.

    And here comes the problem: we found out the username in the AD is stored in the attribute 'sAMAccountName', so we configured it in .htaccess (see below), but the login didn't work. In the syslog we found that the filter for the search was not '(&(objectClass=*)(sAMAccountName=authtest01))' as it should be, but '(&(objectClass=*)(?=undefined))', which we found out is slapd's way of showing that the attribute does not exist or the value is syntactically wrong for that attribute. We thought of a missing schema, found the microsoft.schema (and the .std / .ext variants of it), and tried to include them in slapd.conf, which does not work. We found no working schemata, so we just picked out the part about sAMAccountName and built a microsoft.minimal.schema (see below) that we included. Now we get a more precise log in the syslog:

    Jun 16 13:32:04 breauthsrv01 slapd[21229]: get_ava: illegal value for attributeType sAMAccountName
    Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH base="ou=oraise,dc=int,dc=oraise,dc=de" scope=2 deref=3 filter="(&(objectClass=\*)(?sAMAccountName=authtest01))"
    Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH attr=sAMAccountName
    Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text=

    Using our Apache htaccess directly with the AD via LDAP works, though. Anyone got a working setup? Thanks for any help in advance.

    slapd.conf:

    allow bind_v2
    include /etc/ldap/schema/core.schema
    ...
    include /etc/ldap/schema/microsoft.minimal.schema
    ...
    backend ldap
    database ldap
    suffix "ou=xxx,dc=int,dc=xxx,dc=de"
    uri "ldap://80.156.177.161:389"
    acl-bind bindmethod=simple binddn="CN=authtest01,ou=GPO-Test,ou=xxx,dc=int,dc=xxx,dc=de" credentials=xxxxx

    .htaccess:

    AuthBasicProvider ldap
    AuthType basic
    AuthName "AuthTest"
    AuthLDAPURL "ldap://breauthsrv01.xxx.de:389/OU=xxx,DC=int,DC=xxx,DC=de?sAMAccountName?sub"
    AuthzLDAPAuthoritative On
    AuthLDAPGroupAttribute member
    AuthLDAPBindDN CN=authtest02,OU=GPO-Test,OU=xxx,DC=int,DC=xxx,DC=de
    AuthLDAPBindPassword test123
    Require valid-user

    microsoft.minimal.schema:

    attributetype ( 1.2.840.113556.1.4.221
        NAME 'sAMAccountName'
        SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
        SINGLE-VALUE )
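
    Editor's note: the '(?sAMAccountName=...)' form in the log is slapd marking the filter component as Undefined, which is exactly what happens when an attribute type is defined without an EQUALITY matching rule: equality filters on such an attribute can never match. A hedged sketch of the minimal schema with matching rules added (the OID and syntax are taken from the question; check a complete Microsoft schema before relying on this in production):

    attributetype ( 1.2.840.113556.1.4.221
        NAME 'sAMAccountName'
        EQUALITY caseIgnoreMatch
        SUBSTR caseIgnoreSubstringsMatch
        SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
        SINGLE-VALUE )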

    Read the article

  • Django rewrites URL as IP address in browser - why?

    - by Mitch
    I am using Django, nginx and Apache. When I access my site by URL (e.g., http://www.foo.com/), what appears in my browser's address bar is the IP address with /admin/ appended (e.g., http://123.45.67.890/admin/). When I access the site by IP, it is redirected as expected by Django's urls.py (e.g., http://123.45.67.890/ -> http://123.45.67.890/accounts/login/?next=/). I would like the named URL to act the same way as the IP: if the URL goes to a new view, the host in the browser address should remain the same and not change to the IP address. Where should I be looking to fix this? My files:

    ; cpa.com (apache)
    NameVirtualHost *:8080
    <VirtualHost *:8080>
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript application/x-javascript
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/htm
        DocumentRoot /path/to/root
        ServerName www.foo.com
        <IfModule mod_rpaf.c>
            RPAFenable On
            RPAFsethostname On
            RPAFproxy_ips 127.0.0.1
        </IfModule>
        <Directory /public/static>
            AllowOverride None
            AddHandler mod_python .py
            PythonHandler mod_python.publisher
        </Directory>
        Alias / /dj
        <Location />
            SetHandler python-program
            PythonPath "['/usr/lib/python2.5/site-packages/django', '/usr/lib/python2.5/site-packages/django/forms'] + sys.path"
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE dj.settings
            PythonDebug On
        </Location>
    </VirtualHost>

    ; ports.conf (apache)
    Listen 127.0.0.1:8080

    ; cpa.conf (nginx)
    server {
        listen 80;
        server_name www.foo.com;
        location /static {
            root /var/public;
            index index.html;
        }
        location /cpa/js {
            root /var/public/js;
        }
        location /cpa/css {
            root /var/public/css;
        }
        location /djmedia {
            alias "/usr/lib/python2.5/site-packages/django/contrib/admin/media/";
        }
        location / {
            include /etc/nginx/proxy.conf;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }
    }

    ; proxy.conf (nginx)
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 500;
    proxy_buffers 32 4k;
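
    Editor's note: when a redirect comes back carrying the backend's IP, the usual suspects are the Host header being lost on the way to the backend, or the proxy not rewriting the backend's Location header (proxy_redirect is off in proxy.conf above). A hedged way to find out which hop injects the IP, using the names from the question:

    $ curl -sI http://www.foo.com/ | grep -i '^location'
            # what the browser sees
    $ curl -sI -H 'Host: www.foo.com' http://127.0.0.1:8080/ | grep -i '^location'
            # ask Apache directly from the server; if the IP already appears here,
            # the rewrite happens in Apache/mod_python/Django, not in nginx

    If Apache is the hop answering with the IP, checking ServerName plus UseCanonicalName Off in the vhost, and the RPAFsethostname handling, would be the next step; if nginx is, re-enabling proxy_redirect (its default behaviour) lets it map backend Location headers back onto the public name.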

    Read the article

  • ActiveMQ Pure Master / Slave - Out of sync

    - by pico
    What I have: 1 master broker and 1 slave broker, both ActiveMQ 5.4.0.
    What I use: waitForSlave on the master side and a failover URI on the slave side (in the master connector URI).
    What I want to do: wait some delay (like 5 seconds) in case of tiny network failures between master and slave before starting the slave transport connectors.

    So I put this in the slave config:

    <broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="slave"
        dataDirectory="${activemq.base}/data"
        useJmx="true"
        persistent="true"
        populateJMSXUserID="true"
        masterConnectorURI="failover://(tcp://master:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000"
        shutdownOnMasterFailure="false"
        advisorySupport="false">

    It seems to work, but after a network hang between master and slave, the slave reconnects successfully and then the master logs a lot of:

    2010-10-18 17:08:44,421 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ Task
    java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1040-634226732611718750-0:0
        at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93)
        at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412)
        at org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:561)
        at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:76)
        at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309)
        at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185)
        at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
        at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69)
        at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218)
        at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98)
        at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36)

    On the slave side everything is fine. So after that, I tried to stop the master to see if the slave is capable of turning master after these "network hangs". The master took a long time to shut down (10 seconds), and then some error messages appear in the slave logs:

    2010-10-18 17:09:32,915 | WARN | Async error occurred: java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 | org.apache.activemq.broker.TransportConnection.Service | VMTransport: vm://slave#5
    java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3
        at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93)
        at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412)
        at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:600)
        at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74)
        at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309)
        at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185)
        at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
        at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69)
        at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218)
        at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98)
        at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36)

    Are there any ways to keep my KahaDB stores (they are individual stores) synchronised? The main problem is that the slave never turns master after a master failure; it stays blocked on this message:

    2010-10-18 17:09:33,681 | WARN | Transport (master/172.21.60.61:61616) failed to tcp://master:61616 , attempting to automatically reconnect due to: java.net.SocketException: Software caused connection abort: socket write error | org.apache.activemq.transport.failover.FailoverTransport | ActiveMQ Transport: tcp://master/172.21.60.61:61616

    I'm totally stuck with these sync problems, any help welcome! Regards
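
    Editor's note: pure master/slave in ActiveMQ 5.4 replicates over a single connection and has no re-sync mechanism; once the stores diverge (e.g. after a network hang), the usual recovery is to stop both brokers, copy the master's data directory over the slave's, and restart the pair. If the link between the brokers is unreliable, a shared-storage master/slave avoids the divergence problem altogether: both brokers point at the same store, and the KahaDB file lock elects the master. A minimal sketch, assuming a shared mount at /shared/activemq-data (hypothetical path; the persistenceAdapter is identical on both brokers):

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
      <persistenceAdapter>
        <!-- identical on both brokers; whichever grabs the file lock first becomes master -->
        <kahaDB directory="/shared/activemq-data/kahadb"/>
      </persistenceAdapter>
    </broker>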

    Read the article
