Search Results

Search found 594 results on 24 pages for 'serverfault'.

  • Why won't IE let users log in to a website unless in InPrivate mode?

    - by Richard Fawcett
    I'm not entirely sure this belongs on SuperUser.com. I also considered ServerFault.com and StackOverflow.com, but on balance, I think it belongs here.

    We host a website which has the same code responding to multiple domain names. On 28th December (without any changes deployed to the website) a percentage of users suddenly could not log in; the blank login page was simply rendered again even when the correct credentials were entered. The issue is still ongoing. After remote-controlling an affected user's PC, we've found the following:

    - The issue affects Internet Explorer 9.
    - The user can log in from the same machine on Chrome.
    - The user can log in from an InPrivate browser session using IE9.
    - The user can log in if the website is added to the Trusted Sites security zone.
    - The user can NOT log in from an IE session in safe mode (started with iexplore -extoff).
    - Only one hostname that the website responds to prevents login; the same user account on the other hostname works fine (note that this is identical code and database running server side), even though that site is not in the Trusted Sites zone.

    Series of HTTP requests in the failure case:

    1. GET request to protected page, returns a 302 FOUND response to login page.
    2. GET request to login page.
    3. POST to login page, containing credentials, returns redirect to protected page.
    4. GET request to protected page... for some reason auth fails and the browser is redirected to the login page, as in step 1.

    Other information: the operating system is Windows 7 Ultimate Edition; the AV system is AVG Internet Security 2012. I can think of lots of things that could be going wrong, but in every case, one of the findings above is incompatible with the theory. Any ideas what is causing login to fail?

    Update 06-Jan-2012: Enhanced logging has shown that the .ASPXAUTH cookie is being set in step 3. Its expiry date is 28 days in the future, its path is /, the domain is mysite.com, and its value is an encrypted forms ticket, as expected. However, the cookie is not being received by the web server during step 4. Other cookies are being presented to the server during step 4; it's just this one that is missing. I've seen that cookies are usually set with a domain starting with a period, but mine isn't. Should it be .mysite.com instead of mysite.com? However, if this were wrong, it would presumably affect all users?
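    A hedged sketch of one thing worth checking: ASP.NET lets you pin the forms-authentication cookie domain explicitly rather than letting it default per-host. This is standard web.config syntax, but the domain value .mysite.com is a placeholder for the asker's real shared parent domain, not something confirmed by the question:

        <!-- Hypothetical web.config fragment: explicitly set the forms-auth
             cookie domain so the same cookie is presented to every hostname
             variant. Replace .mysite.com with the real parent domain. -->
        <system.web>
          <authentication mode="Forms">
            <forms name=".ASPXAUTH"
                   loginUrl="~/login"
                   timeout="40320"
                   path="/"
                   domain=".mysite.com" />
          </authentication>
        </system.web>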

  • Windows Server 2008 R2 bare metal restore to different hardware

    - by S Falken
    Scenario: I have a Windows Server 2008 R2 x64 installation whose main disk drive is now 7 years old and showing signs of age. For the last couple of months it has been displaying increased errors and requirements to run chkdsk. I have successfully created a bare metal restore (BMR) image on a separate data drive on the server, which can be seen from the Windows Recovery console; I tested it by booting to and using the Windows Server installation DVD's recovery utilities. The BMR image includes the system drive with boot partition, system state, and the D:\ drive of the server, which is where I have followed the practice of installing any program that does not require a C:\ installation path. Therefore, the BMR includes both the C:\ and D:\ drives, system state and boot partition. The C:\ drive is a 7-year-old Seagate 160GB. The D:\ drive is a rather newer 120GB Western Digital. I have purchased a 128GB solid-state Samsung 830 that I want to restore these partitions to, using the BMR.

    Questions:

    1. In the Microsoft article referenced at the end of this post, Microsoft seems to be indicating that I am only able to restore to like-kind hardware, which doesn't help at all and is difficult to believe. Is this really true?
    2. I've cleaned these drives up and minimized the size of partition they require. C:\ will need about a 70GB partition, and the data on D:\ will need about 50GB. Will Windows Server Backup allow me to restore the BMR to newly created partitions on the SSD, discarding extra space? I don't need a "how-to"; I just need an "is it possible".

    Justification: Before posting this question, I checked ServerFault articles with the following titles, but none of them were about this exact scenario:

    - Restore SBS 2008 Backup to Same Hardware but Different Disk Configuration
    - Restoring Windows Server 2008 to different hardware - OEM License
    - Restoring II6 server after a hardware failure
    - windows 2008 r2 fail to restore
    - Domain controller failed to restore using windows backup tools
    - How does restore to dissimilar hardware work?
    - Migrating Windows 2008 R2 from a PC to a different PC
    - TFS 2005 Server restore from one hardware to another

    I also researched Microsoft but only received an oblique answer which was not precisely aimed at my question, at the following URL: http://support.microsoft.com/kb/249694#method3
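    For reference, the recovery environment drives this kind of restore through wbadmin. A minimal sketch, assuming the BMR image lives on volume E:; the version identifier is a placeholder to be copied from the "get versions" output, and -recreateDisks repartitions the target disk, so treat this as illustrative only:

        rem Hypothetical WinRE command-prompt session (assumptions: backup on E:,
        rem version ID taken from the "wbadmin get versions" output).
        wbadmin get versions -backupTarget:E:
        wbadmin start sysrecovery -version:01/01/2013-12:00 -backupTarget:E: -recreateDisks -restoreAllVolumes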

  • Serving protected files using Nginx's X-Accel-Redirect header

    - by andybak
    I'm trying to serve protected files using this directive in my nginx.conf:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/secure/;
        }

    I'm passing in paths in the form "/myfile.doc", and the file's path would be /home/ldr/webapps/nginx/app/secure/myfile.doc. I just get 404s when I access "http: //myserver/secure/myfile.doc" (space inserted after http to stop ServerFault converting it to a link). I've tried taking the trailing / off the location directive and that makes no difference.

    Two questions:

    1. How do I fix it?
    2. How can I debug problems like this myself? How can I get Nginx to report which path it's looking for? error.log shows nothing and access.log just tells me which URL is being requested - this is the bit I already know! It's no fun trying things randomly without any feedback.

    Here's my entire nginx.conf:

        daemon off;
        worker_processes 2;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server {
                listen 21534;
                server_name my.server.com;
                client_max_body_size 5m;

                location /media/ {
                    alias /home/ldr/webapps/nginx/app/media/;
                }

                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    fastcgi_pass unix:/home/ldr/webapps/nginx/app/myproject/django.sock;
                    fastcgi_pass_header Authorization;
                    fastcgi_hide_header X-Accel-Redirect;
                    fastcgi_hide_header X-Sendfile;
                    fastcgi_intercept_errors off;
                    include fastcgi_params;
                }

                location /secure {
                    internal;
                    alias /home/ldr/webapps/nginx/app/secure/;
                }
            }
        }

    EDIT: I'm trying some of the suggestions here. So I've tried:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/;
        }

    both with and without the trailing slash on location. I've also tried moving this block before the "location /" directive. The page I linked to has ^~ after 'location', giving:

        location ^~ /secure/ { ...etc...

    Not sure what that signifies, but it didn't work either!
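    Since the backend here is evidently Django (judging by django.sock), a minimal sketch of the application side, assuming a hypothetical view named serve_secure and the /secure/ prefix from the config above; the header value must be the internal URI, not the filesystem path:

        # Hypothetical Django view: ask nginx to serve the protected file.
        # The browser never sees this header; nginx intercepts it because the
        # /secure/ location is marked "internal".
        from django.http import HttpResponse

        def serve_secure(request, filename):
            response = HttpResponse()
            # URI under the nginx "location /secure/", not a filesystem path
            response['X-Accel-Redirect'] = '/secure/' + filename
            # let nginx pick the Content-Type instead of Django's default
            del response['Content-Type']
            return response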

  • FTP script needs blank line

    - by Ones and Zeroes
    I am trying to determine the reason for some FTP servers requiring a blank line in the script, as follows:

        open server.com
        username

        ftp_commands
        bye

    Refer to the blank line required after the username credentials. Example from: FTP from batch file. Another reference to the same: http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.ibm.as400.misc/2008-05/msg00227.html

    Also discussed here: archive.midrange.com/midrange-l/200601/msg00048.html ("The behavior I'm observing is the same as if I didn't specify the password to login.") with an answer referring to our same fix: archive.midrange.com/midrange-l/200601/msg00053.html and archive.midrange.com/midrange-l/200601/msg00065.html

    Note: It is my experience that FTP questions attract uncouth responses. Admittedly FTP is outdated, but many clients still have legacy systems which they cannot upgrade or replace; the reasons for that should not be discussed here. The intention of this question is to invite a positive response. Please do not respond if you disagree with the above. If you have never encountered this same issue, please do not respond.

    I suspect this may be limited to FTP scripts executed from Windows machines, but I have been told that this happens often and with many different servers. My specific interest is to understand what may cause this, as I have a real-world example of a production system suddenly requiring this as a workaround fix, after running for many years without issue. The server belongs to a third party who claims no change on their end. Server details are unknown and cannot be determined. Any help or encouragement from someone who has come across the same would be appreciated.

    PS: Sorry for the many words and references to painful responses, but I have asked similar questions on ServerFault and elsewhere and unfortunately got back kneejerk responses to FTP and respondents debating the validity of the question. I would truly not ask, or re-post this question online, if I had a better understanding of the issue. I know of people who have seen this issue, but they don't know what causes it. I am wary that this question would again turn into another irrelevant discussion. Please, I ask very nicely: please do not respond if you have not encountered a similar issue.

    FURTHER EDIT: Please do not suggest changing the product. The problem is not the blank line requirement; we know this fixes the issue. The problem is not being able to explain the reason for the blank line in the first place. A slight difference, but a critical point to note with respect to answering this question.
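    For context, a minimal sketch of the Windows pattern under discussion, assuming the stock command-line ftp.exe client. ftp.exe consumes one script line per prompt, so the blank line is what gets fed to the password prompt; a server-side change that newly triggers a password prompt where none appeared before would produce exactly the "suddenly needs a blank line" symptom. This is an assumption about the cause, not a confirmed explanation:

        rem Hypothetical invocation of the script shown above, saved as script.txt:
        rem   open server.com
        rem   username
        rem   (blank line - consumed by the password prompt, i.e. empty password)
        rem   ftp_commands
        rem   bye
        ftp -s:script.txt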

  • Nginx config - serving index.html not working

    - by Bill
    I can't figure out how to redirect / to index.html. I've gone through the threads on ServerFault and I think I've tried every suggestion, including:

    - rewrite statements within location /
    - index index.html at the server level, within location / and within static content
    - moving node.js proxy statements to location ~ /i instead of within location /

    Obviously something is wrong somewhere else in my configuration. Here is my nginx.conf:

        worker_processes 1;
        pid /home/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            error_log /home/logs/error.log;
            access_log /home/logs/access.log combined;
            include sites-enabled/*;
        }

    and my server config located in sites-enabled:

        server {
            root /home/www/public;
            listen 80;
            server_name localhost;

            # proxy request to node
            location / {
                index index.html index.htm;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://127.0.0.1:3010;
                proxy_redirect off;
                break;
            }

            # static content
            location ~ \.(?:ico|jpe?g|jpeg|gif|css|png|js|swf|xml|woff|eot|svg|ttf|html)$ {
                access_log off;
                add_header Pragma public;
                add_header Cache-Control public;
                expires 30d;
            }

            gzip on;
            gzip_vary on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_min_length 1000;
            gzip_disable "msie6";
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        }

    Everything else is working just fine. Requests get proxied to node correctly and static content is served correctly. I just need to be able to forward requests made to / to /index.html.
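    One hedged sketch of a fix, assuming the goal is only that the bare / serves the static index.html while everything else still proxies to node: an exact-match location outranks both the prefix and regex locations, so it can short-circuit the proxy for that single URI:

        # Hypothetical addition to the server block above: "location = /" wins
        # over "location /" and the regex block, so the root URI is answered
        # with /home/www/public/index.html instead of being proxied.
        location = / {
            try_files /index.html =404;
        }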

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on ServerFault to get feedback from the administrator side of things too. So, without further ado:

    Vision

    The first thing in his "vision statement" is:

        Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent.

    There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is:

        Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations.

    Roadmap

    Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:

    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates

    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community

    And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

  • Oracle 9i Session Disconnections

    - by mlaverd
    [Cross-posting from ServerFault] I am in a development environment, and our test Oracle 9i server has been misbehaving for a few days now. What happens is that our JDBC connections disconnect after a few successful connections. We got this box set up by our IT department and handed over to us. It is 'our problem', so options like 'ask your DBA' aren't going to help me. :(

    Our server is set up with 3 plain databases (one is the main dev db, another is the 'experimental' dev db). We use the Oracle 10 ojdbc14.jar thin JDBC driver (because of some bug in version 9 of the driver). We're using Hibernate to talk to the DB. The only thing that I can see that changed is that we now have more users connecting to the server: instead of one developer, we now have 3. With the Hibernate connection pools, I'm thinking that maybe we're hitting some limit? Does anyone have any idea what's going on? Here's the stack trace on the client:

        Caused by: org.hibernate.exception.GenericJDBCException: could not execute query
            at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126) [hibernate3.jar:na]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114) [hibernate3.jar:na]
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doList(Loader.java:2235) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2129) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.list(Loader.java:2124) [hibernate3.jar:na]
            at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401) [hibernate3.jar:na]
            at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:363) [hibernate3.jar:na]
            at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196) [hibernate3.jar:na]
            at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1149) [hibernate3.jar:na]
            at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102) [hibernate3.jar:na]
            ...
        Caused by: java.sql.SQLException: Io exception: Connection reset
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:829) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1049) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1154) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3415) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
            at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.getResultSet(Loader.java:1812) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doQuery(Loader.java:697) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259) [hibernate3.jar:na]
            at org.hibernate.loader.Loader.doList(Loader.java:2232) [hibernate3.jar:na]
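    A hedged aside on the pooling theory: Hibernate's built-in pool does not test connections before handing them out, so anything killing idle sessions (a firewall, or server-side idle limits) surfaces exactly like this "Connection reset". A minimal sketch using the c3p0 pool instead, assuming c3p0 is on the classpath; the property names are standard Hibernate 3/c3p0 keys, the values are illustrative:

        # Hypothetical hibernate.properties fragment: swap in c3p0 and have it
        # validate pooled connections so dead ones are evicted instead of
        # surfacing as "Io exception: Connection reset".
        hibernate.connection.provider_class=org.hibernate.connection.C3P0ConnectionProvider
        hibernate.c3p0.min_size=5
        hibernate.c3p0.max_size=20
        # evict connections idle for more than 5 minutes
        hibernate.c3p0.timeout=300
        # test idle connections every 60 seconds
        hibernate.c3p0.idle_test_period=60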

  • Asp.net MVC and MOSS 2010 integration

    - by Robert Koritnik
    Just a sidenote: I'm not sure whether I should post this to ServerFault as well, because some MOSS admin may have some info for me too.

    A bit of explanation first (without Asp.net MVC): Is it possible to integrate the two? Is it possible to write an application that would share at least credential information with MOSS? I have to write a MOSS application that involves these technologies:

    - MOSS 2010
    - Personal client certificate authentication (most probably on USB keys)
    - Active Directory Federation Services
    - A separate SQL DB that would serve application-specific data (separate as in not being part of the MOSS DB)

    How should it work?

    - Users should authenticate into MOSS 2010 using personal certificates.
    - There would be a certain part of MOSS that would be related to my custom application.
    - This application should only authorize certain users via AD FS - I guess these users should have a certain security claim attached to them.
    - This application should manage users (that have access to this app) with additional (app-specific) security claims related to this application (as additional application-level authorization rights for individual application parts).
    - This application should use a custom SQL 2008 DB heavily, with its own data.
    - This application should have the possibility to integrate with external systems as well (Exchange, for instance, to inject calendar entries; ERP systems; etc.).
    - This application should be able to export its data (from its DB) to files. I don't know if it's possible, but it would be nice if the app could add these files to MOSS and attach authorization info to them so only users with sufficient rights would be able to view/open these files.

    Why Asp.net MVC then? I'm very well versed in Asp.net MVC (also with the latest version) and I haven't done anything on SharePoint since version 2003 (which doesn't do me any good or prepare me for the latest version in any way, shape or form). This project will most probably be a death-march project, so I would rather write my application as a UI-rich Asp.net MVC application and somehow integrate it into MOSS. But not only via a link, because I would like to at least share credentials, so users wouldn't need to re-login when accessing my app. Using Asp.net MVC I would at least have the possibility to finish on time or be less death-marching. Is this at all possible?

    Questions:

    1. Is it possible to integrate Asp.net MVC into MOSS as described above?
    2. If integration is not possible, would it be possible to create a completely MOSS-based application that would work as described?
    3. Which parts of MOSS 2010 should I use to accomplish what I need?

  • Help me stabilize this jRun configuration (CF9/Win2k3/IIS6)

    - by jfrobishow
    Not sure if this would be better suited for ServerFault, but since I am not an admin but a developer, I figured I would try SO.

    We've been struggling to keep our multi-server configuration stable for quite some time now. At the end of last month we were running under CF 7.0.2 on a two-server setup (one instance each). At that point we managed to get our uptime to around 1 week per instance before they would restart by themselves. Since the beginning of the month we upgraded to CF 9 and we're back to square one with multiple restarts a day. Our current configuration is 2 Win2k3 servers, running a cluster of 4 instances, 2 instances per server. At this point we are pretty certain this is due to improper JVM settings. We've been toying with them, and while some are more stable than others, we never quite got it right. From the default:

        java.args=-server -Xmx512m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/

    To currently:

        java.args=-server -Xmx896m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ -verbose:gc -Xloggc:c:/Jrun4/logs/gc/gcInstance1b.log

    We have determined that we do need more than the default 512MB simply by monitoring with FusionReactor: on average the amount of memory consumed hovers in the mid-300MB range and can go up to the low 700MBs under heavy load. Most of the crashes are logged in jrun4/bin/hs_err_pid*.log, always as an "Out of swap space". I've attached links to the hs_err and garbage collector log files from yesterday at the bottom of the post. The relevant part is (I think) this:

        Heap
         PSYoungGen      total 89856K, used 19025K [0x55490000, 0x5b6f0000, 0x5b810000)
          eden space 79232K, 16% used [0x55490000,0x561a64c0,0x5a1f0000)
          from space 10624K, 52% used [0x5ac90000,0x5b20e2f8,0x5b6f0000)
          to   space 10752K, 0% used [0x5a1f0000,0x5a1f0000,0x5ac70000)
         PSOldGen        total 460416K, used 308422K [0x23810000, 0x3f9b0000, 0x55490000)
          object space 460416K, 66% used [0x23810000,0x36541bb8,0x3f9b0000)
         PSPermGen       total 107520K, used 106079K [0x03810000, 0x0a110000, 0x23810000)
          object space 107520K, 98% used [0x03810000,0x09fa7e40,0x0a110000)

    From it, I gather that it's the PSPermGen that is full (most logs show the same before a crash), which is why we increased MaxPermSize - but the total still shows as 107520K!?!? No one here is a jRun expert, so any help or even ideas on what to try next would be greatly appreciated!!

    The log files: Sorry, I know sendspace isn't the friendliest of places - if you have other host suggestions for log files, let me know and I'll update the post (SO doesn't like them inline, it blows up the format of the post).

    The hs_err log file: http://www.sendspace.com/file/fgak8l
    The gc log: http://www.sendspace.com/file/w0r2ct
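    A hedged sanity check that may explain the 107520K: a permgen that stays at the old size usually means the edited java.args are not the arguments the instance actually launched with (wrong jvm.config, or a service caching stale args). JDK 6-era crash dumps record a "VM Arguments" section, so the hs_err files already on disk can confirm which flags the dying VM really received; a sketch, assuming the paths from the post:

        rem Hypothetical check: grep the crash logs for the flags the dying JVM
        rem was actually started with, to confirm MaxPermSize=512m took effect.
        findstr /C:"MaxPermSize" c:\Jrun4\bin\hs_err_pid*.log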

  • Internet Explorer buggy when accessing a custom weblogic provider

    - by James
    I've created a custom WebLogic Security Authentication Provider on version 10.3 that includes a custom login module to validate users. As part of the provider, I've implemented the ServletAuthenticationFilter and added one filter. The filter acts as a common logon page for all the applications within the domain.

    When we access any secured URLs by entering them in the address bar, this works fine in IE and Firefox. But when we bookmark the link in IE, an odd thing happens. If I click the bookmark, you will see our logon page; then, after you've successfully logged into the system, the basic auth page displays, even though the user is already authenticated. This never happens in Firefox, only IE. It's also intermittent: 1 time out of 5, IE will correctly redirect and not show the basic auth window. Firefox and Opera correctly redirect every time. We've captured the response headers and compared the success and failure cases; they are identical.

        final boolean isAuthenticated = authenticateUser(userName, password, req);

        // Send user on to the original URL
        if (isAuthenticated) {
            res.sendRedirect(targetURL);
            return;
        }

    As you can see, once the user is authenticated I do a redirect to the original URL. Is there a step I'm missing? The authenticateUser() method is taken verbatim from an example in Oracle's documentation:

        private boolean authenticateUser(final String userName, final String password, HttpServletRequest request) {
            boolean results;
            try {
                ServletAuthentication.login(new CallbackHandler() {
                    @Override
                    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
                        for (Callback callback : callbacks) {
                            if (callback instanceof NameCallback) {
                                NameCallback nameCallback = (NameCallback) callback;
                                nameCallback.setName(userName);
                            }
                            if (callback instanceof PasswordCallback) {
                                PasswordCallback passwordCallback = (PasswordCallback) callback;
                                passwordCallback.setPassword(password.toCharArray());
                            }
                        }
                    }
                }, request);
                results = true;
            } catch (LoginException e) {
                results = false;
            }
            return results;
        }

    I am asking the question here because I don't know if the issue is with the WebLogic config or the code. If this question is more suited to ServerFault, please let me know and I will post there. It is odd that it works every time in Firefox and Opera but not in Internet Explorer. I wish that not using Internet Explorer were an option, but it is currently the company standard. Any help or direction would be appreciated. I have tested against IE 6 & 8 and deployed the custom provider on 3 different environments, and I can still reproduce the bug.
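    One hedged thing to try, purely an assumption about the cause: IE is more aggressive than Firefox about reusing cached responses for bookmarked URLs, and an intermittent, IE-only failure is consistent with a replayed stale response. A sketch that marks the post-login redirect as uncacheable, using standard HttpServletResponse calls:

        // Hypothetical tweak (assumption: IE replays a cached response for the
        // bookmarked URL): mark the redirect as uncacheable before sending it.
        if (isAuthenticated) {
            res.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
            res.setHeader("Pragma", "no-cache");
            res.setDateHeader("Expires", 0);
            res.sendRedirect(targetURL);
            return;
        }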

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Note: This does not belong on ServerFault because it focuses on programming architecture. I have the following questions regarding the differences between cloud and virtualization:

    1. How is cloud different from virtualization? I tried to find out the pricing of Rackspace, Amazon and all similar cloud providers, and I found that our current 6 dedicated servers come out cheaper than their pricing. So how can anyone claim cloud is cheaper? Is it cheaper only in comparison with normal hosting?
    2. We reorganized our infrastructure in a virtual environment to reduce our configuration overhead at time of failure; we did not have to rewrite any piece of code already written for the earlier setup. So moving to virtualization does not require any reprogramming. But cloud is absolutely different, and it will require entire reprogramming, right? Is it really worth it to recode when our current IT costs are 3-4 times lower than cloud hosting, including RAID backups and all sorts of clustering for high availability? A new programming architecture means new overheads of training staff, new methods of testing and new deployment schemes; does it justify the cloud's promise of "on-demand resource usage"?
    3. Our current development architecture uses simple server-side ASP.NET web services with no local context, and on the client side Flex/Silverlight, which offers a pretty good REST architecture and is highly scalable. How does cloud differ from the REST model of deployment?
    4. On storage, SQL Server or MySQL offer pretty good replication and high availability, so what is the advantage in cloud?
    5. Data guarantees: a vendor of ours hosting some other customer's app on a cloud (one of the most used) lost an entire (virtual) hard disk and an entire module in the first 6 months. A second provider said it's your duty to take backups - fine, I agree - but no provider gives an SLA for data guarantees; they give 99% uptime. However, in most business apps, uptime is less important than data integrity. In our 10 years of dedicated-hosting experience we have had only one hard disk crash. This makes me a little skeptical about going to the cloud and losing control over data. And I feel it's just a big marketing buzz to sell virtualization in a different form.
    6. Size of data: currently all providers charge very heavily for large data. If you are hosting only below 100GB, cloud can be a good alternative, but I think virtual servers and dedicated servers from 100GB up to a few TBs are still cheaper. Why would one want to pay so much for cloud when there is no data guarantee and nothing is said about redundancy?

    (I wish SO had a spell checker for Internet Explorer; sorry for the wrong spellings in my post.)

  • Is MySQL Replication Appropriate in this case?

    - by MJB
    I have a series of databases, each of which is basically standalone. It initially seemed like I needed a replication solution, but the more I researched it, the more it felt like replication was overkill and not useful anyway. I have not done MySQL replication before, so I have been reading up on the online docs, googling, and searching SO for relevant questions, but I can't find a scenario quite like mine. Here is a brief description of my issue:

    1. The various databases almost never have a live connection to each other. They need to be able to "sync" by copying files to a thumb drive and then moving them to the proper destination.
    2. It is OK for the data to not match exactly, but they should have the same parent-child relationships. That is, if a generated key differs between databases, no big deal. But the visible data must match.
    3. Timing is not critical. Updates can be done a week later, or even a month later, as long as they are done eventually.
    4. Updates cannot be guaranteed to be in the proper order, or in any order for that matter. They will be in order from each database, just not between databases.
    5. Rather than a set of master-slave relationships, it is more like a central database (R/W) and multiple remote databases (also R/W).
    6. I won't know how many remote databases I have until they are created, and the central DB won't know that a database exists until data arrives from it. (To me, this implies I cannot use the method of giving each its own unique identity range to guarantee uniqueness in the central database.)
    7. It appears to me that the bottom line is that I don't want "replication" so much as I want "awareness". I want the central database to know what happened in the remote databases, but there is no time requirement. I want the remote databases to be aware of the central database, but they don't need to know about each other.

    WTH is my question? It is this: does this scenario sound like any of the typical replication scenarios, or does it sound like I have to roll my own? Perhaps #7 above is the only one that matters, and given that requirement, out-of-the-box replication is impossible.

    EDIT: I realize that this question might be more suited to ServerFault. I also searched there and found no answers to my questions. And based on the replication questions I did find, both on SO and SF, it seemed that the decision was 50-50 over where to put my question. Sorry if I guessed wrong.
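    A hedged sketch related to point 6: if per-site identity ranges are out, globally unique keys sidestep the collision problem entirely. This is an illustrative schema of the general technique, not something taken from the question:

        -- Hypothetical MySQL schema sketch: UUID keys instead of AUTO_INCREMENT,
        -- so rows created offline at any site can later merge into the central
        -- DB without key collisions (point 6 above).
        CREATE TABLE orders (
            id         CHAR(36)    NOT NULL PRIMARY KEY,  -- filled with UUID()
            site_code  VARCHAR(16) NOT NULL,              -- which remote DB made the row
            created_at DATETIME    NOT NULL,
            payload    TEXT
        );

        INSERT INTO orders (id, site_code, created_at, payload)
        VALUES (UUID(), 'remote-07', NOW(), 'example row');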

  • Roaming user profile issues on Server 2008

    - by Alicia White
    I thought I had cleared a user's profile from a 2008 server, but it keeps coming back. I was looking for the best way to clear a roaming profile in Server 2008, but I have been unable to find anything. I did see the post here: http://serverfault.com/questions/18724/user-profile-keeps-loading-temp-profile

    I wanted to add a comment to that post, but it was closed as not being related to sysadmin work. I think it IS related, because I dealt with precisely this same problem on our Windows 2008 terminal server. Here was the issue: we have a user who was getting an "unable to load your roaming profile" type of error at logon in Windows 2008. Looking at the server, we could see her temp profile listed in the profile list while she was logged on (listed as a "temporary" and not a "roaming" profile). While she was logged on, a folder called C:\Users\Temp.DOMAIN existed in the Users folder, but it disappeared as soon as she logged out.

    When this happened in 2003, we would clear the contents of the roaming profile folder and delete the temp folder in C:\Documents and Settings. The thing is, 2008 behaves a bit differently:

    - Server 2008 creates a new roaming profile folder in the roaming profile share: \\SERVER\ProfileShare\UserName.V2
    - The local profile disappears from the profile list in System Properties, so there is no profile to clear.
    - The local profile folder, C:\Users\Temp.DOMAIN, doesn't stay on the server when the user logs out, so we can't delete it as we would normally do when this sort of thing happens in Windows 2003.

    Despite all of this, every time the user logs back on, the frickin' temp profile always comes back. One of my team-mates, who is much more experienced with 2008, said I should check the registry for the user's profile in this key (the users are listed by SID):

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

    I saw the user's SID listed there, but it ended in .bak. I checked several other servers where she is having the same profile errors: in all cases, her SID ended with .bak. For example (xxx replacing the long SID):

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx.bak

    On the server she was logged on to, there were two keys for her profile in the registry:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx
        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx.bak

    So, here is how I cleared up the issue:

    1. I had the user log off.
    2. I deleted the apparently bad profile keys ending in .bak from the ProfileList key on each server where they appeared.
    3. I made sure her roaming profile folder was empty.
    4. I made sure that all the temp profile folders were gone.
    5. The user logged back on: no more profile errors!

    Anyway, I wanted to make a comment on that closed question, but I didn't see any way to re-open it so I could. I would also like to know: is this the best practice for clearing out a bad roaming profile on Server 2008? I'm having a hard time finding any instructions online on how best to do this, but the method I used seemed to work. I'd like to find some documentation to give to our level-1 support staff so they will know how to clear user profiles on 2008, since this seems to be more involved than clearing user profiles on Server 2003.

    Thanks, Alicia
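    For a level-1 runbook, the manual registry step above can be scripted with the stock reg.exe tool; a minimal sketch, with the SID as a placeholder to fill in from the error, run from an elevated prompt while the user is logged off:

        rem Hypothetical cleanup script (placeholder SID). Removes the stale
        rem .bak ProfileList entry so the next logon rebuilds the profile.
        set SID=S-1-5-21-xxxxx-xxxx
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\%SID%.bak"
        reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\%SID%.bak" /f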

  • pecl-ssh2-0.11 Freebsd Compile error after upgrading to php 5.3.2

    - by penfold45
    Hi, I've been looking for answers for this all day and can find nothing to solve my issue. I also came across a question about this port on ServerFault that I just answered, which will hopefully help someone else. My problem is this: while running "make" in /usr/ports/security/pecl-ssh2 I get this error:

        === Building for pecl-ssh2-0.11
        /bin/sh /usr/ports/security/pecl-ssh2/work/ssh2-0.11/libtool --mode=compile cc -I. -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11 -DPHP_ATOM_INC -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11/include -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11/main -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11 -I/usr/local/include/php -I/usr/local/include/php/main -I/usr/local/include/php/TSRM -I/usr/local/include/php/Zend -I/usr/local/include/php/ext -I/usr/local/include/php/ext/date/lib -I/usr/local/include -I/usr/local/include -DHAVE_CONFIG_H -O2 -pipe -fno-strict-aliasing -c /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c -o ssh2.lo
        cc -I. -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11 -DPHP_ATOM_INC -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11/include -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11/main -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11 -I/usr/local/include/php -I/usr/local/include/php/main -I/usr/local/include/php/TSRM -I/usr/local/include/php/Zend -I/usr/local/include/php/ext -I/usr/local/include/php/ext/date/lib -I/usr/local/include -I/usr/local/include -DHAVE_CONFIG_H -O2 -pipe -fno-strict-aliasing -c /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c -fPIC -DPIC -o .libs/ssh2.o
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c: In function 'zif_ssh2_methods_negotiated':
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:502: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:503: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:507: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:508: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:509: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:510: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:515: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:516: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:517: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:518: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c: In function 'zif_ssh2_poll':
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:891: error: 'zval' has no member named 'is_ref'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:891: error: 'zval' has no member named 'refcount'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:901: error: 'zval' has no member named 'is_ref'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:902: error: 'zval' has no member named 'refcount'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c: In function 'zif_ssh2_publickey_add':
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1011: error: 'zval' has no member named 'is_ref'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1012: error: 'zval' has no member named 'refcount'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1044: warning: passing argument 1 of '_efree' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c: In function 'zif_ssh2_publickey_list':
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1103: warning: passing argument 4 of 'add_assoc_stringl_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1104: warning: passing argument 4 of 'add_assoc_stringl_ex' discards qualifiers from pointer target type
        *** Error code 1
        Stop in /usr/ports/security/pecl-ssh2/work/ssh2-0.11.
        *** Error code 1
        Stop in /usr/ports/security/pecl-ssh2.

    I am trying to recompile this port after upgrading from PHP 5.2.12 to PHP 5.3.2, which was released on FreeBSD over the weekend. I have run out of ideas and steam with this, so if anyone has any ideas on what this might be, I would be truly grateful.
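    For orientation, the errors line up with a real API change: PHP 5.3 removed direct access to the zval fields is_ref and refcount in favor of accessor macros in zend.h. A hedged sketch of the kind of source change involved; the function and variable names are illustrative, not taken from ssh2.c:

        /* Hypothetical illustration of the PHP 5.3 zval API change that breaks
           ssh2-0.11: direct field access was removed in favor of macros. */
        #include "php.h"   /* brings in zend.h and the Z_* macros */

        static void mark_as_reference(zval *zv)
        {
            /* pre-5.3 style (no longer compiles):
             *   zv->is_ref = 1;
             *   zv->refcount = 2;
             */
            Z_SET_ISREF_P(zv);        /* replaces zv->is_ref = 1   */
            Z_SET_REFCOUNT_P(zv, 2);  /* replaces zv->refcount = 2 */
        }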

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements:

    - The network of machines are Amazon EC2 instances running Ubuntu.
    - We access the machines with SSH.
    - Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform).
    - The system will not necessarily have a constant number of machines; we may have to permanently or temporarily alter the number of machines running. This is the reason I'm looking into centralized authentication/storage.
    - The implementation of this effect should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume, for the sake of security, that their software could potentially be malicious.

    I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each:

    - An older ServerFault post recommended NFS & NIS, though the combination has security problems according to an old article by Symantec. The article suggests moving to NIS+, but, as it is old, a Wikipedia article has cited statements suggesting a trend away from NIS+ by Sun. The recommended replacement is another thing I have heard of: LDAP.
    - It looks like LDAP can be used to store user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references to them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible.
    - Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos?
    - I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University: the Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements: I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token).

    Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? Of the options above, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
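    On the "NFS, locked down" branch of the question, a hedged sketch of what a Kerberized NFSv4 export looks like, assuming an already-working Kerberos realm; the path and hostname pattern are placeholders:

        # Hypothetical /etc/exports fragment: require Kerberos authentication
        # instead of trusting client IP addresses, which is the classic
        # NFSv3/NIS weakness. sec=krb5 authenticates, krb5i adds integrity,
        # krb5p adds full encryption on the wire.
        /srv/home  *.internal.example.com(rw,sec=krb5p,no_subtree_check)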

  • Consistent PHP _SERVER variables between Apache and nginx?

    - by Alix Axel
    I'm not sure if this should be asked here or on ServerFault, but here goes... I am trying to get started on nginx with PHP-FPM, but I noticed that the server block setup I currently have (gathered from several guides, including the nginx Pitfalls wiki page) produces $_SERVER variables that are different from what I'm used to seeing in Apache setups. After spending last evening trying to "fix" this, I decided to install Apache on my local computer and gather the variables that I'm interested in under different conditions, so that I could try to mimic them on nginx. The Apache setup on my computer has only one mod_rewrite rule:

        RewriteEngine On
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^(.*)$ /index.php/$1 [L]

    And these are the values I get for different request URIs (left is Apache, right is nginx):

    - localhost/ - http://www.mergely.com/GnzBHRV1/
    - localhost/foo/bar/baz/?foo=bar - http://www.mergely.com/VwsT8oTf/
    - localhost/index.php/foo/bar/baz/?foo=bar - http://www.mergely.com/VGEFehfT/

    What configuration directives would allow me to get similar values on requests handled by nginx? My current configuration in nginx is:

        server {
            listen 80;
            listen 443 ssl;
            server_name default;
            ssl_certificate /etc/nginx/certificates/dummy.crt;
            ssl_certificate_key /etc/nginx/certificates/dummy.key;
            root /var/www/default/html;
            index index.php index.html;
            autoindex on;

            location / {
                try_files $uri $uri/ /index.php;
            }

            location ~ /(?:favicon[.]ico|robots[.]txt)$ {
                log_not_found off;
            }

            location ~* [.]php {
                #try_files $uri =404;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_split_path_info ^(.+[.]php)(/.+)$;
            }

            location ~* [.]ht {
                deny all;
            }
        }

    And my fastcgi_params file looks like this:

        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS $https;

    I know that the try_files $uri =404; directive is commented out and that this is a security vulnerability but, if I uncomment it, the third request (localhost/index.php/foo/bar/baz/?foo=bar) will return a 404. It's also worth noting that my PHP cgi.fix_pathinfo is On (contrary to what some of the guides recommend); if I try to set it to Off, I'm presented with an "Access denied." message on every PHP request. I'm running PHP 5.4.8 and nginx/1.1.19. I don't know what else to try... Help?
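    A hedged sketch of one commonly suggested pattern for this, an assumption rather than a verified fix for this exact setup: $fastcgi_path_info is evaluated lazily and can be cleared by try_files, so capture it into a variable first, keep the 404 check, and set PATH_INFO/SCRIPT_NAME after including fastcgi_params so the Apache-style values win:

        # Hypothetical replacement for the PHP location above.
        location ~* [.]php {
            fastcgi_split_path_info ^(.+?[.]php)(/.*)$;
            set $path_info $fastcgi_path_info;   # save before try_files resets it
            try_files $fastcgi_script_name =404; # check the script, not the full URI
            include fastcgi_params;
            fastcgi_param PATH_INFO $path_info;
            fastcgi_param PATH_TRANSLATED $document_root$path_info;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
        }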

  • Cisco ASA not forwarding traffic from one interface to another

    - by Antoine Benkemoun
    Hello ServerFault, I need help configuring my Cisco ASA 5510. I have set up 4 Cisco ASAs interconnected via a big LAN, and each ASA has 3 or 4 LANs attached to it. The IP routing part is taken care of by OSPF. My problem is on another level. A computer connected to one of the LANs attached to an ASA has no problem communicating with the outside world, the outside world being anything "after" the ASA. My problem is that I am completely unable to have such computers communicate with another LAN connected to the same ASA. To rephrase: I am unable to send traffic from one interface of a given ASA to another interface of the same ASA. My configuration is the following:

        !
        hostname Fuji
        !
        interface Ethernet0/0
         speed 100
         duplex full
         nameif outside
         security-level 0
         ip address 10.0.0.2 255.255.255.0
         no shutdown
        !
        interface Ethernet0/1
         speed 100
         duplex full
         nameif cs4
         no shutdown
         security-level 100
         ip address 10.1.4.1 255.255.255.0
        !
        interface Ethernet0/2
         speed 100
         duplex full
         no shutdown
        !
        interface Ethernet0/2.15
         vlan 15
         nameif cs5
         security-level 100
         ip address 10.1.5.1 255.255.255.0
        !
        interface Ethernet0/2.16
         vlan 16
         nameif cs6
         security-level 100
         ip address 10.1.6.1 255.255.255.0
        !
        interface Management0/0
         speed 100
         duplex full
         nameif management
         security-level 100
         ip address 10.6.0.252 255.255.255.0
        !
        access-list nat_cs4 extended permit ip 10.1.4.0 255.255.255.0 any
        access-list acl_cs4 extended permit ip 10.1.4.0 255.255.255.0 any
        access-list nat_cs5 extended permit ip 10.1.5.0 255.255.255.0 any
        access-list acl_cs5 extended permit ip 10.1.5.0 255.255.255.0 any
        access-list nat_cs6 extended permit ip 10.1.6.0 255.255.255.0 any
        access-list acl_cs6 extended permit ip 10.1.6.0 255.255.255.0 any
        !
        access-list nat_outside extended permit ip any any
        access-list acl_outside extended permit ip any 10.1.4.0 255.255.255.0
        access-list acl_outside extended permit ip any 10.1.5.0 255.255.255.0
        access-list acl_outside extended permit ip any 10.1.6.0 255.255.255.0
        !
        nat (outside) 0 access-list nat_outside
        nat (cs4) 0 access-list nat_cs4
        nat (cs5) 0 access-list nat_cs5
        nat (cs6) 0 access-list nat_cs6
        !
        static (outside,cs4) 0.0.0.0 0.0.0.0 netmask 0.0.0.0
        static (outside,cs5) 0.0.0.0 0.0.0.0 netmask 0.0.0.0
        static (outside,cs6) 0.0.0.0 0.0.0.0 netmask 0.0.0.0
        !
        static (cs4,outside) 10.1.4.0 10.1.4.0 netmask 255.255.255.0
        static (cs4,cs5) 10.1.4.0 10.1.4.0 netmask 255.255.255.0
        static (cs4,cs6) 10.1.4.0 10.1.4.0 netmask 255.255.255.0
        !
        static (cs5,outside) 10.1.5.0 10.1.5.0 netmask 255.255.255.0
        static (cs5,cs4) 10.1.5.0 10.1.5.0 netmask 255.255.255.0
        static (cs5,cs6) 10.1.5.0 10.1.5.0 netmask 255.255.255.0
        !
        static (cs6,outside) 10.1.6.0 10.1.6.0 netmask 255.255.255.0
        static (cs6,cs4) 10.1.6.0 10.1.6.0 netmask 255.255.255.0
        static (cs6,cs5) 10.1.6.0 10.1.6.0 netmask 255.255.255.0
        !
        access-group acl_outside in interface outside
        access-group acl_cs4 in interface cs4
        access-group acl_cs5 in interface cs5
        access-group acl_cs6 in interface cs6
        !
        router ospf 1
         network 10.0.0.0 255.255.255.0 area 1
         network 10.1.4.0 255.255.255.0 area 1
         network 10.1.5.0 255.255.255.0 area 1
         network 10.1.6.0 255.255.255.0 area 1
         log-adj-changes
        !

    There is nothing really complicated in this configuration; it just NATs from one interface to another, and that's it. I have tried enabling same-security-traffic permit inter-interface, but that doesn't help. I therefore must be missing something a little more complicated. Does anyone know why I cannot forward traffic from one interface to another?
    Thank you in advance for your help, Antoine
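    A hedged debugging aid, since the ASA can explain its own drops: packet-tracer simulates a flow and reports which phase (route lookup, ACL, NAT) kills it. The host addresses below are placeholders on the cs4 and cs5 subnets from the config above:

        ! Hypothetical diagnostic from the ASA CLI: trace a TCP flow from a
        ! host on cs4 to a host on cs5 and see which rule drops it.
        packet-tracer input cs4 tcp 10.1.4.10 12345 10.1.5.10 80 detailed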

  • Automatically starting svnserve on Snow Leopard

    - by Cleggy
    Note: I originally asked this question on Server Fault (http://serverfault.com/questions/148052/automatically-starting-svnserve-on-snow-leopard), but I thought this may be a more appropriate place to ask.

    I have installed Subversion onto my iMac running Snow Leopard, but am having trouble getting svnserve to start up automatically. As I understand it (I'm still fairly green with OSX), the best way to do that is to utilize launchd. To that end, I have created the following .plist file in the /Library/LaunchDaemons folder. If I use launchctl to execute this file, svnserve starts as expected, but it doesn't automatically start when the system starts up or I log in.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Disabled</key>
            <false/>
            <key>Label</key>
            <string>org.tigris.subversion.svnserve</string>
            <key>UserName</key>
            <string>Dave</string>
            <key>ProgramArguments</key>
            <array>
                <string>/opt/subversion/bin/svnserve</string>
                <string>--inetd</string>
                <string>--root=/Users/Shared/SVNrep</string>
            </array>
            <key>ServiceDescription</key>
            <string>Subversion Standalone Server</string>
            <key>Sockets</key>
            <dict>
                <key>Listeners</key>
                <array>
                    <dict>
                        <key>SockFamily</key>
                        <string>IPv4</string>
                        <key>SockServiceName</key>
                        <string>svn</string>
                        <key>SockType</key>
                        <string>stream</string>
                    </dict>
                    <dict>
                        <key>SockFamily</key>
                        <string>IPv6</string>
                        <key>SockServiceName</key>
                        <string>svn</string>
                        <key>SockType</key>
                        <string>stream</string>
                    </dict>
                </array>
            </dict>
            <key>inetdCompatibility</key>
            <dict>
                <key>Wait</key>
                <false/>
            </dict>
        </dict>
        </plist>

    I have tried many different configs in the .plist, including auto-starting, simplifying the listeners section, and removing the dependence on inetd, but they all show the same symptom: the file works when started using launchctl load, but does not automatically start svnserve if the iMac is rebooted. If anyone here could provide any suggestions as to how to get this to work, I'd really appreciate it.
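    For reference, a hedged sketch of how I'd expect the daemon to be registered, assuming the plist path above; the -w flag clears any persisted disabled flag so launchd will also load the job at boot. One common gotcha worth checking first: launchd ignores plists in /Library/LaunchDaemons that are not owned by root:wheel with 644 permissions.

        # Hypothetical shell session: validate the plist, fix ownership, then
        # load it so launchd listens on the svn port at boot as well.
        sudo plutil -lint /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo chown root:wheel /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo chmod 644 /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist
        sudo launchctl load -w /Library/LaunchDaemons/org.tigris.subversion.svnserve.plist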

  • How do you re-mount an ext3 fs readwrite after it gets mounted readonly from a disk error?

    - by cagenut
    It's a relatively common problem, when something goes wrong in a SAN, for ext3 to detect the disk write errors and remount the filesystem read-only. That's all well and good; only, when the SAN is fixed, I can't figure out how to re-re-mount the filesystem read-write without rebooting. Behold:

        [root@localhost ~]# multipath -ll
        mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
        [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
        \_ round-robin 0 [prio=2][active]
         \_ 1:0:0:1 sdb 8:16 [active][ready]
         \_ 2:0:0:1 sdc 8:32 [active][ready]
        [root@localhost ~]# mount /dev/mapper/mpath0 /mnt/foo
        [root@localhost ~]# touch /mnt/foo/blah

    All good. Now I yank the LUN out from under it:

        [root@localhost ~]# touch /mnt/foo/blah
        [root@localhost ~]# touch /mnt/foo/blah
        touch: cannot touch `/mnt/foo/blah': Read-only file system
        [root@localhost ~]# tail /var/log/messages
        Mar 18 13:17:33 localhost multipathd: sdb: tur checker reports path is down
        Mar 18 13:17:34 localhost multipathd: sdc: tur checker reports path is down
        Mar 18 13:17:35 localhost kernel: Aborting journal on device dm-2.
        Mar 18 13:17:35 localhost kernel: Buffer I/O error on device dm-2, logical block 1545
        Mar 18 13:17:35 localhost kernel: lost page write due to I/O error on dm-2
        Mar 18 13:17:36 localhost kernel: ext3_abort called.
        Mar 18 13:17:36 localhost kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
        Mar 18 13:17:36 localhost kernel: Remounting filesystem read-only

    It only thinks it's read-only; in reality it's not even there:

        [root@localhost ~]# multipath -ll
        sdb: checker msg is "tur checker reports path is down"
        sdc: checker msg is "tur checker reports path is down"
        mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
        [size=1.1T][features=0][hwhandler=0][rw]
        \_ round-robin 0 [prio=0][enabled]
         \_ 1:0:0:1 sdb 8:16 [failed][faulty]
         \_ 2:0:0:1 sdc 8:32 [failed][faulty]
        [root@localhost ~]# ll /mnt/foo/
        ls: reading directory /mnt/foo/: Input/output error
        total 20
        -rw-r--r-- 1 root root 0 Mar 18 13:11 bar

    How it still remembers that 'bar' file being there is a mystery, but not important right now. Now I re-present the LUN:

        [root@localhost ~]# tail /var/log/messages
        Mar 18 13:23:58 localhost multipathd: sdb: tur checker reports path is up
        Mar 18 13:23:58 localhost multipathd: 8:16: reinstated
        Mar 18 13:23:58 localhost multipathd: mpath0: queue_if_no_path enabled
        Mar 18 13:23:58 localhost multipathd: mpath0: Recovered to normal mode
        Mar 18 13:23:58 localhost multipathd: mpath0: remaining active paths: 1
        Mar 18 13:23:58 localhost multipathd: dm-2: add map (uevent)
        Mar 18 13:23:58 localhost multipathd: dm-2: devmap already registered
        Mar 18 13:23:59 localhost multipathd: sdc: tur checker reports path is up
        Mar 18 13:23:59 localhost multipathd: 8:32: reinstated
        Mar 18 13:23:59 localhost multipathd: mpath0: remaining active paths: 2
        Mar 18 13:23:59 localhost multipathd: dm-2: add map (uevent)
        Mar 18 13:23:59 localhost multipathd: dm-2: devmap already registered
        [root@localhost ~]# multipath -ll
        mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
        [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
        \_ round-robin 0 [prio=2][enabled]
         \_ 1:0:0:1 sdb 8:16 [active][ready]
         \_ 2:0:0:1 sdc 8:32 [active][ready]

    Great, right? It says [rw] right there. Not so fast:

        [root@localhost ~]# touch /mnt/foo/blah
        touch: cannot touch `/mnt/foo/blah': Read-only file system

    OK, it doesn't do it automatically; I'll just give it a little push:

        [root@localhost ~]# mount -o remount /mnt/foo
        mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only

    Noooooooooo. I have tried all sorts of different mount/tune2fs/dmsetup commands and I cannot figure out how to get it to un-flag the block device as write-protected. Rebooting will fix it, but I'd much rather do it on-line. An hour of googling has gotten me nowhere either. Save me, ServerFault.
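    A hedged sketch of the two layers that can hold the read-only flag, since remounting alone may not clear device-level state; the device names match the transcript above, but treat this as exploratory rather than a known-good fix:

        # Hypothetical recovery steps: clear the read-only flag on the block
        # device itself, then ask the filesystem layer to remount read-write.
        blockdev --getro /dev/mapper/mpath0    # prints 1 if the device is marked RO
        blockdev --setrw /dev/mapper/mpath0
        mount -o remount,rw /mnt/foo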

  • Install Intel USB 3.0 eXtensible Host Controller Driver for Windows Server 2008 R2 x64

    - by ffrugone
    According to Intel and Dell, my board is technically a 'desktop' board, and they therefore do not support Intel USB 3.0 eXtensible Host Controller drivers for Windows Server 2008 R2 x64. I'm trying to find a workaround. I found an entry from someone who tried to tackle this, but I can't make his fix work for me. Below I have copied both his entry and my reply. I'm a loyal StackOverflow user, and hopefully the people here at ServerFault can help me.

    anyforumuser, Re: GA-Z77X-UD5H USB3 Drivers not installing? (Reply #6, July 05, 2012):

        Thanks to JoeMiner: his process for the network drivers gave me the clues to figure out how to get the USB3 drivers working. I have got the Intel USB3 drivers working at full speed in Win Server 2008 R2. You have to edit the following files:

        1. In mup.xml, change the "Windows7" to "W2K8".
        2. In setup.if2, under [groups], in the line starting with "HSCSDRIVER", change the "IsOS( ... )" entry to "IsOS(WIN2008_R2,WIN2008_R2_MAXSP)".
        3. In the inf files, for all of them, copy the content of the [Intel.NTAMD64.6.1] group to the [Intel.NTAMD64.6.2] group. Here I am not entirely sure which is correct, so there are some double-ups.
        4. In the drivers folder, copy the "Win7" folder to "win2008", "win2008_r2" and "x64"; i.e. your drivers folder should now contain the "win2008", "win2008_r2" and "x64" folders, and each contains the contents of the Win7 folder (the inf files should already have been fixed).

        Run install; it should install properly and work now. You will have to reboot. If it doesn't work, remove the Intel USB3 controllers from Device Manager and get it to "scan for hardware changes". Good luck!!!

    benevida, Re: GA-Z77X-UD5H Intel Network Drivers not installing? (Reply #7, August 13, 2012):

        Thank you anyforumuser! A process for getting this driver installed was exactly what I needed. However, I've hit a snag. I believe I've followed every step exactly as written, but I'm getting an error during installation: the message "One or more files that are required for installation are either missing or corrupted. Setup will exit." Behind the error, the 'Setup Progress' shows the current step as "Copying File: C:\Program Files (x86)\Intel\Intel(R) USB 3.0 eXtensible Host Controller Driver\Drivers\iusb3xhc.man". I've checked the installation files, and iusb3xhc.man seems to be a viable file in all of the Windows 2008 sub-directories of the Drivers folder. Therefore I don't see how the file could be missing, and I doubt that it is corrupted (although it does NOT exist in the \Drivers\HCSwitch folder).

        I opened 'setup.if2', and two aspects of the step of copying iusb3xhc.man caught my eye. First, the steps immediately preceding it are set to 'error=ignore'; if they hadn't completed successfully, this is the first step where we'd hear about it. Second, this is the first step where the relative path '%source%\drivers\%_os%\%_ia%\' is used; if I haven't named the Windows 2008 sub-directories correctly, I could see where things are fouling up. In any event, if someone could take a look and make suggestions, I'd appreciate it. Thank you.

    Read the article

  • Cannot Start MySQL Server on Fresh MAMP Install

    - by alexpelan
    I'm using Mac OS X 10.6.2 on my MacBook Pro. I can get the Apache server to start, but not the MySQL server, on both the default Apache and default MAMP ports. When I try to go to my start page, I get the message "Error: Could not connect to MySQL server!". Here's what's in my MySQL error log:

        100513 02:00:07 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended
        100513 02:00:16 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql
        100513  2:00:16 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in a future release. Please use '--slow_query_log'/'--slow_query_log_file' instead.
        100513  2:00:16 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2
        100513  2:00:16 [Warning] One can only use the --user switch if running as root
        100513  2:00:16 [Note] Plugin 'FEDERATED' is disabled.
        100513  2:00:16 [Note] Plugin 'ndbcluster' is disabled.
        InnoDB: Error: log file /usr/local/mysql/data/ib_logfile0 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 16777216 bytes!
        100513  2:00:16 [ERROR] Plugin 'InnoDB' init function returned error.
        100513  2:00:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100513  2:00:16 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb'
        100513  2:00:16 [ERROR] Aborting
        100513  2:00:16 [Note] /Applications/MAMP/Library/libexec/mysqld: Shutdown complete
        100513 02:00:16 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended

    A couple of things:

    1) There are a bunch of different .cnf files that come with MAMP (my-huge, my-medium, etc.); how can I tell which one is actually being used?

    2) I deleted ib_logfile0 and ib_logfile1 as recommended by another post on ServerFault, and then ended up with more errors:

        100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file /usr/local/mysql/data/ib_logfile0 size to 16 MB
        InnoDB: Database physically writes the file full: wait...
        100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file /usr/local/mysql/data/ib_logfile1 size to 16 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100519 16:01:31 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        100519 16:01:31 InnoDB: Started; log sequence number 0 44556
        100519 16:01:31 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb'
        100519 16:01:31 [ERROR] Aborting

    And then I got this the next time I tried to run it:

        InnoDB: Unable to lock /usr/local/mysql/data/ibdata1, error: 35
        InnoDB: Check that you do not already have another mysqld process
        InnoDB: using the same InnoDB data or log files.

    Sorry that this is a lot of information, but I don't want to leave anything out. Thanks.
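
    Two lines in that log point at a concrete sketch of a fix. The "unknown option '--skip-bdb'" error means the active .cnf still carries an option this mysqld build no longer understands, and the ib_logfile size mismatch means innodb_log_file_size in that same file disagrees with the log files on disk. Assuming MAMP's usual layout (the paths below are taken from the log above):

        # mysqld prints its config file search order near the top of this
        # output, which answers question 1 directly
        /Applications/MAMP/Library/libexec/mysqld --verbose --help | grep -A1 'Default options'
        # make sure no stray mysqld is still holding the data files
        # (that is what "Unable to lock ibdata1, error: 35" means)
        ps ax | grep [m]ysqld
        # with MySQL fully stopped: comment out skip-bdb in the active .cnf,
        # then remove the stale InnoDB log files so they are recreated at the
        # configured size on the next start
        rm /usr/local/mysql/data/ib_logfile0 /usr/local/mysql/data/ib_logfile1

    This is a sketch, not a recipe: take a copy of the data directory before deleting anything InnoDB-related.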

    Read the article

  • How to disable proxy requests once a server has been added to spammers "open proxy" list?

    - by Matt
    Hello all, I've just started at a new company and have been going over the setup of their Apache web server conf files, only to find that their Apache servers have been set up as open proxies, available to the whole world, for the last two months. I've already set ProxyRequests Off in the httpd.conf file and restarted the web server, but the access log file is still growing at a horrendous rate (about a gig a day). I noticed that another question was posted on here about this (http://serverfault.com/questions/63715/apache-hit-with-proxy-request), but their access log was supposedly returning 404 errors, while mine appears to be returning both 403 and 404 codes. Is this correct? Here are a few lines out of my access log:

        87.118.118.124 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.c5interlude.ru/torrent/viewtopic.php?p=2501 HTTP/1.0" 404 219 "http://www.c5interlude.ru/torrent/viewtopic.php?p=2501" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)"
        117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=300x250&section=790074 HTTP/1.0" 404 200 "http://www.newbiegamer.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Alexa Toolbar)"
        122.224.55.222 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1" 403 214 "http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar" "Mozilla/4.0"
        58.55.21.40 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.cpx24.com/ad1.js HTTP/1.0" 404 204 "http://thebighits.com/?id=aibux" "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"
        122.226.223.188 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.reduxmedia.com/st?ad_type=iframe&ad_size=160x600&section=798636 HTTP/1.0" 404 200 "http://www.gvvu.com" "Mozilla/4.0 (compatible; MSIE 5.5; AOL 6.0; Windows 98; Win 9x 4.90)"
        84.51.109.31 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.kslp.ru/forum/index.php HTTP/1.0" 404 213 "http://www.kslp.ru/forum/index.php" "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0 ; .NET CLR 2.0.50215; SL Commerce Client v1.0; Tablet PC 2.0"
        122.224.48.49 - - [16/Mar/2010:10:56:36 -0400] "GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1" 403 214 "http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe" "Mozilla/4.0"
        117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=728x90&section=657624 HTTP/1.0" 404 200 "http://www.raiseanimals.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Alexa Toolbar)"

    And my corresponding error log entries:

        [Tue Mar 16 10:56:36 2010] [error] [client 87.118.118.124] File does not exist: C:/public_html/torrent, referer: http://www.c5interlude.ru/torrent/viewtopic.php?p=2501
        [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.newbiegamer.com
        [Tue Mar 16 10:56:36 2010] [error] [client 122.224.55.222] (22)Invalid argument: Cannot map GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1 to file, referer: http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar
        [Tue Mar 16 10:56:36 2010] [error] [client 58.55.21.40] File does not exist: C:/public_html/ad1.js, referer: http://thebighits.com/?id=aibux
        [Tue Mar 16 10:56:36 2010] [error] [client 122.226.223.188] File does not exist: C:/public_html/st, referer: http://www.gvvu.com
        [Tue Mar 16 10:56:36 2010] [error] [client 84.51.109.31] File does not exist: C:/public_html/forum, referer: http://www.kslp.ru/forum/index.php
        [Tue Mar 16 10:56:36 2010] [error] [client 122.224.48.49] (22)Invalid argument: Cannot map GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1 to file, referer: http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe
        [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.raiseanimals.com

    Does this in fact look like the server is blocking them correctly, and is there anything else that I could do to cut down on my access log size? (Perhaps block these requests from the server completely?) Thanks! Matt
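
    For what it's worth, the 403 and 404 responses above mean Apache is already refusing to proxy; the log growth is just the botnets still knocking. A sketch of one way to refuse absolute-URI requests outright and keep them out of the main access log; directive placement and the log path are assumptions, and this presumes mod_rewrite is loaded:

        # alongside ProxyRequests Off in httpd.conf (Apache 2.2 era)
        RewriteEngine On
        # THE_REQUEST holds the raw request line, e.g. "GET http://evil.example/ HTTP/1.0"
        RewriteCond %{THE_REQUEST} ^[A-Z]+\s+https?:// [NC]
        # refuse with 403 and tag the request so the access log can skip it
        RewriteRule .* - [F,E=proxy_probe:1]
        CustomLog logs/access_log common env=!proxy_probe

    If the env flag doesn't survive the forbidden short-circuit on a given build, the probes are still blocked, just logged; in that case the only way to shed the traffic itself is to drop the worst offending IPs at the firewall and wait for the botnets' open-proxy lists to age out, which can take weeks.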

    Read the article

  • iCloud stuff stops working while connected to OpenVPN

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is Viscosity on Mac OS X 10.8.2, and after some testing we can rule out the client as being part of the problem. Everything has been working fine except for Apple's iCloud services. Web surfing, email, FTP, NNTP, and Skype all work as expected. It's ONLY the iCloud services that cease to function: if I connect to the VPN, I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud services all start working again. Connect again, and iCloud stops working. Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over: Client -> work VPN -> my OpenVPN box -> Internet, or Client -> AirPort Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet. Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server.

    I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't appeared yet and seems to still be in their moderation queue. I tried the freenode IRC channels, but nobody is awake, so here I am. I have Googled extensively for this and can find nothing that is related. Help me get the iCloud services working again! (I tried serverfault, where it was closed as off-topic, so I'm trying here and the Unix site as well, since this is a more general audience that might know more about OpenVPN, judging by the number of questions I see asked about it.)

    EDIT:

    - I have also tried upgrading to Version: 2.3-beta1-debian0; the issue persists.
    - Removed all iptables rules except for the ones that flush.
    - Left this rule: iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source (server ip)
    - Added: iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT

    Still, nothing works. I can see traffic in tcpdump on the server if I watch the tunnel:

        20:03:48.702835 IP nk11p01st-courier105-bz.push.apple.com.5223 > 10.9.8.6.60772: Flags [F.], seq 2635, ack 1218, win 76, options [nop,nop,TS val 914984811 ecr 745921298], length 0
        20:03:48.911244 IP 10.9.8.6.60772 > nk11p01st-courier105-bz.push.apple.com.5223: Flags [R], seq 3621143451, win 0, length 0

    But still, no push messages/notifications are ever delivered. :/

    EDIT: Further testing indicates that it might actually be the client after all.
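
    For anyone else stuck here: Apple push notifications ride one long-lived TCP connection to *.push.apple.com on port 5223, and the [F.]/[R] exchange in that tcpdump (the Apple side closing, the client answering with a reset) is what a dead NAT mapping or an MSS/MTU problem tends to look like. A sketch of things worth trying on the OpenVPN box, with every value here a guess to be tuned rather than a known fix:

        # 1. let established flows outlive Apple's push keepalive interval,
        #    so the SNAT mapping for the idle 5223 connection isn't dropped
        sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=86400
        # 2. try a smaller MSS clamp in server.conf, then restart openvpn:
        #      mssfix 1300
        # 3. watch the push traffic while reproducing the failure
        tcpdump -ni tun0 'tcp port 5223'

    Given the final edit, it's also worth re-testing from a network without the double NAT before changing anything else on the server.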

    Read the article

  • Publishing an Excel spreadsheet using Microsoft SBS 2008 to a web page that is viewable by mobile ph

    - by Dave Heath
    I am getting well out of my "superuser" depth here and would love some support. At work we have an Excel workbook (*.xls format, circa Office 2003) which maintains our "engineers" timesheet. This tracks what events we are doing across the year and how many "work units" each is. As workbooks go it is fairly simple, with just a few =SUM(range) cells and some cells linked across sheets (12 sheets, one for each month). It is stored on a server, in a folder that gives "management" full access and "engineers" read-only access. The workbook itself is read-only for "engineers" and full access for "management"; I think these permissions are controlled through Active Directory. The workbook is protected with a password, presumably to allow "management" to edit it even when working at a terminal logged in as an "engineer". This protection prevents "engineers" from going to certain cells to see formulae, and therefore from editing them. The workbook has a macro which saves and closes it ten minutes after opening, so that no one person who has logged in with editing privileges can lock out the rest of "management".

    I hope this is making sense to someone... :S

    Now then, we have Microsoft Small Business Server 2008. We have a shiny new web-based login for when we are offsite, so we can get to Exchange webmail and our internal site (which uses SharePoint 3.0). "Management" would like this timesheet to be published automatically after changes (they don't want to have to do anything different to what they are currently doing), so that "engineers" can check it on an iPhone while out of the office. I am currently having a look at "Excel Services" for Office 2007 on TechNet, but I am not sure if I am running down the right garden path at the moment.

    < EDIT
    This seems to suggest that I have to have SharePoint Server 2007, with no mention of SharePoint 3.0...

    "MOSS builds on WSS by adding both core features as well as end user web parts" - Wikipedia entry for Microsoft Office SharePoint Server (MOSS). This is not good news.

    "...and using the ASP.NET APIs, web parts can be written to extend the functionality of WSS." - Wikipedia entry for Windows SharePoint Services. Could this bring back what I need? Is this good news? Do I need to start learning ASP.NET?

    This link implies that we need MOSS to do what I want, and the bosses say we ain't getting it: http://serverfault.com/questions/20198/what-is-some-cool-things-you-can-do-with-sharepoint-2007/22128#22128

    Back to the drawing board.
    < /EDIT

    Please could someone suggest some "further reading" to help point me in the right direction, or to put me back on the right track. Many thanks. I will try to keep this up to date with how I get on.

    Read the article

  • Email forwarding from my domain to gmail - FAIL

    - by pitosalas
    [There are numerous similar questions on ServerFault but I couldn't find one that was exactly on point.]

    Background: I use Gmail as my email client; my address there is me@gmail.com. However, the address people actually write to is me@example.com, and I run the server that hosts www.example.com and other domains, at ServerBeach. Up to yesterday, SENDMAIL painlessly forwarded mail for me@example.com on to me@gmail.com, and everything was fine, for several years in fact.

    Suddenly my email stopped working; that is, my Gmail account stopped receiving the emails forwarded from my server. Looking into it, I found a bunch of emails sitting on my server with content like this:

        ... while talking to gmail-smtp-in.l.google.com.:
        RCPT To:
        <<< 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
        <<< 450-4.2.1 prevents additional messages from being delivered. Please resend your
        <<< 450-4.2.1 message at a later time. If the user is able to receive mail at that
        <<< 450-4.2.1 time, your message will be delivered. For more information, please
        <<< 450 4.2.1 visit xxxxxx://mail.google.com/support/bin/answer.py?answer=6592 u15si37138086qco.76
        Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
        DATA
        <<< 550-5.7.1 [64.34.168.137 1] Our system has detected an unusual rate of
        <<< 550-5.7.1 unsolicited mail originating from your IP address. To protect our
        <<< 550-5.7.1 users from spam, mail sent from your IP address has been blocked.
        <<< 550-5.7.1 Please visit xxxxx://www.google.com/mail/help/bulk_mail.html to review
        <<< 550 5.7.1 our Bulk Email Senders Guidelines. u15si37138086qco.76
        554 5.0.0 Service unavailable
        ... while talking to alt1.gmail-smtp-in.l.google.com.:

    From what I've been researching, I think someone has hijacked, or is hijacking, my domain name or something, and this has somehow caused Gmail's servers to notice and cut me off. But I don't really know what's going on, nor do I see whatever emails might be involved. I've read material on zoneedit.com that sounds like their service might handle what I am trying to do. I have also read a lot about administering DNS and SENDMAIL and tried various things, but nothing works.

    Can you tell from my description what caused Gmail's servers to stop accepting email from my server, and is there a way to stop it? What is the 'correct' way to configure things so that emails to me@example.com behave as if they were sent to me@gmail.com? Thanks so much!
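
    A short sketch of how to size up the problem from the server side, assuming a stock sendmail layout (the log path varies by distribution). The 450/550 responses above are Google's rate-limiting and spam blocks, and a blind forward trips them easily: every piece of spam sent to me@example.com gets relayed to Gmail from this one IP, so no hijacking is needed to explain the block.

        # how much mail is queued, and how much of it is bound for gmail
        mailq | head -30
        mailq | grep -ci 'gmail.com'
        # watch sendmail retry the deferred messages
        tail -f /var/log/maillog

    The durable fixes are either to filter before forwarding (for example with SpamAssassin, so spam never leaves for Gmail) or to drop the forward entirely and let Gmail fetch the example.com mailbox over POP3 ("Check mail from other accounts" under Settings > Accounts), which takes the server's IP reputation out of the picture.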

    Read the article
