Search Results

Search found 3089 results on 124 pages for 'lock'.


  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
    1xx - Informational These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes: 100 - Continue. 101 - Switching protocols. 2xx - Success These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes: 200 - OK. The client request has succeeded. 201 - Created. 202 - Accepted. 203 - Nonauthoritative information. 204 - No content. 205 - Reset content. 206 - Partial content. 3xx - Redirection These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server. Or, the client browser may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes: 301 - Moved permanently. 302 - Object moved. 304 - Not modified. 307 - Temporary redirect. 4xx - Client error These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist. Or, the client browser may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes: 400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error: 400.1 - Invalid Destination Header. 400.2 - Invalid Depth Header. 400.3 - Invalid If Header. 400.4 - Invalid Overwrite Header. 400.5 - Invalid Translate Header. 400.6 - Invalid Request Body. 400.7 - Invalid Content Length. 400.8 - Invalid Timeout. 400.9 - Invalid Lock Token. 401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log: 401.1 - Logon failed. 401.2 - Logon failed due to server configuration. 401.3 - Unauthorized due to ACL on resource. 401.4 - Authorization failed by filter. 401.5 - Authorization failed by ISAPI/CGI application. 403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error: 403.1 - Execute access forbidden. 403.2 - Read access forbidden. 403.3 - Write access forbidden. 403.4 - SSL required. 403.5 - SSL 128 required. 403.6 - IP address rejected. 403.7 - Client certificate required. 403.8 - Site access denied. 403.9 - Forbidden: Too many clients are trying to connect to the Web server. 403.10 - Forbidden: Web server is configured to deny Execute access. 403.11 - Forbidden: Password has been changed. 403.12 - Mapper denied access. 403.13 - Client certificate revoked. 403.14 - Directory listing denied. 403.15 - Forbidden: Client access licenses have exceeded limits on the Web server. 403.16 - Client certificate is untrusted or invalid. 403.17 - Client certificate has expired or is not yet valid. 403.18 - Cannot execute requested URL in the current application pool. 403.19 - Cannot execute CGI applications for the client in this application pool. 403.20 - Forbidden: Passport logon failed. 403.21 - Forbidden: Source access denied. 403.22 - Forbidden: Infinite depth is denied. 404 - Not found. 
IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error: 404.0 - Not found. 404.1 - Site Not Found. 404.2 - ISAPI or CGI restriction. 404.3 - MIME type restriction. 404.4 - No handler configured. 404.5 - Denied by request filtering configuration. 404.6 - Verb denied. 404.7 - File extension denied. 404.8 - Hidden namespace. 404.9 - File attribute hidden. 404.10 - Request header too long. 404.11 - Request contains double escape sequence. 404.12 - Request contains high-bit characters. 404.13 - Content length too large. 404.14 - Request URL too long. 404.15 - Query string too long. 404.16 - DAV request sent to the static file handler. 404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping. 404.18 - Querystring sequence denied. 404.19 - Denied by filtering rule. 405 - Method Not Allowed. 406 - Client browser does not accept the MIME type of the requested page. 408 - Request timed out. 412 - Precondition failed. 5xx - Server error These HTTP status codes indicate that the server cannot complete the request because the server encounters an error. IIS 7.0 uses the following server error HTTP status codes: 500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error: 500.0 - Module or ISAPI error occurred. 500.11 - Application is shutting down on the Web server. 500.12 - Application is busy restarting on the Web server. 500.13 - Web server is too busy. 500.15 - Direct requests for Global.asax are not allowed. 500.19 - Configuration data is invalid. 500.21 - Module not recognized. 500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode. 500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode. 500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode. 500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. Note Here is where the distributed rules configuration is read for both inbound and outbound rules. 500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. Note Here is where the global rules configuration is read. 500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred. 500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated. 500.100 - Internal ASP error. 501 - Header values specify a configuration that is not implemented. 502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error: 502.1 - CGI application timeout. 502.2 - Bad gateway. 503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error: 503.0 - Application pool unavailable. 503.2 - Concurrent request limit exceeded.
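
    To make the classes above concrete, here is a small Java sketch (not part of the original write-up; the URL is a placeholder) that requests a page from an IIS server and reports which class the response falls into. Note that only the three-digit status travels on the HTTP status line; the dotted sub-statuses such as 404.3 are an IIS-side convention.

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class StatusCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder URL - point this at the IIS 7.0 site being tested.
                URL url = new URL("http://localhost/some-page.aspx");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setInstanceFollowRedirects(false); // keep 3xx responses visible
                int code = conn.getResponseCode();      // e.g. 200, 301, 403, 500

                switch (code / 100) {
                    case 1:  System.out.println(code + " - informational"); break;
                    case 2:  System.out.println(code + " - success"); break;
                    case 3:  System.out.println(code + " - redirection"); break;
                    case 4:  System.out.println(code + " - client error"); break;
                    case 5:  System.out.println(code + " - server error"); break;
                    default: System.out.println(code + " - unexpected");
                }
                conn.disconnect();
            }
        }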

    Read the article

  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
    If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week – it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, do anything – we also read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on. Skills/Equipment/Manpower We Possessed I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a county for more than 20 years – so lots and lots of practical experience on hand. We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water and cold beer. Why We Did This Our deck was relatively young – it was built in 2005. However, the pressure-treated boards must not have been adequately maintained before we bought the house. I had powerwashed the deck every other year and had it stained a few times. The boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance. More bad deck boards The deck boards were in bad shape Things We Learned The two most important things: The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official TREX YouTube video helped immensely, and we should have watched that first. When pre-drilling holes for the boards that need to be screwed down – DO NOT pre-drill through the underlying framing wood. ONLY pre-drill through the TREX itself. If you drill into the framing, the screw won’t seat in the board properly. Instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the place that sold me the screws to find this out. So about a third of our screws look like crap. If it doesn’t look or feel right – stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. We didn’t do this, and now I’m going to see every screw that’s not flush with the boards and get upset. Oh well. The Process How much time did it take? Well, I spent about 8 hours taking the deck apart. And then the 3 of us spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later. Now for some pictures… This 6”+ pry bar made the destruction of the old deck much easier Most of the joists, once exposed, were OK. This joist wasn’t sitting on ANYTHING before. We think a lazy gas person cut the board to sneak a gas line in. 
    Awesome… These monster lag bolts had to be accounted for when putting in the additional framing The border pattern Sheri wanted to put in required a lot more framing. These were the first boards to go down – we screwed them in as there was no way to attach clips I sat, kicked in the boards, and then drilled these clips in – but my wife was able to go MUCH faster by using her hands to lock the boards in and drill on her knees. I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight. The first board took FOREVER to go in, but then when we got rolling, we were able to put in a 20′ board in less than 10 minutes. This was the end of construction day #2 – we got much further than we thought we would. Ah, the dreaded last 10% – what to do here? Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too. My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed with the proper clearance and all that jazz. The stairs with stringers and toe kicks – this was worth the effort It started raining on us as I screwed down the steps – but we managed to get our shade tent up on the deck to protect us from the rain too The stairs, finished Finished, mostly Good corner shot The top of the stairs Stairs, looking down Celebratory beer In Summary There are a few things we’re not happy with. I think we can fix them up – but later. I have a few things left to finish: rewire the lighting, get the gas grill put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would never have done it myself. But I’m glad that I did have that help and did do that project. It’s not often you get to spend that kind of quality time with family and building cool stuff.

    Read the article

  • Secure Your Wireless Router: 8 Things You Can Do Right Now

    - by Chris Hoffman
    A security researcher recently discovered a backdoor in many D-Link routers, allowing anyone to access the router without knowing the username or password. This isn’t the first router security issue and won’t be the last. To protect yourself, you should ensure that your router is configured securely. This is about more than just enabling Wi-Fi encryption and not hosting an open Wi-Fi network. Disable Remote Access Routers offer a web interface, allowing you to configure them through a browser. The router runs a web server and makes this web page available when you’re on the router’s local network. However, most routers offer a “remote access” feature that allows you to access this web interface from anywhere in the world. Even if you set a username and password, if you have a D-Link router affected by this vulnerability, anyone would be able to log in without any credentials. If you have remote access disabled, you’d be safe from people remotely accessing your router and tampering with it. To do this, open your router’s web interface and look for the “Remote Access,” “Remote Administration,” or “Remote Management” feature. Ensure it’s disabled — it should be disabled by default on most routers, but it’s good to check. Update the Firmware Like our operating systems, web browsers, and every other piece of software we use, router software isn’t perfect. The router’s firmware — essentially the software running on the router — may have security flaws. Router manufacturers may release firmware updates that fix such security holes, although they quickly discontinue support for most routers and move on to the next models. Unfortunately, most routers don’t have an auto-update feature like Windows and our web browsers do — you have to check your router manufacturer’s website for a firmware update and install it manually via the router’s web interface. Check to be sure your router has the latest available firmware installed. Change Default Login Credentials Many routers have default login credentials that are fairly obvious, such as the password “admin”. If someone gained access to your router’s web interface through some sort of vulnerability or just by logging onto your Wi-Fi network, it would be easy to log in and tamper with the router’s settings. To avoid this, change the router’s password to a non-default password that an attacker couldn’t easily guess. Some routers even allow you to change the username you use to log into your router. Lock Down Wi-Fi Access If someone gains access to your Wi-Fi network, they could attempt to tamper with your router — or just do other bad things like snoop on your local file shares or use your connection to downloaded copyrighted content and get you in trouble. Running an open Wi-Fi network can be dangerous. To prevent this, ensure your router’s Wi-Fi is secure. This is pretty simple: Set it to use WPA2 encryption and use a reasonably secure passphrase. Don’t use the weaker WEP encryption or set an obvious passphrase like “password”. Disable UPnP A variety of UPnP flaws have been found in consumer routers. Tens of millions of consumer routers respond to UPnP requests from the Internet, allowing attackers on the Internet to remotely configure your router. Flash applets in your browser could use UPnP to open ports, making your computer more vulnerable. UPnP is fairly insecure for a variety of reasons. To avoid UPnP-based problems, disable UPnP on your router via its web interface. 
If you use software that needs ports forwarded — such as a BitTorrent client, game server, or communications program — you’ll have to forward ports on your router without relying on UPnP. Log Out of the Router’s Web Interface When You’re Done Configuring It Cross site scripting (XSS) flaws have been found in some routers. A router with such an XSS flaw could be controlled by a malicious web page, allowing the web page to configure settings while you’re logged in. If your router is using its default username and password, it would be easy for the malicious web page to gain access. Even if you changed your router’s password, it would be theoretically possible for a website to use your logged-in session to access your router and modify its settings. To prevent this, just log out of your router when you’re done configuring it — if you can’t do that, you may want to clear your browser cookies. This isn’t something to be too paranoid about, but logging out of your router when you’re done using it is a quick and easy thing to do. Change the Router’s Local IP Address If you’re really paranoid, you may be able to change your router’s local IP address. For example, if its default address is 192.168.0.1, you could change it to 192.168.0.150. If the router itself were vulnerable and some sort of malicious script in your web browser attempted to exploit a cross site scripting vulnerability, accessing known-vulnerable routers at their local IP address and tampering with them, the attack would fail. This step isn’t completely necessary, especially since it wouldn’t protect against local attackers — if someone were on your network or software was running on your PC, they’d be able to determine your router’s IP address and connect to it. Install Third-Party Firmwares If you’re really worried about security, you could also install a third-party firmware such as DD-WRT or OpenWRT. You won’t find obscure back doors added by the router’s manufacturer in these alternative firmwares. Consumer routers are shaping up to be a perfect storm of security problems — they’re not automatically updated with new security patches, they’re connected directly to the Internet, manufacturers quickly stop supporting them, and many consumer routers seem to be full of bad code that leads to UPnP exploits and easy-to-exploit backdoors. It’s smart to take some basic precautions. Image Credit: Nuscreen on Flickr     

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
    "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge." – Lao Tzu You may have noticed Thoughtworks recently crowned the likes of AngularJS, etc. as imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say, as I was reading the analysis, I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc. bandwagon a good few years ago, seemingly similarly crowning these dynamic languages as imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was, given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long-time, pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character, PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare-metal JavaScript, CSS and HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF were not at least a capable if not outstanding technology. In fact, if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself are built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real-world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum, such that you really can think of just writing your view markup and then simply wiring up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
    You should then multiply the additional work for the typical enterprise project to try to understand what the productivity trade-offs are. This is reason alone for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF. Thanks to its focus on components from the ground up, JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEFaces and of course ADF Faces/Mobile. These component libraries taken together constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and think about the fact that you can readily use the widgets on that showcase by just using some simple markup and knowing next to nothing about AJAX, JavaScript or CSS. JSF has the fair and legitimate advantage of being an open, vendor-neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness are virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in or even create your own compatible implementation! As you might gather from the quote at the top of the post, I am not a fan of crystal ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem, maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to just JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC... Please note that any views expressed here are my own and certainly do not reflect the position of Oracle as a company.
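
    To make the "wire up some properties and event handlers on a POJO" point concrete, here is a minimal backing-bean sketch of my own (not from the post) for JSF 2.x; the names are invented, and the facelet side would simply bind an h:inputText and an h:commandButton to it with EL expressions such as #{greetingBean.name} and #{greetingBean.greet}:

        import java.io.Serializable;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.RequestScoped;

        // A plain POJO: the page markup binds straight to these properties and
        // to the action method, with no servlet or controller plumbing to write.
        @ManagedBean
        @RequestScoped
        public class GreetingBean implements Serializable {

            private String name;     // bound to <h:inputText value="#{greetingBean.name}"/>
            private String message;  // bound to <h:outputText value="#{greetingBean.message}"/>

            public String greet() {  // bound to <h:commandButton action="#{greetingBean.greet}"/>
                message = "Hello, " + name + "!";
                return null;         // null navigation outcome: stay on the same view
            }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getMessage() { return message; }
        }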

    Read the article

  • Can't get the L2TP IPSEC up and running

    - by Maciej Swic
    i have an Ubuntu 11.10 (oneiric) server running on a ReadyNAS. Im planning to use this to accept ipsec+l2tp connections through a router. However, the connection is failing somewhere half through. Using Openswan IPsec U2.6.28/K3.0.0-12-generic and trying to connect with an iOS 5 iPhone 4S. This is how far i can get: auth.log: Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "PSK" Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "L2TP-PSK-NAT" Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "L2TP-PSK-noNAT" Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "passthrough-for-non-l2tp" Jan 19 13:54:11 ubuntu pluto[1990]: listening for IKE messages Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: Trying new style NAT-T Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19) Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: Trying old style NAT-T Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 192.168.19.99:500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 192.168.19.99:4500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo 127.0.0.1:500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo 127.0.0.1:4500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo ::1:500 Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 2001:470:28:81:a00:27ff:* Jan 19 13:54:11 ubuntu pluto[1990]: loading secrets from "/etc/ipsec.secrets" Jan 19 13:54:11 ubuntu pluto[1990]: loading secrets from "/var/lib/openswan/ipsec.secrets.inc" Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [RFC 3947] method set to=109 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] method set to=110 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [8f8d83826d246b6fc7a8a6a428c11de8] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [439b59f8ba676c4c7737ae22eab8f582] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [4d1e0e136deafa34c4f3ea9f02ec7285] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [80d0bb3def54565ee84645d4c85ce3ee] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [9909b64eed937c6573de52ace952fa6b] Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 110 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 110 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 110 Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [Dead Peer Detection] Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: responding to Main Mode from unknown peer 95.*.*.233 Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: STATE_MAIN_R1: sent MR1, expecting MI2 Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: 
NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): both are NATed Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: STATE_MAIN_R2: sent MR2, expecting MI3 Jan 19 14:05:03 ubuntu pluto[1990]: ERROR: asynchronous network error report on eth0 (sport=500) for message to 95.*.*.233 port 500, complainant 95.*.*.233: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)] Router config UDP 500, 1701 and 4500 forwarded to 192.168.19.99 (Ubuntu server for ipsec). Ipsec passthrough enabled. /etc/ipsec.conf # /etc/ipsec.conf - Openswan IPsec configuration file # This file: /usr/share/doc/openswan/ipsec.conf-sample # # Manual: ipsec.conf.5 version 2.0 # conforms to second version of ipsec.conf specification config setup nat_traversal=yes #charonstart=yes #plutostart=yes protostack=netkey conn PSK authby=secret forceencaps=yes pfs=no auto=add keyingtries=3 dpdtimeout=60 dpdaction=clear rekey=no left=192.168.19.99 leftnexthop=192.168.19.1 leftprotoport=17/1701 right=%any rightprotoport=17/%any rightsubnet=vhost:%priv,%no dpddelay=10 #dpdtimeout=10 #dpdaction=clear include /etc/ipsec.d/l2tp-psk.conf /etc/ipsec.d/l2tp-psk.conf conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT # # PreSharedSecret needs to be specified in /etc/ipsec.secrets as # YourIPAddress %any: "sharedsecret" authby=secret pfs=no auto=add keyingtries=3 # we cannot rekey for %any, let client rekey rekey=no # Set ikelifetime and keylife to same defaults windows has ikelifetime=8h keylife=1h # l2tp-over-ipsec is transport mode type=transport # left=192.168.19.99 # # For updated Windows 2000/XP clients, # to support old clients as well, use leftprotoport=17/%any leftprotoport=17/1701 # # The remote user. # right=%any # Using the magic port of "0" means "any one single port". This is # a work around required for Apple OSX clients that use a randomly # high port, but propose "0" instead of their port. rightprotoport=17/%any dpddelay=10 dpdtimeout=10 dpdaction=clear conn passthrough-for-non-l2tp type=passthrough left=192.168.19.99 leftnexthop=192.168.19.1 right=0.0.0.0 rightsubnet=0.0.0.0/0 auto=route /etc/ipsec.secrets include /var/lib/openswan/ipsec.secrets.inc %any %any: PSK "my-key" 192.168.19.99 %any: PSK "my-key" /etc/xl2tpd/xl2tpd.conf [global] debug network = yes debug tunnel = yes ipsec saref = no listen-addr = 192.168.19.99 [lns default] ip range = 192.168.19.201-192.168.19.220 local ip = 192.168.19.99 require chap = yes refuse chap = no refuse pap = no require authentication = no ppp debug = yes pppoptfile = /etc/ppp/options.xl2tpd length bit = yes /etc/ppp/options.xl2tpd pcp-accept-local ipcp-accept-remote noccp auth crtscts idle 1800 mtu 1410 mru 1410 defaultroute debug lock proxyarp connect-delay 5000 ipcp-accept-local /etc/ppp/chap-secrets # Secrets for authentication using CHAP # client server secret IP addresses maciekish * my-secret * * maciekish my-secret * I can't seem to find the problem. Other ipsec connections to other hosts work from the network im currently at.

    Read the article

  • Juniper SSG-5 subinterface vlan routing to the internet

    - by catfish
    I'm unable to get a brand new Juniper SSG-5 with latest 6.3.0r05 firmware routing to the internet from a subinterface I created on bgroup0 setup as vlan2 (bgroup0.1 on "wifi" zone). When connected on the default vlan it gets on the internet just fine. When I switch to vlan2 I'm unable to get to the internet. I am able to get the correct ip address (10.150.0.0/24) from dhcp, able to get to the juniper management page, etc but nothing past the firewall, can't ping 4.2.2.2 or the internet gateway. Even setting up logging on the wifi-to-untrust policy and it does shows the attempts (it's it's timeouts). 172.31.16.0/24 is the untrusted lan, it's already nat'ed but works fine for testing. Can ping this ip from the default vlan but not from vlan2 192.168.1.0/24 is the trusted main lan 10.150.0.0/24 is the wifi isolated lan on vlan2 The idea is to setup an AP with lan and guest access (AP supports multiple ssid's on different vlans). I know I can setup the juniper to use different ports for the wifi lan and use their procurve switch to do the vlan separation, but I never used vlan'ing on a Juniper firewall and I would like to try it out this way. Here is the complete config file: unset key protection enable set clock timezone -5 set vrouter trust-vr sharable set vrouter "untrust-vr" exit set vrouter "trust-vr" unset auto-route-export exit set alg appleichat enable unset alg appleichat re-assembly enable set alg sctp enable set auth-server "Local" id 0 set auth-server "Local" server-name "Local" set auth default auth server "Local" set auth radius accounting port 1646 set admin name "netscreen" set admin password "xxxxxxxxxxxxxxxx" set admin auth web timeout 10 set admin auth dial-in timeout 3 set admin auth server "Local" set admin format dos set zone "Trust" vrouter "trust-vr" set zone "Untrust" vrouter "trust-vr" set zone "DMZ" vrouter "trust-vr" set zone "VLAN" vrouter "trust-vr" set zone id 100 "Wifi" set zone "Untrust-Tun" vrouter "trust-vr" set zone "Trust" tcp-rst set zone "Untrust" block unset zone "Untrust" tcp-rst set zone "MGT" block unset zone "V1-Trust" tcp-rst unset zone "V1-Untrust" tcp-rst set zone "DMZ" tcp-rst unset zone "V1-DMZ" tcp-rst unset zone "VLAN" tcp-rst unset zone "Wifi" tcp-rst set zone "Untrust" screen tear-drop set zone "Untrust" screen syn-flood set zone "Untrust" screen ping-death set zone "Untrust" screen ip-filter-src set zone "Untrust" screen land set zone "V1-Untrust" screen tear-drop set zone "V1-Untrust" screen syn-flood set zone "V1-Untrust" screen ping-death set zone "V1-Untrust" screen ip-filter-src set zone "V1-Untrust" screen land set interface "ethernet0/0" zone "Untrust" set interface "ethernet0/1" zone "Untrust" set interface "bgroup0" zone "Trust" set interface "bgroup0.1" tag 2 zone "Wifi" set interface "bgroup1" zone "DMZ" set interface bgroup0 port ethernet0/2 set interface bgroup0 port ethernet0/3 set interface bgroup0 port ethernet0/4 set interface bgroup0 port ethernet0/5 set interface bgroup0 port ethernet0/6 unset interface vlan1 ip set interface ethernet0/0 ip 172.31.16.243/24 set interface ethernet0/0 route set interface bgroup0 ip 192.168.1.1/24 set interface bgroup0 nat set interface bgroup0.1 ip 10.150.0.1/24 set interface bgroup0.1 nat set interface bgroup0.1 mtu 1500 unset interface vlan1 bypass-others-ipsec unset interface vlan1 bypass-non-ip set interface ethernet0/0 ip manageable set interface bgroup0 ip manageable set interface bgroup0.1 ip manageable set interface ethernet0/0 manage ping set interface ethernet0/1 manage ping 
set interface bgroup0.1 manage ping set interface bgroup0.1 manage telnet set interface bgroup0.1 manage web unset interface bgroup1 manage ping set interface bgroup0 dhcp server service set interface bgroup0.1 dhcp server service set interface bgroup0 dhcp server auto set interface bgroup0.1 dhcp server enable set interface bgroup0 dhcp server option gateway 192.168.1.1 set interface bgroup0 dhcp server option netmask 255.255.255.0 set interface bgroup0 dhcp server option dns1 8.8.8.8 set interface bgroup0.1 dhcp server option lease 1440 set interface bgroup0.1 dhcp server option gateway 10.150.0.1 set interface bgroup0.1 dhcp server option netmask 255.255.255.0 set interface bgroup0.1 dhcp server option dns1 8.8.8.8 set interface bgroup0 dhcp server ip 192.168.1.33 to 192.168.1.126 set interface bgroup0.1 dhcp server ip 10.150.0.50 to 10.150.0.100 unset interface bgroup0 dhcp server config next-server-ip unset interface bgroup0.1 dhcp server config next-server-ip set interface "serial0/0" modem settings "USR" init "AT&F" set interface "serial0/0" modem settings "USR" active set interface "serial0/0" modem speed 115200 set interface "serial0/0" modem retry 3 set interface "serial0/0" modem interval 10 set interface "serial0/0" modem idle-time 10 set flow tcp-mss unset flow no-tcp-seq-check set flow tcp-syn-check unset flow tcp-syn-bit-check set flow reverse-route clear-text prefer set flow reverse-route tunnel always set pki authority default scep mode "auto" set pki x509 default cert-path partial set crypto-policy exit set ike respond-bad-spi 1 set ike ikev2 ike-sa-soft-lifetime 60 unset ike ikeid-enumeration unset ike dos-protection unset ipsec access-session enable set ipsec access-session maximum 5000 set ipsec access-session upper-threshold 0 set ipsec access-session lower-threshold 0 set ipsec access-session dead-p2-sa-timeout 0 unset ipsec access-session log-error unset ipsec access-session info-exch-connected unset ipsec access-session use-error-log set url protocol websense exit set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit set policy id 1 exit set policy id 2 from "Wifi" to "Untrust" "Any" "Any" "ANY" permit log set policy id 2 exit set nsmgmt bulkcli reboot-timeout 60 set ssh version v2 set config lock timeout 5 unset license-key auto-update set telnet client enable set snmp port listen 161 set snmp port trap 162 set snmpv3 local-engine id "0162122009006149" set vrouter "untrust-vr" exit set vrouter "trust-vr" unset add-default-route set route 0.0.0.0/0 interface ethernet0/0 gateway 172.31.16.1 exit set vrouter "untrust-vr" exit set vrouter "trust-vr" exit

    Read the article

  • Openswan + xl2tpd connections time out after a while

    - by Halfgaar
    I have a non-NATed Openswan+xl2tpd server (Ubuntu 12.04), to which I connect with a Windows 8 behind NAT. The client loses its connection after a while of doing nothing (between 30 and 60 minutes, but I didn't time it). The client doesn't have enabled that it should kill inactive connections. Nor does it ever go into sleep mode. I also tried setting the kill-after-time to 24 hours, but that didn't help. The NAT router behind which the client located is Debian Linux, and its router is a Cisco which connects us directly to the data center where the server is. None of our other connections, like SSH, get dropped with inactivity (because of cheap routers). I did however try turning on the keepalives in /etc/ipsec.conf: config setup (...snip...) nat_traversal=yes force_keepalive=yes keep_alive=10 but that didn't help. As you can see in the config later, dead peer detection's action is clear. That would be a first suggestion to fix, but I need clear, because people will be connecting from everwhere but the kitchen sink. Besides, as I said, in the test setup I have now, I can't see any device killing its connection. (edit: 'restart' also has the same effect) These are of one time it happened: Jul 18 16:18:06 host xl2tpd[1918]: Maximum retries exceeded for tunnel 49070. Closing. Jul 18 16:18:06 host xl2tpd[1918]: Terminating pppd: sending TERM signal to pid 18359 Jul 18 16:18:06 host xl2tpd[1918]: Connection 4 closed to 89.188.x.y, port 1701 (Timeout) Jul 18 16:18:11 host xl2tpd[1918]: Unable to deliver closing message for tunnel 49070. Destroying anyway. and these on another: Jul 18 17:44:39 host xl2tpd[1918]: udp_xmit failed to 89.188.x.y:1701 with err=-1:Operation not permitted Jul 18 17:44:43 xl2tpd[1918]: last message repeated 4 times Jul 18 17:44:43 host xl2tpd[1918]: Maximum retries exceeded for tunnel 10918. Closing. Jul 18 17:44:43 host xl2tpd[1918]: udp_xmit failed to 89.188.x.y:1701 with err=-1:Operation not permitted Jul 18 17:44:43 host xl2tpd[1918]: Terminating pppd: sending TERM signal to pid 26338 Jul 18 17:44:43 host xl2tpd[1918]: Connection 6 closed to 89.188.x.y, port 1701 (Timeout) Jul 18 17:44:44 host xl2tpd[1918]: udp_xmit failed to 89.188.x.y:1701 with err=-1:Operation not permitted Jul 18 17:44:48 xl2tpd[1918]: last message repeated 3 times Jul 18 17:44:48 host xl2tpd[1918]: Unable to deliver closing message for tunnel 10918. Destroying anyway. Jul 18 17:44:59 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:44:59 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Jul 18 17:45:09 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:09 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Jul 18 17:45:19 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:19 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Jul 18 17:45:29 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:29 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Jul 18 17:45:39 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:39 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. 
Jul 18 17:45:49 host xl2tpd[1918]: Can not find tunnel 10918 (refhim=0) Jul 18 17:45:49 host xl2tpd[1918]: network_thread: unable to find call or tunnel to handle packet. call = 0, tunnel = 10918 Dumping. Versions: Ubuntu 12.04 Openswan: 2.6.37-1 xl2tpd: 3.1+dfsg-1 kernel: 3.2.0-49-generic configs: /etc/ipsec.conf: version 2.0 # conforms to second version of ipsec.conf specification config setup nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:!10.152.2.0/24 oe=off protostack=netkey force_keepalive=yes keep_alive=10 conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=2 rekey=no dpddelay=30 dpdtimeout=120 dpdaction=clear ikelifetime=8h keylife=1h type=transport left=%defaultroute leftprotoport=17/1701 right=%any rightprotoport=17/%any /etc/xl2tpd/xl2tpd.conf [global] ipsec saref = no [lns default] ip range = 10.152.2.2-10.152.2.254 local ip = 10.152.2.1 refuse chap = yes refuse pap = yes require authentication = yes ppp debug = no pppoptfile = /etc/ppp/options.xl2tpd length bit = yes /etc/ppp/options.xl2tpd: require-mschap-v2 refuse-mschap ms-dns 10.152.2.1 asyncmap 0 auth crtscts idle 1800 mtu 1200 mru 1200 lock hide-password local #debug name l2tpd proxyarp lcp-echo-interval 30 lcp-echo-failure 4

    Read the article

  • Combining HBase and HDFS results in Exception in makeDirOnFileSystem

    - by utrecht
    Introduction An attempt to combine HBase and HDFS results in the following: 2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Dir ectory, retries exhausted 2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. java.io.IOException: Exception in makeDirOnFileSystem at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFile System.java:136) at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFi leSystem.java:428) at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSyst emLayout(MasterFileSystem.java:148) at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSyst em.java:133) at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.j ava:572) at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432) at java.lang.Thread.run(Thread.java:744) Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPe rmissionChecker.java:224) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPe rmissionChecker.java:204) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermi ssion(FSPermissionChecker.java:149) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(F SNamesystem.java:4891) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(F SNamesystem.java:4873) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAcce ss(FSNamesystem.java:4847) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FS Namesystem.java:3192) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNames ystem.java:3156) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesyst em.java:3137) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameN odeRpcServer.java:669) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTra nslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Cl ientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:4497 0) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.cal l(ProtobufRpcEngine.java:453) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInforma tion.java:1438) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstruct orAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingC onstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteExce ption.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteExc eption.java:57) at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153) at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122) at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSy stem.java:545) at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915) at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFile System.java:129) ... 6 more while configuration and system settings are as follows: [vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/ Found 1 items -rw-r--r-- 3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/u buntu-14.04-desktop-amd64.iso [vagrant@localhost hadoop-hdfs]$ /etc/hadoop/conf/core-site.xml <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://localhost:8020</value> </property> </configuration> /etc/hbase/conf/hbase-site.xml <configuration> <property> <name>hbase.rootdir</name> <value>hdfs://localhost:8020/hbase</value> </property> <property> <name>hbase.cluster.distributed</name> <value>true</value> </property> </configuration> /etc/hadoop/conf/hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/var/lib/hadoop-hdfs/cache</value> </property> <property> <name>dfs.data.dir</name> <value>/tmp/hellodatanode</value> </property> </configuration> NameNode directory permissions [vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache total 8 -rwxrwxrwx. 1 hbase hdfs 15 Jun 8 23:43 in_use.lock drwxrwxrwx. 2 hbase hdfs 4096 Jun 8 23:43 current [vagrant@localhost hadoop-hdfs]$ HMaster is able to start if fs.defaultFS property has been commented in core-site.xml NameNode is listening [vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070 tcp 0 0 0.0.0.0:50070 0.0.0.0:* LIST EN off (0.00/0/0) tcp 0 0 33.33.33.33:50070 33.33.33.1:57493 ESTA BLISHED off (0.00/0/0) and accessible by navigating to http://33.33.33.33:50070/dfshealth.jsp. Question How to solve makeDirOnFileSystem exception and let HBase connect to HDFS?
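
    Not part of the original question, but to illustrate the check that is failing: the stack trace shows the hbase user being denied WRITE on the HDFS root ("/"), which is owned by vagrant with mode drwxr-xr-x. One possible direction (an assumption on my part, and it has to run as a user with HDFS superuser rights) is to pre-create the hbase.rootdir and hand it to the hbase user via the Hadoop FileSystem API:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.fs.permission.FsPermission;

        public class PrepareHBaseRootDir {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://localhost:8020"); // matches core-site.xml above

                // Needs to run as an HDFS superuser (e.g. the account the NameNode runs as);
                // otherwise mkdirs/setOwner fail with the same AccessControlException.
                FileSystem fs = FileSystem.get(conf);
                Path hbaseRoot = new Path("/hbase"); // matches hbase.rootdir above

                if (!fs.exists(hbaseRoot)) {
                    fs.mkdirs(hbaseRoot, new FsPermission((short) 0755));
                }
                // Give the directory to the hbase user so HMaster can create its layout.
                fs.setOwner(hbaseRoot, "hbase", "supergroup");

                System.out.println("Owner now: " + fs.getFileStatus(hbaseRoot).getOwner());
                fs.close();
            }
        }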

    Read the article

  • localhost/phpmyadmin pulls blank page

    - by Atul Modi
    When I tried configuring local machine as a Internet Gateway with website development capabilities over it and I installed all required software into it. I also had disable the selinux into it. But PROBLEM is when I do http://localhost/phpMyAdmin or all lower case than the page shows it as a blank page. I am pasting code from httpd.conf file into this as well as from phpMyAdmin.conf file I am using Fedora 16 for this. httpd.conf ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 5 StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule dbd_module modules/mod_dbd.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module 
modules/mod_version.so Include conf.d/*.conf User apache Group apache ServerAdmin root@localhost UseCanonicalName Off DocumentRoot "/var/www/html" Options FollowSymLinks AllowOverride None Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all UserDir disabled DirectoryIndex index.html index.htm index.php AccessFileName .htaccess Order allow,deny Deny from all Satisfy All TypesConfig /etc/mime.types DefaultType text/plain MIMEMagicFile conf/magic HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %s %b" common LogFormat "%{Referer}i - %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" AllowOverride None Options None Order allow,deny Allow from all IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. 
AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif ReadmeName README.html HeaderName HEADER.html IndexIgnore .??* *~ # HEADER README* RCS CVS *,v *,t AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType application/x-tar .tgz AddType application/x-httpd-php .php AddType application/x-httpd-php .xml AddHandler application/x-httpd-php .xml AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml Alias /error/ "/var/www/error/" AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en ForceLanguagePriority Prefer Fallback ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4.0" force-response-1.0 BrowserMatch "Java/1.0" force-response-1.0 BrowserMatch "JDK/1.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully Order allow,deny Allow from all # phpMyAdmin.conf Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Can anyone help into this area please. Urgent reply will be appreciatable because i am struggling since one and half month for this matter. thank you, Atul

    Read the article

  • Create user in Oracle 11g with same privileges as in Oracle 10g XE

    - by Álvaro G. Vicario
    I'm a PHP developer (not a DBA) and I've been working with Oracle 10g XE for a while. I'm used to XE's simplified user management: Go to Administration/ Users/ Create user Assign user name and password Roles: leave the default ones (connect and resource) Privileges: click on "Enable all" to select the 11 possible ones Create This way I get a user that has full access to its data and no access to everything else. This is fine since I only need it to develop my app. When the app is to be deployed, the client's DBAs configure the environment. Now I have to create users in a full Oracle 11g server and I'm completely lost. I have a new concept (profiles) and there're like 20 roles and hundreds of privileges in various categories. What steps do I need to complete in Oracle Enterprise Manager in order to obtain a user with the same privileges I used to assign in XE? ==== UPDATE ==== I think I'd better provide a detailed explanation so I make myself clearer. This is how I create a user in 10g XE: Roles: [X] CONNECT [X] RESOURCE [ ] DBA Direct Asignment System Privileges: [ ] CREATE DATABASE LINK [ ] CREATE MATERIALIZED VIEW [ ] CREATE PROCEDURE [ ] CREATE PUBLIC SYNONYM [ ] CREATE ROLE [ ] CREATE SEQUENCE [ ] CREATE SYNONYM [ ] CREATE TABLE [ ] CREATE TRIGGER [ ] CREATE TYPE [ ] CREATE VIEW I click on Enable All and I'm done. This is what I'm asked when doing the same in 11g: Profile: (*) DEFAULT ( ) WKSYS_PROF ( ) MONITORING_PROFILE Roles: CONNECT: [ ] Admin option [X] Default value Edit List: AQ_ADMINISTRATOR_ROLE AQ_USER_ROLE AUTHENTICATEDUSER CSW_USR_ROLE CTXAPP CWM_USER DATAPUMP_EXP_FULL_DATABASE DATAPUMP_IMP_FULL_DATABASE DBA DELETE_CATALOG_ROLE EJBCLIENT EXECUTE_CATALOG_ROLE EXP_FULL_DATABASE GATHER_SYSTEM_STATISTICS GLOBAL_AQ_USER_ROLE HS_ADMIN_ROLE IMP_FULL_DATABASE JAVADEBUGPRIV JAVAIDPRIV JAVASYSPRIV JAVAUSERPRIV JAVA_ADMIN JAVA_DEPLOY JMXSERVER LOGSTDBY_ADMINISTRATOR MGMT_USER OEM_ADVISOR OEM_MONITOR OLAPI_TRACE_USER OLAP_DBA OLAP_USER OLAP_XS_ADMIN ORDADMIN OWB$CLIENT OWB_DESIGNCENTER_VIEW OWB_USER RECOVERY_CATALOG_OWNER RESOURCE SCHEDULER_ADMIN SELECT_CATALOG_ROLE SPATIAL_CSW_ADMIN SPATIAL_WFS_ADMIN WFS_USR_ROLE WKUSER WM_ADMIN_ROLE XDBADMIN XDB_SET_INVOKER XDB_WEBSERVICES XDB_WEBSERVICES_OVER_HTTP XDB_WEBSERVICES_WITH_PUBLIC System Privileges: <Empty> Edit List: ACCESS_ANY_WORKSPACE ADMINISTER ANY SQL TUNING SET ADMINISTER DATABASE TRIGGER ADMINISTER RESOURCE MANAGER ADMINISTER SQL MANAGEMENT OBJECT ADMINISTER SQL TUNING SET ADVISOR ALTER ANY ASSEMBLY ALTER ANY CLUSTER ALTER ANY CUBE ALTER ANY CUBE DIMENSION ALTER ANY DIMENSION ALTER ANY EDITION ALTER ANY EVALUATION CONTEXT ALTER ANY INDEX ALTER ANY INDEXTYPE ALTER ANY LIBRARY ALTER ANY MATERIALIZED VIEW ALTER ANY MINING MODEL ALTER ANY OPERATOR ALTER ANY OUTLINE ALTER ANY PROCEDURE ALTER ANY ROLE ALTER ANY RULE ALTER ANY RULE SET ALTER ANY SEQUENCE ALTER ANY SQL PROFILE ALTER ANY TABLE ALTER ANY TRIGGER ALTER ANY TYPE ALTER DATABASE ALTER PROFILE ALTER RESOURCE COST ALTER ROLLBACK SEGMENT ALTER SESSION ALTER SYSTEM ALTER TABLESPACE ALTER USER ANALYZE ANY ANALYZE ANY DICTIONARY AUDIT ANY AUDIT SYSTEM BACKUP ANY TABLE BECOME USER CHANGE NOTIFICATION COMMENT ANY MINING MODEL COMMENT ANY TABLE CREATE ANY ASSEMBLY CREATE ANY CLUSTER CREATE ANY CONTEXT CREATE ANY CUBE CREATE ANY CUBE BUILD PROCESS CREATE ANY CUBE DIMENSION CREATE ANY DIMENSION CREATE ANY DIRECTORY CREATE ANY EDITION CREATE ANY EVALUATION CONTEXT CREATE ANY INDEX CREATE ANY INDEXTYPE CREATE ANY JOB CREATE ANY LIBRARY CREATE ANY MATERIALIZED VIEW CREATE ANY MEASURE 
FOLDER CREATE ANY MINING MODEL CREATE ANY OPERATOR CREATE ANY OUTLINE CREATE ANY PROCEDURE CREATE ANY RULE CREATE ANY RULE SET CREATE ANY SEQUENCE CREATE ANY SQL PROFILE CREATE ANY SYNONYM CREATE ANY TABLE CREATE ANY TRIGGER CREATE ANY TYPE CREATE ANY VIEW CREATE ASSEMBLY CREATE CLUSTER CREATE CUBE CREATE CUBE BUILD PROCESS CREATE CUBE DIMENSION CREATE DATABASE LINK CREATE DIMENSION CREATE EVALUATION CONTEXT CREATE EXTERNAL JOB CREATE INDEXTYPE CREATE JOB CREATE LIBRARY CREATE MATERIALIZED VIEW CREATE MEASURE FOLDER CREATE MINING MODEL CREATE OPERATOR CREATE PROCEDURE CREATE PROFILE CREATE PUBLIC DATABASE LINK CREATE PUBLIC SYNONYM CREATE ROLE CREATE ROLLBACK SEGMENT CREATE RULE CREATE RULE SET CREATE SEQUENCE CREATE SESSION CREATE SYNONYM CREATE TABLE CREATE TABLESPACE CREATE TRIGGER CREATE TYPE CREATE USER CREATE VIEW CREATE_ANY_WORKSPACE DEBUG ANY PROCEDURE DEBUG CONNECT SESSION DELETE ANY CUBE DIMENSION DELETE ANY MEASURE FOLDER DELETE ANY TABLE DEQUEUE ANY QUEUE DROP ANY ASSEMBLY DROP ANY CLUSTER DROP ANY CONTEXT DROP ANY CUBE DROP ANY CUBE BUILD PROCESS DROP ANY CUBE DIMENSION DROP ANY DIMENSION DROP ANY DIRECTORY DROP ANY EDITION DROP ANY EVALUATION CONTEXT DROP ANY INDEX DROP ANY INDEXTYPE DROP ANY LIBRARY DROP ANY MATERIALIZED VIEW DROP ANY MEASURE FOLDER DROP ANY MINING MODEL DROP ANY OPERATOR DROP ANY OUTLINE DROP ANY PROCEDURE DROP ANY ROLE DROP ANY RULE DROP ANY RULE SET DROP ANY SEQUENCE DROP ANY SQL PROFILE DROP ANY SYNONYM DROP ANY TABLE DROP ANY TRIGGER DROP ANY TYPE DROP ANY VIEW DROP PROFILE DROP PUBLIC DATABASE LINK DROP PUBLIC SYNONYM DROP ROLLBACK SEGMENT DROP TABLESPACE DROP USER ENQUEUE ANY QUEUE EXECUTE ANY ASSEMBLY EXECUTE ANY CLASS EXECUTE ANY EVALUATION CONTEXT EXECUTE ANY INDEXTYPE EXECUTE ANY LIBRARY EXECUTE ANY OPERATOR EXECUTE ANY PROCEDURE EXECUTE ANY PROGRAM EXECUTE ANY RULE EXECUTE ANY RULE SET EXECUTE ANY TYPE EXECUTE ASSEMBLY EXPORT FULL DATABASE FLASHBACK ANY TABLE FLASHBACK ARCHIVE ADMINISTER FORCE ANY TRANSACTION FORCE TRANSACTION FREEZE_ANY_WORKSPACE GLOBAL QUERY REWRITE GRANT ANY OBJECT PRIVILEGE GRANT ANY PRIVILEGE GRANT ANY ROLE IMPORT FULL DATABASE INSERT ANY CUBE DIMENSION INSERT ANY MEASURE FOLDER INSERT ANY TABLE LOCK ANY TABLE MANAGE ANY FILE GROUP MANAGE ANY QUEUE MANAGE FILE GROUP MANAGE SCHEDULER MANAGE TABLESPACE MERGE ANY VIEW MERGE_ANY_WORKSPACE ON COMMIT REFRESH QUERY REWRITE READ ANY FILE GROUP REMOVE_ANY_WORKSPACE RESTRICTED SESSION RESUMABLE ROLLBACK_ANY_WORKSPACE SELECT ANY CUBE SELECT ANY CUBE DIMENSION SELECT ANY DICTIONARY SELECT ANY MINING MODEL SELECT ANY SEQUENCE SELECT ANY TABLE SELECT ANY TRANSACTION UNDER ANY TABLE UNDER ANY TYPE UNDER ANY VIEW UNLIMITED TABLESPACE UPDATE ANY CUBE UPDATE ANY CUBE BUILD PROCESS UPDATE ANY CUBE DIMENSION UPDATE ANY TABLE Object Privileges: <Empty> Add: Clase Java Clases de Trabajos Cola Columna de Tabla Columna de Vista Espacio de Trabajo Función Instantánea Origen Java Paquete Planificaciones Procedimiento Programas Secuencia Sinónimo Tabla Tipos Trabajos Vista Consumer Group Privileges: <Empty> Default Consumer Group: (*) None Edit List: AUTO_TASK_CONSUMER_GROUP BATCH_GROUP DEFAULT_CONSUMER_GROUP INTERACTIVE_GROUP LOW_GROUP ORA$AUTOTASK_HEALTH_GROUP ORA$AUTOTASK_MEDIUM_GROUP ORA$AUTOTASK_SPACE_GROUP ORA$AUTOTASK_SQL_GROUP ORA$AUTOTASK_STATS_GROUP ORA$AUTOTASK_URGENT_GROUP ORA$DIAGNOSTICS SYS_GROUP And, of course, I wonder what options I should pick.
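    For reference, this is roughly what the XE wizard does for me, expressed as plain SQL run from a shell via sqlplus (connecting as sysdba on the database host is assumed only for the sketch). The schema name, password and tablespace below are made up for the example; CONNECT and RESOURCE cover most of the XE checkboxes, and the handful they miss are granted directly:

        $ sqlplus / as sysdba <<'SQL'
        -- hypothetical developer schema; adjust name, password and tablespace
        CREATE USER appdev IDENTIFIED BY "change_me"
          DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
        GRANT CONNECT, RESOURCE TO appdev;
        -- privileges from the XE checkbox list that RESOURCE does not already include
        GRANT CREATE VIEW, CREATE SYNONYM, CREATE PUBLIC SYNONYM,
              CREATE DATABASE LINK, CREATE MATERIALIZED VIEW, CREATE ROLE TO appdev;
        SQL

    If it has to go through Enterprise Manager instead, the equivalent seems to be: keep the DEFAULT profile, tick CONNECT and RESOURCE as default roles, and add only those six system privileges from the long list.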

    Read the article

  • Active Directory Password Policy Problem

    - by Will
    To Clarify: my question is why isn't my password policy applying to people in the domain. Hey guys, having trouble with our password policy in Active Directory. Sometimes it just helps me to type out what I’m seeing It appears to not be applying properly across the board. I am new to this environment and AD in general but I think I have a general grasp of what should be going on. It’s a pretty simple AD setup without too many Group Policies being applied. It looks something like this DOMAIN Default Domain Policy (link enabled) Password Policy (link enabled and enforce) Personal OU Force Password Change (completely empty nothing in this GPO) IT OU Lockout Policy (link enabled and enforced) CS OU Lockout Policy Accouting OU Lockout Policy The password policy and default domain policy both define the same things under Computer ConfigWindows seetings sec settings Account Policies / Password Policy Enforce password History : 24 passwords remembered Maximum Password age : 180 days Min password age: 14 days Minimum Password Length: 6 characters Password must meet complexity requirements: Enabled Store Passwords using reversible encryption: Disabled Account Policies / Account Lockout Policy Account Lockout Duration 10080 Minutes Account Lockout Threshold: 5 invalid login attempts Reset Account Lockout Counter after : 30 minutes IT lockout This just sets the screen saver settings to lock computers when the user is Idle. After running Group Policy modeling it seems like the password policy and default domain policy is getting applied to everyone. Here is the results of group policy modeling on MO-BLANCKM using the mblanck account, as you can see the policies are both being applied , with nothing important being denied Group Policy Results NCLGS\mblanck on NCLGS\MO-BLANCKM Data collected on: 12/29/2010 11:29:44 AM Summary Computer Configuration Summary General Computer name NCLGS\MO-BLANCKM Domain NCLGS.local Site Default-First-Site-Name Last time Group Policy was processed 12/29/2010 10:17:58 AM Group Policy Objects Applied GPOs Name Link Location Revision Default Domain Policy NCLGS.local AD (15), Sysvol (15) WSUS-52010 NCLGS.local/WSUS/Clients AD (54), Sysvol (54) Password Policy NCLGS.local AD (58), Sysvol (58) Denied GPOs Name Link Location Reason Denied Local Group Policy Local Empty Security Group Membership when Group Policy was applied BUILTIN\Administrators Everyone S-1-5-21-507921405-1326574676-682003330-1003 BUILTIN\Users NT AUTHORITY\NETWORK NT AUTHORITY\Authenticated Users NCLGS\MO-BLANCKM$ NCLGS\Admin-ComputerAccounts-GP NCLGS\Domain Computers WMI Filters Name Value Reference GPO(s) None Component Status Component Name Status Last Process Time Group Policy Infrastructure Success 12/29/2010 10:17:59 AM EFS recovery Success (no data) 10/28/2010 9:10:34 AM Registry Success 10/28/2010 9:10:32 AM Security Success 10/28/2010 9:10:34 AM User Configuration Summary General User name NCLGS\mblanck Domain NCLGS.local Last time Group Policy was processed 12/29/2010 11:28:56 AM Group Policy Objects Applied GPOs Name Link Location Revision Default Domain Policy NCLGS.local AD (7), Sysvol (7) IT-Lockout NCLGS.local/Personal/CS AD (11), Sysvol (11) Password Policy NCLGS.local AD (5), Sysvol (5) Denied GPOs Name Link Location Reason Denied Local Group Policy Local Empty Force Password Change NCLGS.local/Personal Empty Security Group Membership when Group Policy was applied NCLGS\Domain Users Everyone BUILTIN\Administrators BUILTIN\Users NT AUTHORITY\INTERACTIVE NT AUTHORITY\Authenticated Users 
LOCAL NCLGS\MissingSkidEmail NCLGS\Customer_Service NCLGS\Email_Archive NCLGS\Job Ticket Users NCLGS\Office Staff NCLGS\CUSTOMER SERVI-1 NCLGS\Prestige_Jobs_Email NCLGS\Telecommuters NCLGS\Everyone - NCL WMI Filters Name Value Reference GPO(s) None Component Status Component Name Status Last Process Time Group Policy Infrastructure Success 12/29/2010 11:28:56 AM Registry Success 12/20/2010 12:05:51 PM Scripts Success 10/13/2010 10:38:40 AM Computer Configuration Windows Settings Security Settings Account Policies/Password Policy Policy Setting Winning GPO Enforce password history 24 passwords remembered Password Policy Maximum password age 180 days Password Policy Minimum password age 14 days Password Policy Minimum password length 6 characters Password Policy Password must meet complexity requirements Enabled Password Policy Store passwords using reversible encryption Disabled Password Policy Account Policies/Account Lockout Policy Policy Setting Winning GPO Account lockout duration 10080 minutes Password Policy Account lockout threshold 5 invalid logon attempts Password Policy Reset account lockout counter after 30 minutes Password Policy Local Policies/Security Options Network Security Policy Setting Winning GPO Network security: Force logoff when logon hours expire Enabled Default Domain Policy Public Key Policies/Autoenrollment Settings Policy Setting Winning GPO Enroll certificates automatically Enabled [Default setting] Renew expired certificates, update pending certificates, and remove revoked certificates Disabled Update certificates that use certificate templates Disabled Public Key Policies/Encrypting File System Properties Winning GPO [Default setting] Policy Setting Allow users to encrypt files using Encrypting File System (EFS) Enabled Certificates Issued To Issued By Expiration Date Intended Purposes Winning GPO SBurns SBurns 12/13/2007 5:24:30 PM File Recovery Default Domain Policy For additional information about individual settings, launch Group Policy Object Editor. Public Key Policies/Trusted Root Certification Authorities Properties Winning GPO [Default setting] Policy Setting Allow users to select new root certification authorities (CAs) to trust Enabled Client computers can trust the following certificate stores Third-Party Root Certification Authorities and Enterprise Root Certification Authorities To perform certificate-based authentication of users and computers, CAs must meet the following criteria Registered in Active Directory only Administrative Templates Windows Components/Windows Update Policy Setting Winning GPO Allow Automatic Updates immediate installation Enabled WSUS-52010 Allow non-administrators to receive update notifications Enabled WSUS-52010 Automatic Updates detection frequency Enabled WSUS-52010 Check for updates at the following interval (hours): 1 Policy Setting Winning GPO Configure Automatic Updates Enabled WSUS-52010 Configure automatic updating: 4 - Auto download and schedule the install The following settings are only required and applicable if 4 is selected. 
Scheduled install day: 0 - Every day Scheduled install time: 03:00 Policy Setting Winning GPO No auto-restart with logged on users for scheduled automatic updates installations Disabled WSUS-52010 Re-prompt for restart with scheduled installations Enabled WSUS-52010 Wait the following period before prompting again with a scheduled restart (minutes): 30 Policy Setting Winning GPO Reschedule Automatic Updates scheduled installations Enabled WSUS-52010 Wait after system startup (minutes): 1 Policy Setting Winning GPO Specify intranet Microsoft update service location Enabled WSUS-52010 Set the intranet update service for detecting updates: http://lavender Set the intranet statistics server: http://lavender (example: http://IntranetUpd01) User Configuration Administrative Templates Control Panel/Display Policy Setting Winning GPO Hide Screen Saver tab Enabled IT-Lockout Password protect the screen saver Enabled IT-Lockout Screen Saver Enabled IT-Lockout Screen Saver executable name Enabled IT-Lockout Screen Saver executable name sstext3d.scr Policy Setting Winning GPO Screen Saver timeout Enabled IT-Lockout Number of seconds to wait to enable the Screen Saver Seconds: 1800 System/Power Management Policy Setting Winning GPO Prompt for password on resume from hibernate / suspend Enabled IT-Lockout

    Read the article

  • Active Directory authentication for Ubuntu Linux login and cifs mounting home directories...

    - by Jamie
    I've configured my Ubuntu 10.04 Server LTS Beta 2 residing on a windows network to authenticate logins using active directory, then mount a windows share to serve as their home directory. Here is what I did starting from the initial installation of Ubuntu. Download and install Ubuntu Server 10.04 LTS Beta 2 Get updates # sudo apt-get update && sudo apt-get upgrade Install an SSH server (sshd) # sudo apt-get install openssh-server Some would argue that you should "lock sshd down" by disabling root logins. I figure if you're smart enough to hack an ssh session for a root password, you're probably not going to be thwarted by the addition of PermitRootLogin no in the /etc/ssh/sshd_config file. If you're paranoid, or simply not convinced, then edit the file or give the following a spin: # (grep PermitRootLogin /etc/ssh/sshd_config && sudo sed -ri 's/(PermitRootLogin ).+/\1no/' /etc/ssh/sshd_config) || echo "PermitRootLogin not found. Add it manually." Install required packages # sudo apt-get install winbind samba smbfs smbclient ntp krb5-user Do some basic networking housecleaning in preparation for the specific package configurations to come. Determine your windows domain name, DNS server name, and IP address for the active directory server (for samba). For convenience I set environment variables for the windows domain and DNS server. For me it was (my AD IP address was 192.168.20.11): # WINDOMAIN=mydomain.local && WINDNS=srv1.$WINDOMAIN If you want to figure out what your domain and DNS server is (I was a contractor and didn't know the network) check out this helpful reference. The authentication and file sharing processes for the Windows and Linux boxes need to have their clocks agree. Do this with an NTP service, and on the server version of Ubuntu the NTP service comes installed and preconfigured. The network I was joining had the DNS server serving up the NTP service too. # sudo sed -ri "s/^(server[ \t]).+/\1$WINDNS/" /etc/ntp.conf Restart the NTP daemon # sudo /etc/init.d/ntp restart We need to christen the Linux box on the new network; this is done by editing the hosts file (the command below rewrites the 127.0.0.1 line with the host's FQDN in the windows domain): # sudo sed -ri "s/^(127\.0\.0\.1[ \t]).*/\1$(hostname).$WINDOMAIN localhost $(hostname)/" /etc/hosts Kerberos configuration. The instructions that follow here aren't to be taken literally: the values for MYDOMAIN.LOCAL and srv1.mydomain.local need to be replaced with what's appropriate for your network when you edit the files. Edit the (previously installed above) /etc/krb5.conf file. Find the [libdefaults] section and change (or add) the key value pair (and it is in UPPERCASE WHERE IT NEEDS TO BE): [libdefaults] default_realm = MYDOMAIN.LOCAL Add the following to the [realms] section of the file: MYDOMAIN.LOCAL = { kdc = srv1.mydomain.local admin_server = srv1.mydomain.local default_domain = MYDOMAIN.LOCAL } Add the following to the [domain_realm] section of the file: .mydomain.local = MYDOMAIN.LOCAL mydomain.local = MYDOMAIN.LOCAL Configure samba. When it's all said and done, I don't know where SAMBA fits in ... I used cifs to mount the windows shares ... regardless, my system works and this is how I did it. 
Replace /etc/samba/smb.conf (remember I was working from a clean distro of Ubuntu, so I wasn't worried about breaking anything): [global] security = ads realm = MYDOMAIN.LOCAL password server = 192.168.20.11 workgroup = MYDOMAIN idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = yes winbind enum groups = yes template homedir = /home/%D/%U template shell = /bin/bash client use spnego = yes client ntlmv2 auth = yes encrypt passwords = yes winbind use default domain = yes restrict anonymous = 2 Start and stop various services. # sudo /etc/init.d/winbind stop # sudo service smbd restart # sudo /etc/init.d/winbind start Setup the authentication. Edit the /etc/nsswitch.conf. Here are the contents of mine: passwd: compat winbind group: compat winbind shadow: compat winbind hosts: files dns networks: files protocols: db files services: db files ethers: db files rpc: db files Start and stop various services. # sudo /etc/init.d/winbind stop # sudo service smbd restart # sudo /etc/init.d/winbind start At this point I could login, home directories didn't exist, but I could login. Later I'll come back and add how I got the cifs automounting to work. Numerous resources were considered so I could figure this out. Here is a short list (a number of these links point to mine own questions on the topic): Samba Kerberos Active Directory WinBind Mounting Linux user home directories on CIFS server Authenticating OpenBSD against Active Directory How to use Active Directory to authenticate linux users Mounting windows shares with Active Directory permissions Using Active Directory authentication with Samba on Ubuntu 9.10 server 64bit How practical is to authenticate a Linux server against AD? Auto-mounting a windows share on Linux AD login
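    As a placeholder until I write up the automounting part, this is the sort of manual cifs mount I'm trying to automate at login (the share path and user below are made up; pam_mount or autofs would wrap the same options):

        # sudo mkdir -p /home/MYDOMAIN/jsmith
        # sudo mount -t cifs //srv1.mydomain.local/users/jsmith /home/MYDOMAIN/jsmith \
            -o username=jsmith,domain=MYDOMAIN,uid=jsmith,iocharset=utf8

    On 10.04 the mount.cifs helper comes from the smbfs package installed earlier; newer releases ship it in cifs-utils.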

    Read the article

  • NetApp erroring with: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT

    - by Sobrique
    Since a sitewide upgrade to Windows 7 on desktop, I've started having a problem with virus checking. Specifically - when doing a rename operation on a (filer hosted) CIFS share. The virus checker seems to be triggering a set of messages on the filer: [filerB: auth.trace.authenticateUser.loginTraceIP:info]: AUTH: Login attempt by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20 (server-wk8-r2). [filerB: auth.dc.trace.DCConnection.statusMsg:info]: AUTH: TraceDC- attempting authentication with domain controller \\MYDC. [filerB: auth.trace.authenticateUser.loginRejected:info]: AUTH: Login attempt by user rejected by the domain controller with error 0xc0000199: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT. [filerB: auth.trace.authenticateUser.loginTraceMsg:info]: AUTH: Delaying the response by 5 seconds due to continuous failed login attempts by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20. This seems to specifically trigger on a rename so what we think is going on is the virus checker is seeing a 'new' file, and trying to do an on-access scan. The virus checker - previously running as LocalSystem and thus sending null as it's authentication request is now looking rather like a DOS attack, and causing the filer to temporarily black list. This 5s lock out each 'access attempt' is a minor nuisance most of the time, and really quite significant for some operations - e.g. large file transfers, where every file takes 5s Having done some digging, this seems to be related to NLTM authentication: Symptoms Error message: System error 1808 has occurred. The account used is a computer account. Use your global user account or local user account to access this server. A packet trace of the failure will show the error as: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT (0xC0000199) Cause Microsoft has changed the functionality of how a Local System account identifies itself during NTLM authentication. This only impacts NTLM authentication. It does not impact Kerberos Authentication. Solution On the host, please set the following group policy entry and reboot the host. Network Security: Allow Local System to use computer identity for NTLM: Disabled Defining this group policy makes Windows Server 2008 R2 and Windows 7 function like Windows Server 2008 SP1. So we've now got a couple of workaround which aren't particularly nice - one is to change this security option. One is to disable virus checking, or otherwise exempt part of the infrastructure. And here's where I come to my request for assistance from ServerFault - what is the best way forwards? I lack Windows experience to be sure of what I'm seeing. I'm not entirely sure why NTLM is part of this picture in the first place - I thought we were using Kerberos authentication. I'm not sure how to start diagnosing or troubleshooting this. (We are going cross domain - workstation machine accounts are in a separate AD and DNS domain to my filer. Normal user authentication works fine however.) And failing that, can anyone suggest other lines of enquiry? I'd like to avoid a site wide security option change, or if I do go that way I'll need to be able to supply detailed reasoning. Likewise - disabling virus checking works as a short term workaround, and applying exclusions may help... but I'd rather not, and don't think that solves the underlying problem. EDIT: Filers in AD ldap have SPNs for: nfs/host.fully.qualified.domain nfs/host HOST/host.fully.qualified.domain HOST/host (Sorry, have to obfuscate those). 
Could it be that without a 'cifs/host.fully.qualified.domain' it's not going to work? (or some other SPN? ) Edit: As part of the searching I've been doing I've found: http://itwanderer.wordpress.com/2011/04/14/tread-lightly-kerberos-encryption-types/ Which suggests that several encryption types were disabled by default in Win7/2008R2. This might be pertinent, as we've definitely had a similar problem with Keberized NFSv4. There is a hidden option which may help some future Keberos users: options nfs.rpcsec.trace on (This hasn't given me anything yet though, so may just be NFS specific). Edit: Further digging has me tracking it back to cross domain authentication. It looks like my Windows 7 workstation (in one domain) is not getting Kerberos tickets for the other domain, in which my NetApp filer is CIFS joined. I've done this separately against a standalone server (Win2003 and Win2008) and didn't get Kerberos tickets for those either. Which means I think Kerberos might be broken, but I've no idea how to troubleshoot further. Edit: A further update: It looks like this may be down Kerberos tickets not being issued cross domain. This then triggers NTLM fallback, which then runs into this problem (since Windows 7). First port of call will be to investigate the Kerberos side of things, but in neither case do we have anything pointing at the Filer being the root cause. As such - as the storage engineer - it's out of my hands. However, if anyone can point me in the direction of troubleshooting Kerberos spanning two Windows AD domains (Kerberos Realms) then that would be appreciated. Options we're going to be considering for resolution: Amend policy option on all workstations via GPO (as above). Talking to AV vendor about the rename triggering scanning. Talking to AV vendor regarding running AV as service account. investigating Kerberos authentication (why it's not working, whether it should be).

    Read the article

  • vSphere ESX 5.5 hosts cannot connect to NFS Server

    - by Gerald
    Summary: My problem is I cannot use the QNAP NFS Server as an NFS datastore from my ESX hosts despite the hosts being able to ping it. I'm utilising a vDS with LACP uplinks for all my network traffic (including NFS) and a subnet for each vmkernel adapter. Setup: I'm evaluating vSphere and I've got two vSphere ESX 5.5 hosts (node1 and node2) and each one has 4x NICs. I've teamed them all up using LACP/802.3ad with my switch and then created a distributed switch between the two hosts with each host's LAG as the uplink. All my networking is going through the distributed switch, ideally, I want to take advantage of DRS and the redundancy. I have a domain controller VM ("Central") and vCenter VM ("vCenter") running on node1 (using node1's local datastore) with both hosts attached to the vCenter instance. Both hosts are in a vCenter datacenter and a cluster with HA and DRS currently disabled. I have a QNAP TS-669 Pro (Version 4.0.3) (TS-x69 series is on VMware Storage HCL) which I want to use as the NFS server for my NFS datastore, it has 2x NICs teamed together using 802.3ad with my switch. vmkernel.log: The error from the host's vmkernel.log is not very useful: NFS: 157: Command: (mount) Server: (10.1.2.100) IP: (10.1.2.100) Path: (/VM) Label (datastoreNAS) Options: (None) cpu9:67402)StorageApdHandler: 698: APD Handle 509bc29f-13556457 Created with lock[StorageApd0x411121] cpu10:67402)StorageApdHandler: 745: Freeing APD Handle [509bc29f-13556457] cpu10:67402)StorageApdHandler: 808: APD Handle freed! cpu10:67402)NFS: 168: NFS mount 10.1.2.100:/VM failed: Unable to connect to NFS server. Network Setup: Here is my distributed switch setup (JPG). Here are my networks. 10.1.1.0/24 VM Management (VLAN 11) 10.1.2.0/24 Storage Network (NFS, VLAN 12) 10.1.3.0/24 VM vMotion (VLAN 13) 10.1.4.0/24 VM Fault Tolerance (VLAN 14) 10.2.0.0/24 VM's Network (VLAN 20) vSphere addresses 10.1.1.1 node1 Management 10.1.1.2 node2 Management 10.1.2.1 node1 vmkernel (For NFS) 10.1.2.2 node2 vmkernel (For NFS) etc. Other addresses 10.1.2.100 QNAP TS-669 (NFS Server) 10.2.0.1 Domain Controller (VM on node1) 10.2.0.2 vCenter (VM on node1) I'm using a Cisco SRW2024P Layer-2 switch (Jumboframes enabled) with the following setup: LACP LAG1 for node1 (Ports 1 through 4) setup as VLAN trunk for VLANs 11-14,20 LACP LAG2 for my router (Ports 5 through 8) setup as VLAN trunk for VLANs 11-14,20 LACP LAG3 for node2 (Ports 9 through 12) setup as VLAN trunk for VLANs 11-14,20 LACP LAG4 for the QNAP (Ports 23 and 24) setup to accept untagged traffic into VLAN 12 Each subnet is routable to another, although, connections to the NFS server from vmk1 shouldn't need it. All other traffic (vSphere Web Client, RDP etc.) goes through this setup fine. I tested the QNAP NFS server beforehand using ESX host VMs atop of a VMware Workstation setup with a dedicated physical NIC and it had no problems. The ACL on the NFS Server share is permissive and allows all subnet ranges full access to the share. I can ping the QNAP from node1 vmk1, the adapter that should be used to NFS: ~ # vmkping -I vmk1 10.1.2.100 PING 10.1.2.100 (10.1.2.100): 56 data bytes 64 bytes from 10.1.2.100: icmp_seq=0 ttl=64 time=0.371 ms 64 bytes from 10.1.2.100: icmp_seq=1 ttl=64 time=0.161 ms 64 bytes from 10.1.2.100: icmp_seq=2 ttl=64 time=0.241 ms Netcat does not throw an error: ~ # nc -z 10.1.2.100 2049 Connection to 10.1.2.100 2049 port [tcp/nfs] succeeded! 
The routing table of node1: ~ # esxcfg-route -l VMkernel Routes: Network Netmask Gateway Interface 10.1.1.0 255.255.255.0 Local Subnet vmk0 10.1.2.0 255.255.255.0 Local Subnet vmk1 10.1.3.0 255.255.255.0 Local Subnet vmk2 10.1.4.0 255.255.255.0 Local Subnet vmk3 default 0.0.0.0 10.1.1.254 vmk0 VM Kernel NIC info ~ # esxcfg-vmknic -l Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type vmk0 133 IPv4 10.1.1.1 255.255.255.0 10.1.1.255 00:50:56:66:8e:5f 1500 65535 true STATIC vmk0 133 IPv6 fe80::250:56ff:fe66:8e5f 64 00:50:56:66:8e:5f 1500 65535 true STATIC, PREFERRED vmk1 164 IPv4 10.1.2.1 255.255.255.0 10.1.2.255 00:50:56:68:f5:1f 1500 65535 true STATIC vmk1 164 IPv6 fe80::250:56ff:fe68:f51f 64 00:50:56:68:f5:1f 1500 65535 true STATIC, PREFERRED vmk2 196 IPv4 10.1.3.1 255.255.255.0 10.1.3.255 00:50:56:66:18:95 1500 65535 true STATIC vmk2 196 IPv6 fe80::250:56ff:fe66:1895 64 00:50:56:66:18:95 1500 65535 true STATIC, PREFERRED vmk3 228 IPv4 10.1.4.1 255.255.255.0 10.1.4.255 00:50:56:72:e6:ca 1500 65535 true STATIC vmk3 228 IPv6 fe80::250:56ff:fe72:e6ca 64 00:50:56:72:e6:ca 1500 65535 true STATIC, PREFERRED Things I've tried/checked: I'm not using DNS names to connect to the NFS server. Checked MTU. Set to 9000 for vmk1, dvSwitch and Cisco switch and QNAP. Moved QNAP onto VLAN 11 (VM Management, vmk0) and gave it an appropriate address, still had same issue. Changed back afterwards of course. Tried initiating the connection of NAS datastore from vSphere Client (Connected to vCenter or directly to host), vSphere Web Client and the host's ESX Shell. All resulted in the same problem. Tried a path name of "VM", "/VM" and "/share/VM" despite not even having a connection to server. I plugged in a linux system (10.1.2.123) into a switch port configured for VLAN 12 and tried mounting the NFS share 10.1.2.100:/VM, it worked successfully and I had read-write access to it I tried disabling the firewall on the ESX host esxcli network firewall set --enabled false I'm out of ideas on what to try next. The things I'm doing differently from my VMware Workstation setup is the use of LACP with a physical switch and a virtual distributed switch between the two hosts. I'm guessing the vDS is probably the source of my troubles but I don't know how to fix this problem without eliminating it.
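    For completeness, this is roughly the sequence I run from the ESXi shell when testing (same server address, export path and datastore name as above); the vmkping size check is there because everything in the path is set to jumbo frames:

        ~ # esxcli network ip interface list                    # confirm vmk1 is on the expected dvPort and MTU
        ~ # esxcli network firewall ruleset list | grep -i nfs  # the nfsClient ruleset should be enabled
        ~ # vmkping -I vmk1 -d -s 8972 10.1.2.100               # don't-fragment ping to shake out MTU mismatches
        ~ # esxcli storage nfs add --host=10.1.2.100 --share=/VM --volume-name=datastoreNAS
        ~ # esxcli storage nfs list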

    Read the article

  • OpenVPN and PPTP on XEN VPS

    - by amiv
    I have Debian based system (Ubuntu 11.10) on XEN VPS. I've installed OpenVPN and works great. I need to install PPTP too, so did it and clients can connect, but they have no internet on client side. If I connect to VPN over PPTP I can ping and access to only my VPS by its IP, but ony that. There's no "internet" on client side. It looks it's not DNS problems (I'm using 8.8.8.8) because I can't ping known IPs. I bet the solution is simple, but don't have any idea. Any guess? /etc/pptpd.conf option /etc/ppp/pptpd-options logwtmp localip 46.38.xx.xx remoteip 10.1.0.1-10 /etc/ppp/pptpd-options name pptpd refuse-pap refuse-chap refuse-mschap require-mschap-v2 require-mppe-128 ms-dns 8.8.8.8 ms-dns 8.8.4.4 proxyarp nodefaultroute lock nobsdcomp /etc/ppp/ip-up [...] ifconfig ppp0 mtu 1400 /etc/sysctl.conf [...] net.ipv4.ip_forward=1 Command which I run: iptables -t nat -A POSTROUTING -j SNAT --to-source 46.38.xx.xx (IP of my VPS) The client can connect, first one gets IP 10.1.0.1 and DNS from Google. I bet it's iptables problem, am I right? I'm iptables noob and I don't have idea what's wrong. And here's the ifconfig and route command before client connect via PPTP: root@vps3780:~# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default xx.xx.tel.ru 0.0.0.0 UG 100 0 0 eth0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 46.38.xx.0 * 255.255.255.0 U 0 0 0 eth0 root@vps3780:~# ifconfig eth0 Link encap:Ethernet HWaddr 00:16:3e:56:xx:xx inet addr:46.38.xx.xx Bcast:0.0.0.0 Mask:255.255.255.0 inet6 addr: fe80::216:xx:xx:dfb6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:22671 errors:0 dropped:81 overruns:0 frame:0 TX packets:2266 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1813358 (1.8 MB) TX bytes:667626 (667.6 KB) Interrupt:24 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:100 errors:0 dropped:0 overruns:0 frame:0 TX packets:100 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:10778 (10.7 KB) TX bytes:10778 (10.7 KB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:602 errors:0 dropped:0 overruns:0 frame:0 TX packets:612 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:90850 (90.8 KB) TX bytes:418904 (418.9 KB) And here's the ifconfig and route command after client connect via PPTP: root@vps3780:~# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default xx.xx.tel.ru 0.0.0.0 UG 100 0 0 eth0 10.1.0.1 * 255.255.255.255 UH 0 0 0 ppp0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 46.38.xx.0 * 255.255.255.0 U 0 0 0 eth0 root@vps3780:~# ifconfig eth0 Link encap:Ethernet HWaddr 00:16:3e:56:xx:xx inet addr:46.38.xx.xx Bcast:0.0.0.0 Mask:255.255.255.0 inet6 addr: fe80::216:xx:xx:dfb6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:22989 errors:0 dropped:82 overruns:0 frame:0 TX packets:2352 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1841310 (1.8 MB) TX bytes:678456 (678.4 KB) Interrupt:24 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:112 
errors:0 dropped:0 overruns:0 frame:0 TX packets:112 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:12102 (12.1 KB) TX bytes:12102 (12.1 KB) ppp0 Link encap:Point-to-Point Protocol inet addr:46.38.xx.xx P-t-P:10.1.0.1 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1400 Metric:1 RX packets:66 errors:0 dropped:0 overruns:0 frame:0 TX packets:15 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:10028 (10.0 KB) TX bytes:660 (660.0 B) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:602 errors:0 dropped:0 overruns:0 frame:0 TX packets:612 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:90850 (90.8 KB) TX bytes:418904 (418.9 KB) And ugly iptables --list output: root@vps3780:~# iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- 10.8.0.0/24 anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable ACCEPT all -- 10.1.0.0/24 anywhere ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- 10.1.0.0/24 anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- 10.8.0.0/24 anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable And ugly iptables -t nat -L output: root@vps3780:~# iptables -t nat -L Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination SNAT all -- 10.8.0.0/24 anywhere to:46.38.xx.xx MASQUERADE all -- 10.1.0.0/24 anywhere SNAT all -- 10.1.0.0/24 anywhere to:46.38.xx.xx SNAT all -- 10.8.0.0/24 anywhere to:46.38.xx.xx SNAT all -- 10.1.0.0/24 anywhere to:46.38.xx.xx MASQUERADE all -- anywhere anywhere SNAT all -- anywhere anywhere to:46.38.xx.xx SNAT all -- 10.8.0.0/24 anywhere to:46.38.xx.xx MASQUERADE all -- anywhere anywhere MASQUERADE all -- 10.1.0.0/24 anywhere MASQUERADE all -- anywhere anywhere MASQUERADE all -- 10.1.0.0/24 anywhere As I said - OpenVPN works very good. 10.8.0.0/24 for OpenVPN (on tun0). PPTP won't work. 10.1.0.0/24 for PPTP (on ppp0). Clients can connect, but they haven't "internet". Any suggestions will be appreciated. Second whole day fighting with no results. EDIT: iptables -t filter -F - it resolved my problem :-)
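    For anyone who lands here later: flushing the whole filter table works, but it is a blunt instrument. A tighter version of the same fix (untested beyond my own VPS; adjust the interface and addresses) would be to clear the duplicated NAT rules and explicitly accept forwarding for the ppp interfaces instead of wiping everything:

        iptables -t nat -F POSTROUTING
        iptables -t nat -A POSTROUTING -s 10.1.0.0/24 -o eth0 -j SNAT --to-source 46.38.xx.xx
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source 46.38.xx.xx
        iptables -I FORWARD 1 -i ppp+ -j ACCEPT
        iptables -I FORWARD 1 -o ppp+ -m state --state RELATED,ESTABLISHED -j ACCEPT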

    Read the article

  • ESXi v5.5 is having random crashes

    - by Darkmage
    HW: Type: HP Proliant ML350 G5 RAM 22GB CPU 1 x Intel Xenon E5405 2.00GHz OP: ESXi 5.5 just updated from 5.1 to try and fix the crashes occurring on ESXi 5.1 on same hardware. I'm trying to find the error on why one of our servers is crashing, it has had two lock ups in 24 hours now. The internal error light on the front is blinking red, on the inside only "#5 and #6 page 76 manual" the "Processor 2" light "amber" and the "Power" light "green" is shining. in the logs the only errors i can see in the relevant time frame is in log under. Is this the reason? or is there anything else i can do to try and log/locate the error. from zcat syslog.6.gz | less 2014-05-26T11:55:47Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files 2014-05-26T11:55:47Z sfcbd[35064]: Failed to set recv timeout (30) for socket -1. Errno = 9 2014-05-26T11:55:47Z sfcbd[35064]: Failed to set timeout for local socket (e.g. provider) 2014-05-26T11:55:47Z sfcbd[35064]: spGetMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:55:47Z sfcbd[35064]: rcvMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files 2014-05-26T11:55:47Z sfcbd[35064]: Failed to set recv timeout (30) for socket -1. Errno = 9 2014-05-26T11:55:47Z sfcbd[35064]: Failed to set timeout for local socket (e.g. provider) 2014-05-26T11:55:47Z sfcbd[35064]: spGetMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:55:47Z sfcbd[35064]: rcvMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:53Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:57Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:01Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:04Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:15Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files 2014-05-26T11:56:17Z sfcbd[35064]: Failed to set recv timeout (30) for socket -1. Errno = 9 2014-05-26T11:56:17Z sfcbd[35064]: Failed to set timeout for local socket (e.g. 
provider) 2014-05-26T11:56:17Z sfcbd[35064]: spGetMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:56:17Z sfcbd[35064]: rcvMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files 2014-05-26T11:56:17Z sfcbd[35064]: Failed to set recv timeout (30) for socket -1. Errno = 9 2014-05-26T11:56:17Z sfcbd[35064]: Failed to set timeout for local socket (e.g. provider) 2014-05-26T11:56:17Z sfcbd[35064]: spGetMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:56:17Z sfcbd[35064]: rcvMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:23Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:27Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:31Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:44Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:44Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:44Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:44Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:46Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:48Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files -- manual
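    One thing worth noting while chasing the hardware fault: the flood of "Too many open files" messages comes from sfcbd (the CIM hardware-monitoring broker) running out of descriptors, so it may be a symptom rather than the cause. A sketch of what I can try between crashes from the ESXi shell, not a confirmed fix:

        ~ # /etc/init.d/sfcbd-watchdog status
        ~ # /etc/init.d/sfcbd-watchdog restart    # restart the CIM broker and its providers
        ~ # chkconfig sfcbd-watchdog off          # or disable it entirely if CIM-based monitoring isn't needed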

    Read the article

  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MYSQL. System and version info is as follows: Linux kernel 3.0.0.-12 Apache/2.2.20 MySQL Ver 14.14.Distrib 5.1.58 I am running a few websites on this server, some HTML only, some PHP/MySQL. THe [problem is that response time is very slow, both on static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given, this makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and abolve all the problem is not solved by using IP numbers instead of URL's. So there is no DNS lookup issue. I have modified the log format by showing the response times and sometimes a files takes 12 seconds to be served, see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but is is not even the only issue here, some other files takes a long time to be served too, but do not show a long response time in the log file. So probably different tissues are involved here. I cannot find a solution until now, any suggestions??? THanx in advance, Klaas link to screenshot picture from access logfile Some extra configuration info: apache2.conf (comment is removed) LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType text/plain HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ And the virtual hostfile for one of the slow sites, in fact it is pretty straightforward... <VirtualHost *:80> ServerAdmin [email protected] ServerSignature EMail ServerName toenjoy.drsklaus.nl DocumentRoot /var/www/toenjoy.drsklaus.nl <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/toenjoy.drsklaus.nl/> Options Indexes FollowSymLinks MultiViews AllowOverride AuthConfig AuthType Basic AuthName "To Enjoy" AuthUserFile /etc/.htpasswd Require user petraaa Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> And the output of free -m: klaas@ubuntu-server:/etc/apache2$ free -m total used free shared buffers cached Mem: 1997 1401 595 0 144 1017 -/+ buffers/cache: 238 1758 Swap: 2035 0 2035 and I have no indication that swapping occurs on the moments the site is slow. I have runned top and it does not appear to be a CPU issue. I have the impression that the spawning of a apache thread could maybe be the bottleneck but it is just a suggestion. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but occurs again! And not only with Apache, also connecting using SSH takes a tremendous time, sometimes it takes up to 15 seconds before the keyphrase is asked for. Also scp works very slowly. The behavious is really unpredoctable and makes the server very hard to use. Any ideas...?
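    Since SSH logins now stall for 10-15 seconds as well, I'm starting to suspect a name-resolution delay rather than Apache itself. These are the checks I plan to run next (the client address is a placeholder):

        # time nslookup 192.168.1.50            # reverse lookup of a connecting client; a long stall implicates DNS
        # grep -iE '^(UseDNS|GSSAPIAuthentication)' /etc/ssh/sshd_config
        # time ssh -v localhost true            # the verbose output shows which phase eats the time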

    Read the article

  • HgWebDir push permission denied error

    - by Gregg
    I have a new Fedora 12 server that I am attempting to set up Mercurial on. I have yum installed mercurial, and most things seem to work fine. However, after setting up hgwebdir.cgi through apache, I am unable to do an hg push to the only repo currently being hosted. The error I get back is: searching for changes abort: HTTP Error 500: Permission denied: .hg/store/lock httpd is running as user apache UID PID PPID C STIME TTY TIME CMD root 1691 1 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1694 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1695 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1696 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1697 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1698 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1699 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1700 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1701 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd and I set permissions so that the apache user owns the whole repo and everything. In a last ditch attempt, I even made the repo globally writable. [root@builds .hg]# ll total 424K drwxrwxrwx. 3 apache apache 4.0K 2010-04-19 14:43 . drwxrwxrwx. 19 apache apache 4.0K 2010-04-15 13:33 .. -rw-rw-rw-. 2 apache apache 57 2010-04-13 11:42 00changelog.i -rw-rw-rw-. 1 apache apache 93 2010-04-16 15:33 branchheads.cache -rw-rw-rw-. 1 apache apache 192K 2010-04-15 13:33 dirstate -rw-r--r--. 1 apache apache 156 2010-04-19 14:43 hgrc -rw-rw-rw-. 1 apache apache 42 2010-04-15 13:33 last-message.txt -rw-rw-rw-. 2 apache apache 23 2010-04-13 11:42 requires drwxrwxrwx. 4 apache apache 4.0K 2010-04-19 11:26 store -rw-rw-rw-. 1 apache apache 45 2010-04-14 14:08 tags.cache -rw-rw-rw-. 1 apache apache 7 2010-04-16 15:33 undo.branch -rw-rw-rw-. 1 apache apache 192K 2010-04-16 15:33 undo.dirstate [root@builds .hg]# cd store [root@builds store]# ll total 308K drwxrwxrwx. 4 apache apache 4.0K 2010-04-19 11:26 . drwxrwxrwx. 3 apache apache 4.0K 2010-04-19 14:43 .. -rw-rw-rw-. 1 apache apache 20K 2010-04-16 15:33 00changelog.i -rw-rw-rw-. 1 apache apache 81K 2010-04-16 15:33 00manifest.i drwxrwxrwx. 17 apache apache 4.0K 2010-04-13 11:47 data drwxrwxrwx. 3 apache apache 4.0K 2010-04-13 11:43 dh -rw-rw-rw-. 2 apache apache 177K 2010-04-15 11:03 fncache -rw-rw-rw-. 1 apache apache 67 2010-04-16 15:33 undo I have a clone of the repo elsewhere on the machine running as a different user. If I set the the default value in the [paths] section of the clones hgrc file to the local filepath on the server, the push works fine, but if I switch it to use the url, I get the error every time. Some possible quirks in how I've set this up... hgwebdir.cgi is sitting in /var/www/cgi-bin and the repo is a child of /opt/hg. I turned off suexec as well, and this doesn't seem to clear up the issue. The only line I added in the apache config to get hgwebdir running is: ScriptAlias /hg "/var/www/cgi-bin/hgwebdir.cgi" The hgweb.config is also in /var/www/cgi-bin and it's contents are: [collections] /opt/hg = /opt/hg [trusted] users = * [web] baseurl = /hg push_ssl = false allow_push = * The repo browser is working fine, it's just push that doesn't work. Apache error_log doesn't have anything in about this error at all.
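    One more data point I should have mentioned: this is Fedora 12, so SELinux is enforcing by default, and httpd writing into the repo's .hg/store under /opt/hg would be denied no matter what the file permissions say. A quick way to test that theory (standard SELinux tools; the context type name may differ slightly between policy versions):

        # getenforce                                    # confirm SELinux is Enforcing
        # setenforce 0                                  # go permissive temporarily, then retry the hg push
        # chcon -R -t httpd_sys_rw_content_t /opt/hg    # if that was it, relabel the repo and re-enable enforcing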

    Read the article

  • C# PowerShell output reader iterator getting modified when pipeline closed and disposed.

    - by scope-creep
    Hello, I'm calling a powershell script from C#. The script is pretty small and is "gps;$host.SetShouldExit(9)", which list process, and then send back an exit code to be captured by the PSHost object. The problem I have is when the pipeline has been stopped and disposed, the output reader PSHost collection still seems to be written to, and is filling up. So when I try and copy it to my own output object, it craps out with a OutOfMemoryException when I try to iterate over it. Sometimes it will except with a Collection was modified message. Here is the code. private void ProcessAndExecuteBlock(ScriptBlock Block) { Collection<PSObject> PSCollection = new Collection<PSObject>(); Collection<Object> PSErrorCollection = new Collection<Object>(); Boolean Error = false; int ExitCode=0; //Send for exection. ExecuteScript(Block.Script); // Process the waithandles. while (PExecutor.PLine.PipelineStateInfo.State == PipelineState.Running) { // Wait for either error or data waithandle. switch (WaitHandle.WaitAny(PExecutor.Hand)) { // Data case 0: Collection<PSObject> data = PExecutor.PLine.Output.NonBlockingRead(); if (data.Count > 0) { for (int cnt = 0; cnt <= (data.Count-1); cnt++) { PSCollection.Add(data[cnt]); } } // Check to see if the pipeline has been closed. if (PExecutor.PLine.Output.EndOfPipeline) { // Bring back the exit code. ExitCode = RHost.ExitCode; } break; case 1: Collection<object> Errordata = PExecutor.PLine.Error.NonBlockingRead(); if (Errordata.Count > 0) { Error = true; for (int count = 0; count <= (Errordata.Count - 1); count++) { PSErrorCollection.Add(Errordata[count]); } } break; } } PExecutor.Stop(); // Create the Execution Return block ExecutionResults ER = new ExecutionResults(Block.RuleGuid,Block.SubRuleGuid, Block.MessageIdentfier); ER.ExitCode = ExitCode; // Add in the data results. lock (ReadSync) { if (PSCollection.Count > 0) { ER.DataAdd(PSCollection); } } // Add in the error data if any. if (Error) { if (PSErrorCollection.Count > 0) { ER.ErrorAdd(PSErrorCollection); } else { ER.InError = true; } } // We have finished, so enque the block back. EnQueueOutput(ER); } and this is the PipelineExecutor class which setups the pipeline for execution. public class PipelineExecutor { private Pipeline pipeline; private WaitHandle[] Handles; public Pipeline PLine { get { return pipeline; } } public WaitHandle[] Hand { get { return Handles; } } public PipelineExecutor(Runspace runSpace, string command) { pipeline = runSpace.CreatePipeline(command); Handles = new WaitHandle[2]; Handles[0] = pipeline.Output.WaitHandle; Handles[1] = pipeline.Error.WaitHandle; } public void Start() { if (pipeline.PipelineStateInfo.State == PipelineState.NotStarted) { pipeline.Input.Close(); pipeline.InvokeAsync(); } } public void Stop() { pipeline.StopAsync(); } } An this is the DataAdd method, where the exception arises. public void DataAdd(Collection<PSObject> Data) { foreach (PSObject Ps in Data) { Data.Add(Ps); } } I put a for loop around the Data.Add, and the Collection filled up with 600k+ so feels like the gps command is still running, but why. Any ideas. Thanks in advance.

    Read the article

  • Application stopped unexpectedly at launch

    - by Chris Stryker
    I've run this on a device and on the emulator. The app stops unexpectedly on both. I have not a clue what is wrong currently. It uses Google API Maps I compiled with Google Api 7. I followed this tutorial http://developer.android.com/guide/tutorials/views/hello-mapview.html (made some alterations clearly) I did use the correct API Key That the final apk is signed with This is the source(If you compile it shouldn't work as it is unsigned) This is the compiled signed apk Log 03-21 00:30:38.912: INFO/ActivityManager(54): Starting activity: Intent { act=android.intent.action.MAIN flg=0x10000000 cmp=com.chris.stryker.worldly/.com.poppoob.WorldlyMap } 03-21 00:30:39.173: INFO/ActivityManager(54): Start proc com.chris.stryker.worldly for activity com.chris.stryker.worldly/.com.poppoob.WorldlyMap: pid=287 uid=10031 gids={3003, 1015} 03-21 00:30:39.532: DEBUG/ddm-heap(287): Got feature list request 03-21 00:30:40.185: WARN/dalvikvm(287): Unable to resolve superclass of Lcom/chris/stryker/worldly/com/poppoob/WorldlyMap; (17) 03-21 00:30:40.193: WARN/dalvikvm(287): Link of class 'Lcom/chris/stryker/worldly/com/poppoob/WorldlyMap;' failed 03-21 00:30:40.205: DEBUG/AndroidRuntime(287): Shutting down VM 03-21 00:30:40.223: WARN/dalvikvm(287): threadid=3: thread exiting with uncaught exception (group=0x4001b188) 03-21 00:30:40.223: ERROR/AndroidRuntime(287): Uncaught handler: thread main exiting due to uncaught exception 03-21 00:30:40.252: ERROR/AndroidRuntime(287): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.chris.stryker.worldly/com.chris.stryker.worldly.com.poppoob.WorldlyMap}: java.lang.ClassNotFoundException: com.chris.stryker.worldly.com.poppoob.WorldlyMap in loader dalvik.system.PathClassLoader@45a13938 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2417) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.app.ActivityThread.access$2200(ActivityThread.java:119) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.os.Handler.dispatchMessage(Handler.java:99) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.os.Looper.loop(Looper.java:123) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.app.ActivityThread.main(ActivityThread.java:4363) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at java.lang.reflect.Method.invokeNative(Native Method) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at java.lang.reflect.Method.invoke(Method.java:521) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at dalvik.system.NativeStart.main(Native Method) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): Caused by: java.lang.ClassNotFoundException: com.chris.stryker.worldly.com.poppoob.WorldlyMap in loader dalvik.system.PathClassLoader@45a13938 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:243) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at java.lang.ClassLoader.loadClass(ClassLoader.java:573) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at 
java.lang.ClassLoader.loadClass(ClassLoader.java:532) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.app.Instrumentation.newActivity(Instrumentation.java:1021) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2409) 03-21 00:30:40.252: ERROR/AndroidRuntime(287): ... 11 more 03-21 00:30:40.300: INFO/Process(54): Sending signal. PID: 287 SIG: 3 03-21 00:30:40.312: INFO/dalvikvm(287): threadid=7: reacting to signal 3 03-21 00:30:40.396: INFO/dalvikvm(287): Wrote stack trace to '/data/anr/traces.txt' 03-21 00:30:49.002: WARN/ActivityManager(54): Launch timeout has expired, giving up wake lock! 03-21 00:30:49.685: WARN/ActivityManager(54): Activity idle timeout for HistoryRecord{458ab6d0 com.chris.stryker.worldly/.com.poppoob.WorldlyMap}

    Read the article

  • Running multiple image manipulations in parallel causing OutOfMemory exception

    - by Tom
    I am working on a site where I need to be able to split and image around 4000x6000 into 4 parts (amongst many other tasks) and I need this to be as quick as possible for multiple users. My current code for doing this is var bitmaps = new RenderTargetBitmap[elements.Length]; using (var stream = blobService.Stream(key)) { BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.StreamSource = stream; bi.EndInit(); for (var i = 0; i < elements.Length; i++) { var element = elements[i]; TransformGroup transformGroup = new TransformGroup(); TranslateTransform translateTransform = new TranslateTransform(); translateTransform.X = -element.Left; translateTransform.Y = -element.Top; transformGroup.Children.Add(translateTransform); DrawingVisual vis = new DrawingVisual(); DrawingContext cont = vis.RenderOpen(); cont.PushTransform(transformGroup); cont.DrawImage(bi, new Rect(new Size(bi.PixelWidth, bi.PixelHeight))); cont.Close(); RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default); rtb.Render(vis); bitmaps[i] = rtb; } } for (var i = 0; i < bitmaps.Length; i++) { using (MemoryStream ms = new MemoryStream()) { PngBitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(bitmaps[i])); encoder.Save(ms); var regionKey = WebPath.Variant(key, elements[i].Id); saveBlobService.Save("image/png", regionKey, ms); } } I am running multiple threads which take jobs off a queue. I am finding that if this part of code is hit by 4 threads at once I get an OutOfMemory exception. I can stop this happening by wrapping all the code above in a lock(obj) but this isn't ideal. I have tried wrapping just the first using block (where the file is read from disk and split) but I still get the out of memory exceptions (this part of the code executes quite quickly). I this normal considering the amount of memory this should be taking up? Are there any optimisations I could make? Can I increase the memory available? UPDATE: My new code as per Moozhe's help public static void GenerateRegions(this IBlobService blobService, string key, Element[] elements) { using (var stream = blobService.Stream(key)) { foreach (var element in elements) { stream.Position = 0; BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.SourceRect = new Int32Rect(element.Left, element.Top, element.Width, element.Height); bi.StreamSource = stream; bi.EndInit(); DrawingVisual vis = new DrawingVisual(); DrawingContext cont = vis.RenderOpen(); cont.DrawImage(bi, new Rect(new Size(element.Width, element.Height))); cont.Close(); RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default); rtb.Render(vis); using (MemoryStream ms = new MemoryStream()) { PngBitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(rtb)); encoder.Save(ms); var regionKey = WebPath.Variant(key, element.Id); blobService.Save("image/png", regionKey, ms); } } } }

    Read the article

  • Reading off a socket until end of line C#?

    - by Omar Kooheji
    I'm trying to write a service that listens on a TCP socket on a given port until an end of line is received, and then executes a command based on the "line" that was received. I've followed a basic socket programming tutorial for C# and have come up with the following code to listen to a socket:

        public void StartListening()
        {
            _log.Debug("Creating Maing TCP Listen Socket");
            _mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, _port);
            _log.Debug("Binding to local IP Address");
            _mainSocket.Bind(ipLocal);
            _log.DebugFormat("Listening to port {0}", _port);
            _mainSocket.Listen(10);
            _log.Debug("Creating Asynchronous callback for client connections");
            _mainSocket.BeginAccept(new AsyncCallback(OnClientConnect), null);
        }

        public void OnClientConnect(IAsyncResult asyn)
        {
            try
            {
                _log.Debug("OnClientConnect Creating worker socket");
                Socket workerSocket = _mainSocket.EndAccept(asyn);
                _log.Debug("Adding worker socket to list");
                _workerSockets.Add(workerSocket);
                _log.Debug("Waiting For Data");
                WaitForData(workerSocket);
                _log.DebugFormat("Clients Connected [{0}]", _workerSockets.Count);
                _mainSocket.BeginAccept(new AsyncCallback(OnClientConnect), null);
            }
            catch (ObjectDisposedException)
            {
                _log.Error("OnClientConnection: Socket has been closed\n");
            }
            catch (SocketException se)
            {
                _log.Error("Socket Exception", se);
            }
        }

        public class SocketPacket
        {
            private System.Net.Sockets.Socket _currentSocket;
            public System.Net.Sockets.Socket CurrentSocket
            {
                get { return _currentSocket; }
                set { _currentSocket = value; }
            }

            private byte[] _dataBuffer = new byte[1];
            public byte[] DataBuffer
            {
                get { return _dataBuffer; }
                set { _dataBuffer = value; }
            }
        }

        private void WaitForData(Socket workerSocket)
        {
            _log.Debug("Entering WaitForData");
            try
            {
                lock (this)
                {
                    if (_workerCallback == null)
                    {
                        _log.Debug("Initializing worker callback to OnDataRecieved");
                        _workerCallback = new AsyncCallback(OnDataRecieved);
                    }
                }
                SocketPacket socketPacket = new SocketPacket();
                socketPacket.CurrentSocket = workerSocket;
                workerSocket.BeginReceive(socketPacket.DataBuffer, 0, socketPacket.DataBuffer.Length, SocketFlags.None, _workerCallback, socketPacket);
            }
            catch (SocketException se)
            {
                _log.Error("Socket Exception", se);
            }
        }

        public void OnDataRecieved(IAsyncResult asyn)
        {
            SocketPacket socketData = (SocketPacket)asyn.AsyncState;
            try
            {
                int iRx = socketData.CurrentSocket.EndReceive(asyn);
                char[] chars = new char[iRx + 1];
                _log.DebugFormat("Created Char array to hold incomming data. [{0}]", iRx + 1);
                System.Text.Decoder decoder = System.Text.Encoding.UTF8.GetDecoder();
                int charLength = decoder.GetChars(socketData.DataBuffer, 0, iRx, chars, 0);
                _log.DebugFormat("Read [{0}] characters", charLength);
                String data = new String(chars);
                _log.DebugFormat("Read in String \"{0}\"", data);
                WaitForData(socketData.CurrentSocket);
            }
            catch (ObjectDisposedException)
            {
                _log.Error("OnDataReceived: Socket has been closed. Removing Socket");
                _workerSockets.Remove(socketData.CurrentSocket);
            }
            catch (SocketException se)
            {
                _log.Error("SocketException:", se);
                _workerSockets.Remove(socketData.CurrentSocket);
            }
        }

    I thought this would be a good basis for what I wanted to do, but the code I started from appended the incoming characters to a text box one by one and didn't do anything else with them, which doesn't really work for what I want. My main issue is the decoupling of the OnDataRecieved method from the WaitForData method, which means I'm having issues building up a string (I would use a single StringBuilder, but I can accept multiple connections, so that doesn't really work). Ideally I'd like to loop while listening to a socket until I see an end-of-line character and then call a method with the resulting string as a parameter. What's the best way to go about doing this?
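    One common pattern for this, shown here only as a rough sketch (the _lineBuffers dictionary and HandleLine method are new members you would add yourself; they are not part of the code above), is to keep one StringBuilder per worker socket and only act once a newline arrives. OnDataRecieved would call BufferIncoming with the decoded string instead of just logging it; the snippet needs System.Collections.Generic, System.Net.Sockets and System.Text:

        // One pending-line buffer per connected socket.
        private readonly Dictionary<Socket, StringBuilder> _lineBuffers = new Dictionary<Socket, StringBuilder>();

        private void BufferIncoming(Socket socket, string data)
        {
            StringBuilder buffer;
            lock (_lineBuffers)
            {
                if (!_lineBuffers.TryGetValue(socket, out buffer))
                {
                    buffer = new StringBuilder();
                    _lineBuffers[socket] = buffer;
                }
            }
            // Receives for a single socket arrive one at a time (BeginReceive is only
            // re-armed after this runs), so appending outside the lock is safe per connection.
            buffer.Append(data);
            int newline;
            while ((newline = buffer.ToString().IndexOf('\n')) >= 0)
            {
                string line = buffer.ToString(0, newline).TrimEnd('\r');
                buffer.Remove(0, newline + 1);
                HandleLine(socket, line);
            }
        }

        private void HandleLine(Socket socket, string line)
        {
            _log.DebugFormat("Complete line received: \"{0}\"", line);
            // Dispatch the command represented by 'line' here.
        }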

    Read the article

  • Why can't I reference child entities with part of the parent entity's composite key

    - by tigermain
    I am trying to reference some child entities with part of the parent's composite key, not all of it; why can't I? This happens when I use the following mapping (the two-column key) instead of the commented-out three-column key. I get the following error:

        Foreign key in table VolatileEventContent must have same number of columns as referenced primary key in table LocationSearchView

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="JeanieMaster.Domain.Entities" assembly="JeanieMaster.Domain">
          <class name="LocationSearchView" table="LocationSearchView">
            <composite-id>
              <key-property name="LocationId" type="Int32"></key-property>
              <key-property name="ContentProviderId" type="Int32"></key-property>
              <key-property name="CategoryId" type="Int32"></key-property>
            </composite-id>
            <property name="CompanyName" type="String" not-null="true" update="false" insert="false"/>
            <property name="Description" type="String" not-null="true" update="false" insert="false"/>
            <property name="CategoryId" type="Int32" not-null="true" update="false" insert="false"/>
            <property name="ContentProviderId" type="Int32" not-null="true" update="false" insert="false"/>
            <property name="LocationId" type="Int32" not-null="true" update="false" insert="false"/>
            <property name="Latitude" type="Double" update="false" insert="false" />
            <property name="Longitude" type="Double" update="false" insert="false" />
            <bag name="Events" table="VolatileEventContent" where="DeactivatedOn IS NULL" order-by="StartDate DESC" lazy="false" cascade="none">
              <key>
                <column name="LocationId"></column>
                <column name="ContentProviderId"></column>
                <!--<column name="LocationId"></column>
                <column name="ContentProviderId"></column>
                <column name="CategoryId"></column>-->
              </key>
              <one-to-many class="Event" column="VolatileEventContentId"></one-to-many>
            </bag>
          </class>
        </hibernate-mapping>

    And the VolatileEventContent mapping file:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="JeanieMaster.Domain.Entities" assembly="JeanieMaster.Domain">
          <class name="Event" table="VolatileEventContent" select-before-update="false" optimistic-lock="none">
            <composite-id>
              <key-property name="LocationId" type="Int32"></key-property>
              <key-property name="ContentProviderId" type="Int32"></key-property>
            </composite-id>
            <property name="Description" type="String" not-null="true" update="false" insert="false"/>
            <property name="StartDate" type="DateTime" not-null="true" update="false" insert="false" />
            <property name="EndDate" type="DateTime" not-null="true" update="false" insert="false" />
            <property name="CreatedOn" type="DateTime" not-null="true" update="false" insert="false" />
            <property name="ModifiedOn" type="DateTime" not-null="false" update="false" insert="false" />
            <many-to-one name="Location" class="Location" column="LocationId" />
            <bag name="Artistes" table="EventArtiste" lazy="false" cascade="none">
              <key name="VolatileEventContentId" />
              <many-to-many class="Artiste" column="ArtisteId" ></many-to-many>
            </bag>
          </class>
        </hibernate-mapping>
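    If the bag really must join on just LocationId and ContentProviderId rather than the full three-column primary key, one approach that is sometimes used, sketched here as an untested fragment (it assumes your NHibernate version supports <properties> groups and property-ref on <key>, and the group name LocationProviderKey is invented for illustration), is to group those two columns on the parent and point the collection key at that group instead of at the primary key:

        <!-- Hypothetical fragment for the LocationSearchView mapping, not the author's file -->
        <properties name="LocationProviderKey">
          <property name="LocationId" type="Int32" not-null="true" update="false" insert="false"/>
          <property name="ContentProviderId" type="Int32" not-null="true" update="false" insert="false"/>
        </properties>

        <bag name="Events" table="VolatileEventContent" where="DeactivatedOn IS NULL" order-by="StartDate DESC" lazy="false" cascade="none">
          <!-- property-ref tells NHibernate the foreign key targets the group above, not the primary key -->
          <key property-ref="LocationProviderKey">
            <column name="LocationId"/>
            <column name="ContentProviderId"/>
          </key>
          <one-to-many class="Event"/>
        </bag>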

    Read the article

  • Overflow exception while performing parallel factorization using the .NET Task Parallel Library (TPL)

    - by Aviad P.
    Hello, I'm trying to write a not-so-smart factorization program and to do it in parallel using the TPL. However, after about 15 minutes of running on a Core 2 Duo machine, I get an aggregate exception with an overflow exception inside it. All the entries in the stack trace are part of the .NET Framework; the overflow does not come from my code. Any help would be appreciated in figuring out why this happens. Here's the commented code, hopefully it's simple enough to understand:

        class Program
        {
            static List<Tuple<BigInteger, int>> factors = new List<Tuple<BigInteger, int>>();

            static void Main(string[] args)
            {
                BigInteger theNumber = BigInteger.Parse(
                    "653872562986528347561038675107510176501827650178351386656875178" +
                    "568165317809518359617865178659815012571026531984659218451608845" +
                    "719856107834513527");
                Stopwatch sw = new Stopwatch();
                bool isComposite = false;
                sw.Start();
                do
                {
                    /* Print out the number we are currently working on. */
                    Console.WriteLine(theNumber);
                    /* Find a factor, stop when at least one is found (using the Any operator). */
                    isComposite = Range(theNumber)
                        .AsParallel()
                        .Any(x => CheckAndStoreFactor(theNumber, x));
                    /* Of the factors found, take the one with the lowest base. */
                    var factor = factors.OrderBy(x => x.Item1).First();
                    Console.WriteLine(factor);
                    /* Divide the number by the factor. */
                    theNumber = BigInteger.Divide(
                        theNumber,
                        BigInteger.Pow(factor.Item1, factor.Item2));
                    /* Clear the discovered factors cache, and keep looking. */
                    factors.Clear();
                } while (isComposite);
                sw.Stop();
                Console.WriteLine(isComposite + " " + sw.Elapsed);
            }

            static IEnumerable<BigInteger> Range(BigInteger squareOfTarget)
            {
                BigInteger two = BigInteger.Parse("2");
                BigInteger element = BigInteger.Parse("3");
                while (element * element < squareOfTarget)
                {
                    yield return element;
                    element = BigInteger.Add(element, two);
                }
            }

            static bool CheckAndStoreFactor(BigInteger candidate, BigInteger factor)
            {
                BigInteger remainder, dividend = candidate;
                int exponent = 0;
                do
                {
                    dividend = BigInteger.DivRem(dividend, factor, out remainder);
                    if (remainder.IsZero)
                    {
                        exponent++;
                    }
                } while (remainder.IsZero);
                if (exponent > 0)
                {
                    lock (factors)
                    {
                        factors.Add(Tuple.Create(factor, exponent));
                    }
                }
                return exponent > 0;
            }
        }

    Here's the exception thrown:

        Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.OverflowException: Arithmetic operation resulted in an overflow.
           at System.Linq.Parallel.PartitionedDataSource`1.ContiguousChunkLazyEnumerator.MoveNext(T& currentElement, Int32& currentKey)
           at System.Linq.Parallel.AnyAllSearchOperator`1.AnyAllSearchOperatorEnumerator`1.MoveNext(Boolean& currentElement, Int32& currentKey)
           at System.Linq.Parallel.StopAndGoSpoolingTask`2.SpoolingWork()
           at System.Linq.Parallel.SpoolingTaskBase.Work()
           at System.Linq.Parallel.QueryTask.BaseWork(Object unused)
           at System.Linq.Parallel.QueryTask.<.cctor>b__0(Object o)
           at System.Threading.Tasks.Task.InnerInvoke()
           at System.Threading.Tasks.Task.Execute()
           --- End of inner exception stack trace ---
           at System.Linq.Parallel.QueryTaskGroupState.QueryEnd(Boolean userInitiatedDispose)
           at System.Linq.Parallel.SpoolingTask.SpoolStopAndGo[TInputOutput,TIgnoreKey](QueryTaskGroupState groupState, PartitionedStream`2 partitions, SynchronousChannel`1[] channels, TaskScheduler taskScheduler)
           at System.Linq.Parallel.DefaultMergeHelper`2.System.Linq.Parallel.IMergeHelper<TInputOutput>.Execute()
           at System.Linq.Parallel.MergeExecutor`1.Execute[TKey](PartitionedStream`2 partitions, Boolean ignoreOutput, ParallelMergeOptions options, TaskScheduler taskScheduler, Boolean isOrdered, CancellationState cancellationState, Int32 queryId)
           at System.Linq.Parallel.PartitionedStreamMerger`1.Receive[TKey](PartitionedStream`2 partitionedStream)
           at System.Linq.Parallel.AnyAllSearchOperator`1.WrapPartitionedStream[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, Boolean preferStriping, QuerySettings settings)
           at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.ChildResultsRecipient.Receive[TKey](PartitionedStream`2 inputStream)
           at System.Linq.Parallel.ScanQueryOperator`1.ScanEnumerableQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient)
           at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient)
           at System.Linq.Parallel.QueryOperator`1.GetOpenedEnumerator(Nullable`1 mergeOptions, Boolean suppressOrder, Boolean forEffect, QuerySettings querySettings)
           at System.Linq.Parallel.QueryOpeningEnumerator`1.OpenQuery()
           at System.Linq.Parallel.QueryOpeningEnumerator`1.MoveNext()
           at System.Linq.Parallel.AnyAllSearchOperator`1.Aggregate()
           at System.Linq.ParallelEnumerable.Any[TSource](ParallelQuery`1 source, Func`2 predicate)
           at PFact.Program.Main(String[] args) in d:\myprojects\PFact\PFact\Program.cs:line 34

    Any help would be appreciated. Thanks!
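    The OverflowException appears to come from PLINQ itself: the partitioning enumerator in the stack trace tracks each element's position with an Int32 index, and a candidate range this large eventually yields more than int.MaxValue elements from a single AsParallel() query. One possible workaround, shown as a minimal sketch rather than a drop-in fix (FindSmallestFactor and the chunk size are illustrative names and values, not from the original program), is to run each PLINQ query over a bounded chunk of candidates so no single query ever enumerates that many elements:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Numerics;

        static class ChunkedFactoring
        {
            // Scan odd candidates in bounded batches; each batch is a separate PLINQ query,
            // so the Int32 element index inside PLINQ cannot overflow.
            public static BigInteger? FindSmallestFactor(BigInteger theNumber, int chunkSize = 1000000)
            {
                BigInteger candidate = 3;
                while (candidate * candidate <= theNumber)
                {
                    BigInteger start = candidate;
                    var hits = ParallelEnumerable.Range(0, chunkSize)
                        .Select(i => start + 2 * i)
                        .Where(x => x * x <= theNumber && BigInteger.Remainder(theNumber, x).IsZero)
                        .ToList();
                    if (hits.Count > 0)
                    {
                        return hits.Min(); // smallest divisor found; chunks are scanned in ascending order
                    }
                    candidate += 2 * chunkSize;
                }
                return null; // no divisor up to the square root, so theNumber is prime
            }
        }

    Main would then call FindSmallestFactor in a loop, dividing out each factor as before, instead of running one unbounded Any() query over the whole range.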

    Read the article

  • Strategy and AI for the game 'Proximity'

    - by smci
    'Proximity' is a strategy game of territorial domination similar to Othello, Go and Risk. Two players; uses a 10x12 hex grid. The game was invented by Brian Cable in 2007. It seems to be a worthy game for discussing a) optimal strategy and then b) how to build an AI.

    Strategies are going to be probabilistic or heuristic-based, due to the randomness factor and the high branching factor (starts out at 120), so it will be kind of hard to compare objectively. A compute time limit of 5s per turn seems reasonable.

    Game: Flash version here and many copies elsewhere on the web. Rules: here.

    Object: to have control of the most armies after all tiles have been placed. Each turn you receive a randomly numbered tile (value between 1 and 20 armies) to place on any vacant board space. If this tile is adjacent to any ally tiles, it will strengthen each tile's defenses +1 (up to a max value of 20). If it is adjacent to any enemy tiles, it will take control over them if its number is higher than the number on the enemy tile.

    Thoughts on strategy: here are some initial thoughts; setting the computer AI to Expert will probably teach a lot:

      - minimizing your perimeter seems to be a good strategy, to prevent flips and minimize worst-case damage
      - like in Go, leaving holes inside your formation is lethal, only more so with the hex grid because you can lose armies on up to 6 squares in one move
      - low-numbered tiles are a liability, so place them away from your main territory, near the board edges and scattered. You can also use low-numbered tiles to plug holes in your formation, or make small gains along the perimeter which the opponent will not tend to bother attacking.
      - a triangle formation of three pieces is strong since they mutually reinforce, and also reduce the perimeter
      - each tile can be flipped at most 6 times, i.e. when its neighbor tiles are occupied. Control of a formation can flow back and forth. Sometimes you lose part of a formation and plug any holes to render that part of the board 'dead' and lock in your territory / prevent further losses.
      - low-numbered tiles are obvious-but-low-valued liabilities, but high-numbered tiles can be bigger liabilities if they get flipped (which is harder). One lucky play with a 20-army tile can cause a swing of 200 (from +100 to -100 armies). So tile placement will have both offensive and defensive considerations.

    Comments 1, 2 and 4 seem to resemble a minimax strategy where we minimize the maximum expected possible loss (modified by some probabilistic consideration of the value ß the opponent can get from 1..20, i.e. a structure which can only be flipped by a ß=20 tile is 'nearly impregnable'). I'm not clear what the implications of comments 3, 5 and 6 are for optimal strategy. Interested in comments from Go, Chess or Othello players.

    (The sequel ProximityHD for XBox Live allows 4-player cooperative or competitive local multiplayer, which increases the branching factor since you now have 5 tiles in your hand at any given time, of which you can only play one. Reinforcement of ally tiles is increased to +2 per ally.)
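    To make the minimax/greedy flavour of these points concrete, here is a purely hypothetical one-ply placement heuristic; Board, Hex and their members are invented for illustration and are not taken from any real Proximity implementation. It scores each vacant hex by immediate reinforcement, likely flips and perimeter exposure, trading the offensive and defensive considerations described above:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical board model, invented purely for this sketch.
        class Hex
        {
            public int Owner;  // 0 = vacant, 1 or 2 = player
            public int Armies; // 1..20
        }

        class Board
        {
            private readonly Dictionary<Hex, List<Hex>> _adjacency = new Dictionary<Hex, List<Hex>>();
            public IEnumerable<Hex> EmptyHexes() { return _adjacency.Keys.Where(h => h.Owner == 0); }
            public IEnumerable<Hex> Neighbours(Hex hex) { return _adjacency[hex]; }
        }

        static class PlacementHeuristic
        {
            // Greedy one-ply scoring of each vacant hex for the tile just drawn.
            public static Hex ChoosePlacement(Board board, int tileValue, int player)
            {
                Hex best = null;
                double bestScore = double.NegativeInfinity;
                foreach (var hex in board.EmptyHexes())
                {
                    double score = 0;
                    int exposedSides = 0;
                    foreach (var n in board.Neighbours(hex))
                    {
                        if (n.Owner == player)
                        {
                            score += 1;               // +1 reinforcement to each ally neighbour
                        }
                        else if (n.Owner != 0 && tileValue > n.Armies)
                        {
                            score += 2 * n.Armies;    // a flip swings armies both ways
                        }
                        else if (n.Owner == 0)
                        {
                            exposedSides++;           // future attack surface
                        }
                    }
                    // Penalize perimeter exposure in proportion to how costly losing this tile would be.
                    score -= exposedSides * (tileValue / 20.0);
                    if (score > bestScore)
                    {
                        bestScore = score;
                        best = hex;
                    }
                }
                return best;
            }
        }

    A real AI would extend this with lookahead over the opponent's 1..20 tile distribution (an expectiminimax-style search), but even the one-ply version captures the perimeter and flip-swing heuristics from the list above.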

    Read the article
