Search Results

Search found 1512 results on 61 pages for 'deny prasetyo'.


  • Does Math.Sqrt cache its own results?

    - by Jonathan Beerhalter
    Someone has suggested to me that the built-in C# Math.Sqrt function in .NET 4.0 caches its results, so that if I call Math.Sqrt(50) over and over again, it's not actually computing a square root each time, but just pulling the answer from a cache. Can anyone verify or deny this claim? If it's true, then I have a bunch of needless caching going on in my code.
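
    One way to settle the claim empirically is a rough timing sketch along the lines below (the class name and iteration count are arbitrary): it compares repeated calls with a constant argument against calls with a varying argument, so a hidden per-argument cache would show up as a large gap between the two loops. This is only a micro-benchmark, not a definitive answer.

        using System;
        using System.Diagnostics;

        class SqrtCacheCheck
        {
            static void Main()
            {
                const int iterations = 50000000;
                double sink = 0;

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    sink += Math.Sqrt(50);           // same argument every time
                sw.Stop();
                Console.WriteLine("Constant argument: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < iterations; i++)
                    sink += Math.Sqrt(i);            // different argument every time
                sw.Stop();
                Console.WriteLine("Varying argument:  {0} ms", sw.ElapsedMilliseconds);

                Console.WriteLine(sink);             // keeps the JIT from eliminating the loops
            }
        }

    If the two loops come out in the same ballpark, there is no per-argument cache, and any caching you added on top of Math.Sqrt is only worth keeping if your own measurements show it helps.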

    Read the article

  • Singleton rule questions (do not allow to create copy and deserialization)

    - by Petr
    Hi, reading an article about singletons, I stopped at the point saying: "Do not allow a copy of the existing instance to be created". I realized that I do not know how I would do that! Could you tell me, please, how I could copy an existing instance of a class? And the second one: deserialization. How could it be dangerous? And for both - how do I deny creating copies or deserialization? Thanks
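
    For the C# case, one common arrangement is sketched below (the class names are illustrative only): a sealed class with a private constructor blocks outside copies, since there is no public constructor, no copy constructor, and no ICloneable, and the serialization hooks route deserialization back to the existing instance so a round-trip through a formatter cannot manufacture a second object.

        using System;
        using System.Runtime.Serialization;

        [Serializable]
        public sealed class Singleton : ISerializable
        {
            public static readonly Singleton Instance = new Singleton();

            // Private constructor + sealed class: no caller can "new" or subclass
            // their way to a second instance.
            private Singleton() { }

            // When the object is serialized, write a reference marker instead of state.
            void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.SetType(typeof(SingletonReference));
            }

            [Serializable]
            private sealed class SingletonReference : IObjectReference
            {
                // The formatter calls this during deserialization; returning Instance
                // means the freshly deserialized object is discarded in favour of
                // the one real singleton.
                public object GetRealObject(StreamingContext context)
                {
                    return Singleton.Instance;
                }
            }
        }

    The danger the article alludes to is exactly this: without such a hook, deserializing a previously saved singleton (or cloning it via something like MemberwiseClone from inside the class) quietly yields a second instance, and state that the rest of the program assumes lives in "the" singleton splits in two.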

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • multiple puppet masters

    - by Oli
    I would like to set up an additional puppet master but have the CA server handled by only 1 puppet master. I have set this up as per the documentation here: http://docs.puppetlabs.com/guides/scaling_multiple_masters.html I have configured my second puppet master as follows: [main] ... ca = false ca_server = puppet-master1.test.net I am using passenger so I am a bit confused how the virtual-host.conf file should look for my second puppet-master2.test.net. Here is mine (updated as per Shane Maddens answer): LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18 PassengerRuby /usr/bin/ruby Listen 8140 <VirtualHost *:8140> ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1 SSLEngine on SSLProtocol -ALL +SSLv3 +TLSv1 SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP SSLCertificateFile /var/lib/puppet/ssl/certs/puppet-master2.test.net.pem SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet-master2.test.net.pem #SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem #SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem # If Apache complains about invalid signatures on the CRL, you can try disabling # CRL checking by commenting the next line, but this is not recommended. #SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem SSLVerifyClient optional SSLVerifyDepth 1 # The `ExportCertData` option is needed for agent certificate expiration warnings SSLOptions +StdEnvVars +ExportCertData # This header needs to be set if using a loadbalancer or proxy RequestHeader unset X-Forwarded-For RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e DocumentRoot /etc/puppet/rack/public/ RackBaseURI / <Directory /etc/puppet/rack/> Options None AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> I have commented out the #SSLCertificateChainFile, #SSLCACertificateFile & #SSLCARevocationFile - this is not a CA server so not sure I need this. How would I get passenger to work with these? I would like to use ProxyPassMatch which I have configured as per the documentation. I don't want to specify a ca server in every puppet.conf file. I am getting this error when trying to get create a cert from a puppet client pointing to the second puppet master server (puppet-master2.test.net): [root@puppet-client2 ~]# puppet agent --test Error: Could not request certificate: Could not intern from s: nested asn1 error Exiting; failed to retrieve certificate and waitforcert is disabled On the puppet client I have this [main] server = puppet-master2.test.net What have I missed? -- update Here is a new virtual host file on my secondary puppet master. Is this correct? I have SSL turned off? 
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18 PassengerRuby /usr/bin/ruby # you probably want to tune these settings PassengerHighPerformance on PassengerMaxPoolSize 12 PassengerPoolIdleTime 1500 # PassengerMaxRequests 1000 PassengerStatThrottleRate 120 RackAutoDetect Off RailsAutoDetect Off Listen 8140 <VirtualHost *:8140> SSLEngine off ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1 # Obtain Authentication Information from Client Request Headers SetEnvIf X-Client-Verify "(.*)" SSL_CLIENT_VERIFY=$1 SetEnvIf X-SSL-Client-DN "(.*)" SSL_CLIENT_S_DN=$1 DocumentRoot /etc/puppet/rack/public/ RackBaseURI / <Directory /etc/puppet/rack/> Options None AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> Cheers, Oli

    Read the article

  • Magento installation problem on Nginx in Windows

    - by Nithin
    I am trying to install magento locally using nginx as the web server instead of Apache. I copied the magento folder to the html directory. When i try to call the magento folder, I get the 404 not found error. I am able to access other php files setup in the html folder and have PHP installed. Here is my config file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 8080; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; allow all; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } How do I fix this? This is what I found in the error.log file : 2011/09/06 12:22:35 [error] 5632#0: *1 "/cygdrive/c/nginx/html/magento/index.php/install/index.html" is not found (20: Not a directory), client: 127.0.0.1, server: localhost, request: "GET /magento/index.php/install/ HTTP/1.1", host: "localhost:8080"
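
    A hedged sketch, not a tested Magento config: the error log shows /magento/index.php/install/ being looked up as a file on disk, because the location ~ \.php$ block only matches URLs that end in .php. Replacing that block with a location that also accepts Magento-style trailing path info (same root and FastCGI address as in the config above) would look roughly like this:

        location ~ ^(.+\.php)(/.*)?$ {
            root            html;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_pass    127.0.0.1:9000;
            fastcgi_index   index.php;
            fastcgi_param   SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name;
            fastcgi_param   PATH_INFO       $fastcgi_path_info;
            include         fastcgi_params;
        }

    This only covers getting index.php/install/ to reach PHP at all; any additional rewrites Magento wants for its admin and downloader paths are a separate matter.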

    Read the article

  • nginx: problem configuring a proxy_pass

    - by Ofer Bar
    I'm converting a web app from apache to nginx. In apache's httpd.conf I have: ProxyPass /proxy/ http:// ProxyPassReverse /proxy/ http:// The idea is the client send this url: http://web-server-domain/proxy/login-server-addr/loginUrl.php?user=xxx&pass=yyy and the web server calls: http://login-server-addr/loginUrl.php?user=xxx&pass=yyy My nginx.conf is attached below and it is not working. At the moment it looks like it is calling the server, but returning an application error. This seems promising but any attempt to debug this failed! I can't trace any of the calls as nginx refuses to place them in the error file. Also, placing echo statement on the login server did not help either which is weird. The nginx documentation isn't very helpful about this. Any suggestion on how to configure a proxy_pass? Thanks! user nginx; worker_processes 1; #error_log /var/log/nginx/error.log; error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; # # The default server # server { rewrite_log on; listen 80; server_name _; #charset koi8-r; #access_log logs/host.access.log main; #root /var/www/live/html; index index.php index.html index.htm; location ~ ^/proxy/(.*$) { #location /proxy/ { # rewrite ^/proxy(.*) http://$1 break; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_buffering off; proxy_pass http://$1; #proxy_pass "http://173.231.134.36/messages_2.7.0/loginUser.php?userID=ofer.fly%40gmail.com&password=y4HTD93vrshMNcy2Qr5ka7ia0xcaa389f4885f59c9"; break; } location / { root /var/www/live/html; #if ( $uri ~ ^/proxy/(.*) ) { # proxy_pass http://$1; # break; #} #try_files $uri $uri/ /index.php; } error_page 404 /404.html; #location = /404.html { # root /usr/share/nginx/html; #} # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { #root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; #fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name; fastcgi_param SCRIPT_FILENAME /var/www/live/html$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # Load config files from the /etc/nginx/conf.d directory include /etc/nginx/conf.d/*.conf; }
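
    Two details commonly bite this kind of setup, and a hedged variant of the proxy location is sketched below: when proxy_pass contains a variable, nginx resolves the upstream hostname at request time and therefore needs a resolver directive (not needed if the captured host is a raw IP address), and the location regex is matched against the URI without the query string, so the ?user=xxx&pass=yyy part has to be re-attached explicitly. The DNS server address shown is only a placeholder.

        location ~ ^/proxy/([^/]+)(/.*)$ {
            resolver 8.8.8.8;          # any DNS server reachable from the web server
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_buffering off;
            # $1 = login-server-addr, $2 = /loginUrl.php;
            # $is_args$args re-attaches the original query string
            proxy_pass http://$1$2$is_args$args;
        }

    With the query string dropped, the login script receives no user or password at all, which would explain reaching the backend but getting an application error back.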

    Read the article

  • Mod_Rewrite Apache ProxyPass ?

    - by Anon
    I have two websites; OLDSITE and NEWSITE. The OLDSITE has 120 IP Address that it has with it, and the NEWSITE had 5. I want to be able to separate everything from OLDSITE and NEWSITE so they are not tied together but use them on the same linux computer. My current apache setup is this: Listen 80 NameVirtualHost * <VirtualHost *> ServerName oldsite.com ServerAdmin [email protected] DocumentRoot /var/www/ <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> RewriteEngine on RewriteCond %{HTTP_HOST} ^([^.]+)\.oldsite\.com$ RewriteCond /home/%1/ -d RewriteRule ^(.+) %{HTTP_HOST}$1 RewriteRule ^([^.]+)\.oldsite\.com/media/(.*) /home/$1/dir/media/$2 RewriteRule ^([^.]+)\.oldsite\.com/(.*) /home/$1/www/$2 </VirtualHost> <VirtualHost newsite.com> ServerName newsite.com ServerAdmin [email protected] DocumentRoot /var/newsite/ <Directory /var/newsite/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> RewriteEngine on RewriteCond %{HTTP_HOST} ^([^.]+)\.newsite\.com$ RewriteCond /home/%1/ -d RewriteRule ^(.+) %{HTTP_HOST}$1 RewriteRule ^([^.]+)\.newsite\.com/media/(.*) /home/$1/dir/media/$2 RewriteRule ^([^.]+)\.newsite\.com/(.*) /home/$1/www/$2 </VirtualHost> <VirtualHost *> ServerName panel.oldsite.com ProxyPass / http://panel.oldsite.com:10000/ ProxyPassReverse / http://panel.oldsite.com:10000/ <Proxy *> allow from all </Proxy> </VirtualHost> <VirtualHost *> ServerName panel.newsite.com ProxyPass / http://panel.newsite.com:10000/ ProxyPassReverse / http://panel.newsite.com:10000/ <Proxy *> allow from all </Proxy> </VirtualHost> I want to be able to access anything that is newsite.com and have it go to the /var/newsite unless their is a home directory...and then if its panel.newsite.com I want it to automatically do a proxypass to panel.newsite.com:10000... With this setup, it works perfect for oldsite.com.... both the proxy and the webpages... However, having the Virtualhost set to newsite.com renders the proxypass worthless. If I change the Virtualhost for the newsite.com to a wildcard, the proxypass will work but anything thats a subdomain of newsite.com won't work. so newsite.com will work, but www.newsite.com will not load correctly. I am assuming that when everything is wildcarded, then the ServerName somewhat acts like a RewriteCond and actually just applies the stuff to that URL. It uses the Virtualhost * (oldsite.com) and lets ANYTHING.oldsite.com work, but the second virtualhost * (newsite.com) only newsite.com will work... www.newsite.com will not. If I change the order of them, the opposite is true. So apparently it doesn't like me using 2 wildcards... I tried just making the Servername *.newsite.com .......but that would be too easy. I am not sure what I can do to do what I want? Perhaps I should make the ProxyPass included in the VirtualHosts and use something like: RewriteCond %{HTTP_HOST} ^panel\.newsite\.com$ [NC] RewriteRule ^(.*)$ http://panel.newsite.com:10000/ [P] ProxyPassReverse / http://panel.newsite.com:10000/ but that doesnt seem to want to login to webmin, it loads the login page but isnt working how the ProxyPass & ProxyPassReverse does.
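
    For what it's worth, a hedged skeleton of the usual way to keep two name-based families of hosts side by side is below: every vhost uses the same wildcard address, the panel host is declared before its wildcard-aliased sibling (Apache takes the first ServerName/ServerAlias match in file order), and ServerAlias carries the *.newsite.com wildcard instead of a second bare * vhost. The Rewrite rules and Directory blocks from the existing config are omitted here, and the oldsite.com pair would follow the same pattern.

        NameVirtualHost *

        <VirtualHost *>
            ServerName panel.newsite.com
            ProxyPass / http://panel.newsite.com:10000/
            ProxyPassReverse / http://panel.newsite.com:10000/
        </VirtualHost>

        <VirtualHost *>
            ServerName newsite.com
            ServerAlias *.newsite.com
            DocumentRoot /var/newsite/
        </VirtualHost>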

    Read the article

  • Cisco Pix how to add an additional block of static ip addresses for nat?

    - by Scott Szretter
    I have a pix 501 with 5 static ip addresses. My isp just gave me 5 more. I am trying to figure out how to add the new block and then how to nat/open at least one of them to an inside machine. So far, I named a new interface "intf2", ip range is 71.11.11.58 - 62 (gateway should 71.11.11.57) imgsvr is the machine I want to nat to one of the (71.11.11.59) new ip addresses. mail (.123) is an example of a machine that is mapped to the current existing 5 ip block (96.11.11.121 gate / 96.11.11.122-127) and working fine. Building configuration... : Saved : PIX Version 6.3(4) interface ethernet0 auto interface ethernet0 vlan1 logical interface ethernet1 auto nameif ethernet0 outside security0 nameif ethernet1 inside security100 nameif vlan1 intf2 security1 enable password xxxxxxxxx encrypted passwd xxxxxxxxx encrypted hostname xxxxxxxPIX domain-name xxxxxxxxxxx no fixup protocol dns fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 no fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names ...snip... name 192.168.10.13 mail name 192.168.10.29 imgsvr object-group network vpn1 network-object mail 255.255.255.255 access-list outside_access_in permit tcp any host 96.11.11.124 eq www access-list outside_access_in permit tcp any host 96.11.11.124 eq https access-list outside_access_in permit tcp any host 96.11.11.124 eq 3389 access-list outside_access_in permit tcp any host 96.11.11.123 eq https access-list outside_access_in permit tcp any host 96.11.11.123 eq www access-list outside_access_in permit tcp any host 96.11.11.125 eq smtp access-list outside_access_in permit tcp any host 96.11.11.125 eq https access-list outside_access_in permit tcp any host 96.11.11.125 eq 10443 access-list outside_access_in permit tcp any host 96.11.11.126 eq smtp access-list outside_access_in permit tcp any host 96.11.11.126 eq https access-list outside_access_in permit tcp any host 96.11.11.126 eq 10443 access-list outside_access_in deny ip any any access-list inside_nat0_outbound permit ip 192.168.0.0 255.255.0.0 IPPool2 255.255.255.0 access-list inside_nat0_outbound permit ip 172.17.0.0 255.255.0.0 IPPool2 255.255.255.0 access-list inside_nat0_outbound permit ip 172.16.0.0 255.255.0.0 IPPool2 255.255.255.0 ...snip... 
access-list inside_access_in deny tcp any any eq smtp access-list inside_access_in permit ip any any pager lines 24 logging on logging buffered notifications mtu outside 1500 mtu inside 1500 ip address outside 96.11.11.122 255.255.255.248 ip address inside 192.168.10.15 255.255.255.0 ip address intf2 71.11.11.58 255.255.255.248 ip audit info action alarm ip audit attack action alarm pdm location exchange 255.255.255.255 inside pdm location mail 255.255.255.255 inside pdm location IPPool2 255.255.255.0 outside pdm location 96.11.11.122 255.255.255.255 inside pdm location 192.168.10.1 255.255.255.255 inside pdm location 192.168.10.6 255.255.255.255 inside pdm location mail-gate1 255.255.255.255 inside pdm location mail-gate2 255.255.255.255 inside pdm location imgsvr 255.255.255.255 inside pdm location 71.11.11.59 255.255.255.255 intf2 pdm logging informational 100 pdm history enable arp timeout 14400 global (outside) 1 interface global (outside) 2 96.11.11.123 global (intf2) 3 interface global (intf2) 4 71.11.11.59 nat (inside) 0 access-list inside_nat0_outbound nat (inside) 2 mail 255.255.255.255 0 0 nat (inside) 1 0.0.0.0 0.0.0.0 0 0 static (inside,outside) tcp 96.11.11.123 smtp mail smtp netmask 255.255.255.255 0 0 static (inside,outside) tcp 96.11.11.123 https mail https netmask 255.255.255.255 0 0 static (inside,outside) tcp 96.11.11.123 www mail www netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.124 ts netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.126 mail-gate2 netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.125 mail-gate1 netmask 255.255.255.255 0 0 access-group outside_access_in in interface outside access-group inside_access_in in interface inside route outside 0.0.0.0 0.0.0.0 96.11.11.121 1 route intf2 0.0.0.0 0.0.0.0 71.11.11.57 2 timeout xlate 0:05:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout uauth 0:05:00 absolute floodguard enable ...snip... : end [OK] Thanks!

    Read the article

  • Nginx - Redirect any Subdomain to File without Rewriting

    - by Waffle
    Recently I have switched from Apache to Nginx to increase performance on a web server running Ubuntu 11.10. I have been having issues trying to figure out how certain things work in Nginx compared to Apache, but one issue has been stumping me and I have not been able to find the answer online. My problem is that I need to be able to redirect (not rewrite) any sub-domain to a file, but that file needs to be able to get the sub-domain part of the URL in order to do a database look-up of that sub-domain. So far, I have been able to get any sub-domain to rewrite to that file, but then it loses the text of the sub-domain I need. So, for example, I would like test.server.com to redirect to server.com/resolve.php, but still remain as test.server.com. If this is not possible, the thing that I would need at the very least would be something such as going to test.server.com would go to server.com/resolve.php?=test . One of these options must be possible in Nginx. My config as it stands right now looks something like this: server { listen 80; ## listen for ipv4; this line is default and implied listen [::]:80 default ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name www.server.com server.com; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ /index.html; } location /doc { root /usr/share; autoindex on; allow 127.0.0.1; } location /images { root /usr/share; autoindex off; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } server { listen 80 default; server_name *.server.com; rewrite ^ http://www.server.com/resolve.php; } As I said before, I am very new to Nginx, so I have a feeling the answer is pretty simple, but no examples online seem to deal with just redirects without rewrites or rewriting with the sub-domain section included. Any help on what to do would be most appreciated and if any one has a better idea to accomplish what I need, I am also open to ideas. Thank you very much.
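
    One possible shape for this, sketched under the assumption that an internal rewrite (rather than an external redirect) is acceptable: a regex server_name can capture the subdomain into a variable, so resolve.php receives it while the address bar still shows test.server.com. The query parameter name "sub" is illustrative, and exact names like www.server.com keep matching the server block above, since nginx checks exact names before regex names.

        server {
            listen 80;
            server_name ~^(?<sub>.+)\.server\.com$;

            root /usr/share/nginx/www;

            location / {
                # Internal rewrite: the browser still sees http://test.server.com/,
                # but PHP receives /resolve.php?sub=test
                rewrite ^ /resolve.php?sub=$sub last;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }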

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL Client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, setup with gitolite on https://my.server/git/) we need an exception since many git clients don't support SSL client certificates. I have succeeded in requiring client cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows: SSLCACertificateFile ssl-certs/client-ca-certs.crt <Location /> SSLVerifyClient require SSLVerifyDepth 2 </Location> # this works <Location /foo> SSLVerifyClient none </Location> # this does not <Location /git> SSLVerifyClient none </Location> I have also tried an alternative solution, with the same results: # require authentication everywhere except /git and /foo <LocationMatch "^/(?!git|foo)"> SSLVerifyClient require SSLVerifyDepth 2 </LocationMatch> In both these cases, a user without client certificate can perfectly access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works ok. The ScriptAlias problem Gitolite is setup using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias: # Gitolite ScriptAlias /git/ /path/to/gitolite-shell/ ScriptAlias /gitmob/ /path/to/gitolite-shell/ # My test ScriptAlias /test/ /path/to/test/script/ Note that /path/to/test/script is a file, not a directory, the same goes for /path/to/gitolite-shell/ My test script simply prints out the environment, super simple: #!/usr/bin/perl print "Content-type:text/plain\n\n"; print "TEST\n"; @keys = sort(keys %ENV); foreach (@keys) { print "$_ => $ENV{$_}\n"; } It seems that if I go to https://my.server/test/someLocation, that any SSLVerifyClient directives are being applied which are in Location blocks that match /test/someLocation or just /someLocation. If I have the following config: <LocationMatch "^/f"> SSLVerifyClient require SSLVerifyDepth 2 </LocationMatch> Then, the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo Note that this only seems to apply for SSL configuration. The following has no effect whatsoever on https://my.server/test/foo: <LocationMatch "^/f"> Order allow,deny Deny from all </LocationMatch> However, it does block access to https://my.server/foo. This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require a SSL client certificate. Since the /git/project URL also gets matched agains /project Location blocks, such a configuration seems impossible given my current findings. Question: Why is this happening, and how do I solve my problem? In the end, I want to require SSL Client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added). Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this more clear.

    Read the article

  • phpMyAdmin setup issues

    - by EquinoX
    I am trying to follow the tutorial here to setup the user and pass. It says there that "this section is only applicable if your MySQL server is running with --skip-show-database". First question is, how do I check if MySQl server is running with --skip-show-database? Is there any way I can access phpMyAdmin SQL query window without logging in? Otherwise I'd have to execute this SQL from command line. I am also getting this: Cannot load mcrypt extension. Please check your PHP configuration. I have added mcrypt.so to php.ini and doing the following command proves that I have it. [root@DT html]# rpm -qa | grep mcrypt mcrypt-2.6.8-1.el5 php-mcrypt-5.3.5-1.1.w5 libmcrypt-2.5.8-4.el5.centos [root@DT html]# php -v PHP 5.3.5 (cli) (built: Feb 19 2011 13:10:09) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies Now when I go to phpinfo() and search for mcrypt it can find it inside the Configure Command row ('--with-mcrypt=shared,/usr'). So, what to do next?. UPDATE: I didn't put extension=mcrypt.so in php.ini as it will complain the following: PHP Warning: Module 'mcrypt' already loaded in Unknown on line 0 Here's my nginx.conf: #user nobody; worker_processes 2; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; gzip on; server { listen 80; root /usr/share/nginx/html; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { #root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { #root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { #root /usr/local/nginx/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script _name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one location ~ /\.ht { deny all; } } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} }
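
    For the first question, the option is exposed as a read-only server variable, so one way to check it (from the mysql command-line client or any other client, no phpMyAdmin needed) is:

        SHOW VARIABLES LIKE 'skip_show_database';

    A value of OFF means the server was not started with --skip-show-database, and that section of the tutorial can be skipped.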

    Read the article

  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows: specific configuration for (currently two) domains: server { listen 443 ssl; server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { proxy_pass http://127.0.0.1:8180; proxy_set_header Host $http_host:8180; } } default configuration for unmatched ssl connections: server { listen 443 default ssl; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { return 403; } } http configuration: server { listen 80; rewrite ^ https://$host$request_uri? permanent; } The intention is clear: Redirect http traffic to https. Proxy each https:// call from phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan to apache2. This includes passing a host header, which is detected. Each unmatched ssl connection must return a 403 in nginx. Does not even reach apache2. In the apache2 side of the life, I have a default site, and a non-default site which will match studiotv.service.tebusco.lan: 000-default.conf file (available and enabled): <VirtualHost 127.0.0.1:8180> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. ServerName localhost ServerAdmin webmaster@localhost DocumentRoot /var/www/html <Directory /var/www/html> Order deny,allow Require all granted </Directory> </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet studiotv.conf file (available and enabled): <VirtualHost *:8180> ServerName studiotv.service.tebusco.lan ServerAdmin [email protected] DocumentRoot /var/www/studiotv <Directory /var/www/studiotv/> Options -Indexes +FollowSymLinks AllowOverride None Order deny,allow Allow from all Require all granted </Directory> # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn # No usamos ${APACHE_LOG_DIR} sino en su lugar /var/log/<host> ErrorLog /var/log/apache2/studiotv/error.log CustomLog /var/log/apache2/studiotv/access.log combined </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet However, when I hit the browser with http://studiotv.service.tebusco.lan, the default php page is shown instead. Question: What am I missing? (apache 2.4.7, nginx 1.6.0, ubuntu server 14.04).
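    One thing worth checking, since only part of the configuration is shown: the two Apache vhosts do not use the same address specification. For a connection arriving on 127.0.0.1:8180, <VirtualHost 127.0.0.1:8180> is a closer address match than <VirtualHost *:8180>, so Apache settles on the default vhost before ServerName is ever compared. Giving both vhosts the same address restores name-based selection, roughly:

        # sketch: both vhosts share the address, the Host header decides between them
        <VirtualHost 127.0.0.1:8180>
            ServerName localhost
            DocumentRoot /var/www/html
        </VirtualHost>

        <VirtualHost 127.0.0.1:8180>
            ServerName studiotv.service.tebusco.lan
            DocumentRoot /var/www/studiotv
        </VirtualHost>

    The mixed "Order deny,allow" and "Require all granted" directives are 2.2-isms on an Apache 2.4 install and are worth cleaning up as well, although they should not affect which vhost is chosen.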

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar, they taste so nice and sweet, but they'll rot your teeth if you eat them too much.   I can't deny that they aren't useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But, I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.   So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit.   So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them.   A good extension method should:     Apply to any possible instance of the type it extends.     Simplify logic and improve readability/maintainability.     Apply to the most specific type or interface applicable.     Be isolated in a namespace so that it does not pollute IntelliSense.     So let's look at a few examples in relation to these rules.   The first rule, to me, is the most important of all. Once again, it bears repeating, a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.    Take this nifty little int extension, I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:       public static class IntExtensinos     {         public static int Seconds(int num)         {             return num * 1000;         }           public static int Minutes(int num)         {             return num * 60000;         }     }     This is so you could do things like:       ...     Thread.Sleep(5.Seconds());     ...     proxy.Timeout = 1.Minutes();     ...     Awww, you say, that's cute! Well, that's the problem, it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeStamp.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:       total += numberOfTodaysOrders.Seconds();     which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds.    Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse.   Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.   
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.   Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts, I like to put in a static utility class and then have extension methods for syntactical candy. For instance, I think it imaginable that any object could be converted to XML:       namespace Shared.Core     {         // A collection of XML Utility classes         public static class XmlUtility         {             ...             // Serialize an object into an xml string             public static string ToXml(object input)             {                 var xs = new XmlSerializer(input.GetType());                   // use new UTF8Encoding here, not Encoding.UTF8. The later includes                 // the BOM which screws up subsequent reads, the former does not.                 using (var memoryStream = new MemoryStream())                 using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))                 {                     xs.Serialize(xmlTextWriter, input);                     return Encoding.UTF8.GetString(memoryStream.ToArray());                 }             }             ...         }     }   I also wanted to be able to call this from an object like:       value.ToXml();     But here's the problem, if i made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense, then in my extensions namespace, I add the syntactical candy:       namespace Shared.Core.Extensions     {         public static class XmlExtensions         {             public static string ToXml(this object value)             {                 return XmlUtility.ToXml(value);             }         }     }   So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle. The XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending object's interface for ToXml().

    Read the article

  • SQL SERVER – Windows File/Folder and Share Permissions – Notes from the Field #029

    - by Pinal Dave
    [Note from Pinal]: This is the 29th episode of the Notes from the Field series. Security is a task which we should give to the experts. If there is a small oversight or misstep, there is a good chance that the security of the organization is compromised. This is very true, but there are always devil’s advocates who believe everyone should know security. As a DBA and Administrator, I often see people not taking an interest in Windows security, hiding behind the excuse of not being an expert in Windows Server. We all often miss the important mission statement for the success of any organization – Teamwork. In this blog post Brian tells the story in very interesting, lucid language. Read On! In this episode of the Notes from the Field series database expert Brian Kelley explains a very crucial issue DBAs and Developers face on their production servers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words. When I talk security among database professionals, I find that most have at least a working knowledge of how to apply security within a database. When I talk with DBAs in particular, I find that most have at least a working knowledge of security at the server level if we’re speaking of SQL Server. One area I continually see as weak is Windows file/folder (NTFS) and share permissions. The typical response is, “I’m a database developer and the Windows system administrator is responsible for that.” That may very well be true – the system administrator may have the primary responsibility and accountability for file/folder and share security for the server. However, if you’re involved in the typical activities surrounding databases and moving data around, you should know these permissions, too. Otherwise, you could be setting yourself up where someone is able to get to data he or she shouldn’t, or you could be opening the door where human error puts bad data in your production system. File/Folder Permission Basics: I wrote about file/folder permissions a few years ago to give the basic permissions that are most often seen. Here’s what you must know as a minimum at the file/folder level: Read - Allows you to read the contents of the file or folder. Having read permissions allows you to copy the file or folder. Write – Again, as the name implies, it allows you to write to the file or folder. This doesn’t include the ability to delete; however, nothing stops a person with this access from writing an empty file. Delete - Allows the file/folder to be deleted. If you overwrite files, you may need this permission. Modify - Allows read, write, and delete. Full Control - Same as modify + the ability to assign permissions. File/Folder permissions aggregate, unless there is a DENY (where it trumps, just like within SQL Server), meaning if a person is in one group that gives Read and another group that gives Write, that person has both Read and Write permissions. As you might expect me to say, always apply the Principle of Least Privilege. This likely means that any additional permission you might add does not need Full Control. Share Permission Basics: At the share level, here are the permissions. Read - Allows you to read the contents on the share. Change - Allows you to read, write, and delete contents on the share. Full control - Change + the ability to modify permissions. Like with file/folder permissions, these permissions aggregate, and DENY trumps. So What Access Does a Person / Process Have? 
Figuring out what someone or some process has depends on how the location is being accessed: Access comes through the share (\\ServerName\Share) – a combination of permissions is considered. Access is through a drive letter (C:\, E:\, S:\, etc.) – only the file/folder permissions are considered. The only complicated one here is access through the share. Here’s what Windows does: Figures out what the aggregated permissions are at the file/folder level. Figures out what the aggregated permissions are at the share level. Takes the most restrictive of the two sets of permissions. You can test this by granting Full Control over a folder (this is likely already in place for the Users local group) and then setting up a share. Give only Read access through the share, and that includes to Administrators (if you’re creating a share, likely you have membership in the Administrators group). Try to read a file through the share. Now try to modify it. The most restrictive permission is the Share level permissions. It’s set to only allow Read. Therefore, if you come through the share, it’s the most restrictive. Does This Knowledge Really Help Me? In my experience, it does. I’ve seen cases where sensitive files were accessible by every authenticated user through a share. Auditors, as you might expect, have a real problem with that. I’ve also seen cases where files to be imported as part of the nightly processing were overwritten by files intended from development. And I’ve seen cases where a process can’t get to the files it needs for a process because someone changed the permissions. If you know file/folder and share permissions, you can spot and correct these types of security flaws. Given that there are a lot of database professionals that don’t understand these permissions, if you know it, you set yourself apart. And if you’re able to help on critical processes, you begin to set yourself up as a linchpin (link to .pdf) for your organization. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
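    If you would rather see both layers from a command prompt than click through the Security tab, two built-in commands cover them; the folder and share names here are just placeholders:

        rem NTFS permissions on the folder
        icacls D:\SQLImports
        rem share-level permissions (who has Read, Change or Full Control)
        net share ImportShare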

    Read the article

  • Mixing SSL and non-SSL content in an Apache2 virtual host

    - by gravyface
    I have a (hopefully) common scenario for one of my sites that I just can't seem to figure out how to deploy correctly. I have the following site and directories for example.com: These need to require SSL: /var/www/example.com/admin /var/www/example.com/order These need to be non-SSL: /var/www/example.com/maps These need to support both: /var/www/example.com/css /var/www/example.com/js /var/www/example.com/img I have two virtual host declarations for the one site in my /sites-available/example.com file; the top one is *:443 the second one is *:80. Since I have two sites, and if a request comes in on 443, the top virtualhost is used, same with the bottom if it's a port 80 request. However, I can't seem to enforce my SSL requirements using SSLRequireSSL because I'm assuming a port 80 request to /admin or /order is not even hitting the *:443 vhost. Should I just Deny All to /order and /admin within the *:80 virtual host so that if you try to request it on 80, you'll get a 403 Forbidden?
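    Deny-all in the *:80 vhost would work, but it is often friendlier to redirect those two paths to HTTPS there and keep SSLRequireSSL as a safety net in the *:443 vhost. A sketch reusing the paths above:

        # inside <VirtualHost *:80>
        Redirect permanent /admin https://example.com/admin
        Redirect permanent /order https://example.com/order

        # inside <VirtualHost *:443>
        <Directory /var/www/example.com/admin>
            SSLRequireSSL
        </Directory>
        <Directory /var/www/example.com/order>
            SSLRequireSSL
        </Directory>

    /maps stays reachable over plain HTTP, and the css, js and img directories are simply left alone in both vhosts.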

    Read the article

  • Disable .htaccess or disable some rules from .htaccess on specific URL

    - by petRUShka
    I have Kerberos-based authentication and I want to disable it on only the root url: mysite.com/. And I want it to work fine on any other page like mysite.com/page1. I have such things in my .htaccess: AuthType Kerberos AuthName "Domain login" KrbAuthRealms DOMAIN.COM KrbMethodK5Passwd on Krb5KeyTab /etc/httpd/httpd.keytab require valid-user I want to turn it off only for the root URL. As a workaround it should be possible to override what .htaccess does from the virtual host config. Unfortunately I don't know how to do it. Part of my vhost.conf: <Directory /home/user/www/current/public/> Options -MultiViews +FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> It would be great if you could advise me!
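    A hedged sketch of what that vhost-level override could look like on Apache 2.2: a LocationMatch limited to exactly the root URL, with Satisfy Any so the Allow rule alone is enough to get in and the Kerberos requirement coming from .htaccess is bypassed just there. Whether the sections merge in the desired order is worth verifying on a test vhost first.

        <LocationMatch "^/$">
            Order allow,deny
            Allow from all
            Satisfy Any
        </LocationMatch>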

    Read the article

  • Blocking yandex.ru bot

    - by Ross
    I want to block all requests from the yandex.ru search bot. It is very traffic intensive (2GB/day). I first blocked one class C IP range, but it seems this bot appears from different IP ranges. For example: spider31.yandex.ru - 77.88.26.27 spider79.yandex.ru - 95.108.155.251 etc. I can put a deny in robots.txt but I am not sure if it respects this. I am thinking of blocking a list of IP ranges. Can somebody suggest a general solution?
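    Two lighter-weight options before maintaining IP lists, both sketches: Yandex states that its crawler honours robots.txt, so a per-bot disallow is worth trying first, and if the site runs Apache it can also refuse by User-Agent instead of by address.

        # robots.txt
        User-agent: Yandex
        Disallow: /

        # Apache alternative, matching on the User-Agent header
        SetEnvIfNoCase User-Agent "Yandex" bad_bot
        Order allow,deny
        Allow from all
        Deny from env=bad_bot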

    Read the article

  • XenServer 5.6.1-fp1. Can't get network working

    - by bakytn
    I have a PC where XenServer 5.6.1 fp-1 has been successfully installed. I've manually set the network settings: 192.168.1.50 255.255.255.0 192.168.1.1, but they are set on the xenbr0 iface, while eth0 is empty. When I click on "Configure Management Interface" it shows that eth0 is connected. But when I ping the default gateway (which should be 100% accessible) it fails. I switched to another shell (Alt+F3) and logged in as root. I also failed to ping, with both: ping -I eth0 192.168.1.1 and ping -I xenbr0 192.168.1.1 Be assured that: the cable works, the Ethernet adapter is 100% functional (the previous OS was Ubuntu and it was working), and there is no firewall rule to deny anything (everything is allowed).
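    It may be worth bypassing the console and checking what the management PIF actually has from the CLI; a sketch with the uuid left as a placeholder:

        xe pif-list params=uuid,device,IP,netmask,gateway,management
        xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=192.168.1.50 netmask=255.255.255.0 gateway=192.168.1.1

    If the PIF already looks right, a speed/duplex mismatch on eth0 is another thing ethtool can rule out.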

    Read the article

  • Issues in getting Synergy setup

    - by chris
    For some reason I cannot get things working when the Linux box is the server and the macbook pro is the client. However, I can get things working just fine in the inverse; unfortunately, since the macbook is not the primary machine and not powered on all the time, the latter setup won't work. Here is the error that I am getting: started client connecting to '10.0.1.4': 10.0.1.4:24800 The only firewall that I have is the one on the router, so since things work with the macbook as the server I am pretty sure that is not where the problem is. Here is the .synergy.conf file section: screens Chris-MacBook-Pro: # I have tried this with the .local as well chris-archlinux.local: end section: links Chris-MacBook-Pro: right = chris-archlinux.local chris-archlinux.local: left = Chris-MacBook-Pro.local end ** Update: I should also add that I can ping the linux machine from the mac. To try to get things working, I have also prevented the hosts.deny/.allow files from blocking anything. Any ideas as to where the problem could be?
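    One detail that stands out in the quoted config (it may just be a transcription slip): the names used in the links section have to match the declared screen names exactly, and the last link points to Chris-MacBook-Pro.local while the screens section declares Chris-MacBook-Pro. A consistent sketch:

        section: screens
            Chris-MacBook-Pro:
            chris-archlinux.local:
        end

        section: links
            Chris-MacBook-Pro:
                right = chris-archlinux.local
            chris-archlinux.local:
                left = Chris-MacBook-Pro
        end

    Running synergys in the foreground (the -f flag) on the Linux box while the client retries usually shows whether the server recognises the incoming screen name.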

    Read the article

  • Configure mod_wsgi WSGIScriptAlias with mod_rewrite

    - by Lazik
    I want to redirect ex.com to www.ex.com but I still want www.ex.com/ to point to my app.wsgi without it showing up in the url. When I use the conf below and I go to ex.com, I get a 404 error saying can't find www.ex.com/app.wsgi/ If I change the WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi to WSGIScriptAlias /app.wsgi /var/www/vhosts/ex/app.wsgi then all my urls look like www.ex.com/app.wsgi/blabla/... Is it possible to use some kind of rule to redirect ex.com to www.ex.com and still keep / as the app.wsgi root? My conf file: <VirtualHost *:80> ServerName www.ex.com ServerAlias ex.com *.ex.com RewriteEngine On RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5 WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi <Directory /var/www/vhosts/ex> WSGIProcessGroup ex WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> </VirtualHost>
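    One way to keep the rewrite and the WSGI alias from interacting at all, sketched under the assumption that nothing else needs to hang off the bare domain: give ex.com a vhost of its own that only redirects, and leave the application vhost with just the WSGIScriptAlias on /. ex.com is then removed from the ServerAlias line of the main vhost.

        # redirect-only vhost for the bare domain
        <VirtualHost *:80>
            ServerName ex.com
            Redirect permanent / http://www.ex.com/
        </VirtualHost>

        # application vhost, no RewriteRule needed
        <VirtualHost *:80>
            ServerName www.ex.com
            WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
            WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi
            # <Directory /var/www/vhosts/ex> block unchanged from the original
        </VirtualHost>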

    Read the article

  • connecting to redis from wsgi application

    - by W_P
    I am running redis and httpd with mod_wsgi on the same server. I have a wsgi application using a virtualenv and the following virtualhost conf: WSGIPythonPath /var/www/wsgi <VirtualHost *:80> ServerName test.example.com DocumentRoot /var/www/html WSGIScriptAlias / /var/www/wsgi/myapp/app.wsgi CustomLog logs/test.example.com-access-log common ErrorLog logs/test.example.com-error-log <Directory /var/www/wsgi/myapp> Order allow,deny Allow from all WSGIScriptReloading On </Directory> </VirtualHost> I have a sites.pth in my /var/www/wsgi directory, adding myapp to the PYTHONPATH, and this is my /var/www/wsgi/myapp/app.wsgi: activate_this = '/var/www/envs/myapp/bin/activate_this.py' execfile( activate_this, dict(__file__=activate_this) ) from myapp.application import app as application Sorry, I am trying to edit the question to have the rest of my info but it keeps timing out before the edits go through
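    Assuming the missing details are about how the application itself talks to redis, a minimal hypothetical myapp/application.py matching the import in app.wsgi could look like this; it presumes the redis-py package is installed inside the virtualenv and redis is listening on the default 127.0.0.1:6379:

        import redis

        # one connection pool per daemon process, shared by all requests
        pool = redis.ConnectionPool(host='127.0.0.1', port=6379, db=0)

        def app(environ, start_response):
            r = redis.Redis(connection_pool=pool)
            hits = r.incr('hits')  # trivial round-trip to prove connectivity
            body = ('hits: %d\n' % hits).encode('utf-8')
            start_response('200 OK', [('Content-Type', 'text/plain'),
                                      ('Content-Length', str(len(body)))])
            return [body]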

    Read the article

  • Running a reverse proxy in front of Splunk 4.x

    - by sgerrand
    So, I have previously installed Splunk 3.x behind a reverse proxy and downloaded the latest version (4.0.6 at time of typing) expecting it to be as easy to use as before. Sadly this was not the case. There appears to be some elements which are not being translated correctly through the reverse proxy, causing Splunk to fail. I have used the following configuration in Apache2 to no avail: ServerName monitoringbox.com DocumentRoot /path/to/nowhere ProxyRequests off ProxyPass /splunk http://127.0.0.1:8000/splunk ProxyPassReverse /splunk http://127.0.0.1:8000/splunk Order allow,deny Allow from all Has anyone else had more luck than me in setting up Splunk 4.x behind a reverse proxy?
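    If memory serves, the piece that changed between 3.x and 4.x is that the web UI now has to be told about the URL prefix itself instead of relying purely on the proxy; the setting is worth double-checking against the web.conf documentation for the exact release, but it looks roughly like this:

        # $SPLUNK_HOME/etc/system/local/web.conf
        [settings]
        root_endpoint = /splunk

    With that in place the existing ProxyPass /splunk lines can stay as they are, since Splunk then generates its own links under /splunk and nothing in the proxied HTML needs rewriting.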

    Read the article

  • iptables syn flood countermeasure

    - by Penegal
    I'm trying to adjust my iptables firewall to increase the security of my server, and I found something a bit problematic here : I have to set INPUT policy to ACCEPT and, in addition, to have a rule saying iptables -I INPUT -i eth0 -j ACCEPT. Here comes my script (launched manually for tests) : #!/bin/sh IPT=/sbin/iptables echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X echo "Defining logging policy for dropped packets" $IPT -N LOGDROP $IPT -A LOGDROP -j LOG -m limit --limit 5/min --log-level debug --log-prefix "iptables rejected: " $IPT -A LOGDROP -j DROP echo "Setting firewall policy" $IPT -P INPUT DROP # Deny all incoming connections $IPT -P OUTPUT ACCEPT # Allow all outgoing connections $IPT -P FORWARD DROP # Deny all forwaring echo "Allowing connections from/to lo and incoming connections from eth0" $IPT -I INPUT -i lo -j ACCEPT $IPT -I OUTPUT -o lo -j ACCEPT #$IPT -I INPUT -i eth0 -j ACCEPT echo "Setting SYN flood countermeasures" $IPT -A INPUT -p tcp -i eth0 --syn -m limit --limit 100/second --limit-burst 200 -j LOGDROP echo "Allowing outgoing traffic corresponding to already initiated connections" $IPT -A OUTPUT -p ALL -m state --state ESTABLISHED,RELATED -j ACCEPT echo "Allowing incoming SSH" $IPT -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH -j ACCEPT echo "Setting SSH bruteforce attacks countermeasures (deny more than 10 connections every 10 minutes)" $IPT -A INPUT -p tcp --dport 22 -m recent --update --seconds 600 --hitcount 10 --rttl --name SSH -j LOGDROP echo "Allowing incoming traffic for HTTP, SMTP, NTP, PgSQL and SolR" $IPT -A INPUT -p tcp --dport 25 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 80 -i eth0 -j ACCEPT $IPT -A INPUT -p udp --dport 123 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p tcp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT echo "Allowing outgoing traffic for ICMP, SSH, whois, SMTP, DNS, HTTP, PgSQL and SolR" $IPT -A OUTPUT -p tcp --dport 22 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 25 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 43 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 80 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 80 -o eth0 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p icmp -j ACCEPT echo "Allowing outgoing FTP backup" $IPT -A OUTPUT -p tcp --dport 20:21 -o eth0 -d 91.121.190.78 -j ACCEPT echo "Dropping and logging everything else" $IPT -A INPUT -s 0/0 -j LOGDROP $IPT -A OUTPUT -j LOGDROP $IPT -A FORWARD -j LOGDROP echo "Firewall loaded." 
echo "Maintaining new rules for 3 minutes for tests" sleep 180 $IPT -nvL echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X $IPT -P INPUT ACCEPT $IPT -P OUTPUT ACCEPT $IPT -P FORWARD ACCEPT When I launch this script (I only have a SSH access), the shell displays every message up to Maintaining new rules for 3 minutes for tests, the server is unresponsive during the 3 minutes delay and then resume normal operations. The only solution I found until now was to set $IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT, but this configuration does not protect me of any attack, which is a great shame for a firewall. I suspect that the error comes from my script and not from iptables, but I don't understand what's wrong with my script. Could some do-gooder explain me my error, please? EDIT: here comes the result of iptables -nvL with the "accept all input" ($IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT) solution : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1 52 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:8983 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 2 728 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.78 tcp dpts:20:21 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (5 references) pkts bytes target prot opt in out source destination 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 EDIT #2 : I modified my script (policy ACCEPT, defining authorized incoming packets then logging and dropping everything else) to write iptables -nvL results to a file and to allow only 
10 ICMP requests per second, logging and dropping everything else. The result proved unexpected : while the server was unavailable to SSH connections, even already established, I ping-flooded it from another server, and the ping rate was restricted to 10 requests per second. During this test, I also tried to open new SSH connections, which remained unanswered until the script flushed rules. Here comes the iptables stats written after these tests : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 6 360 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "w00tw00t.at.ISC.SANS." ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: anoticiapb.com.br" ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: www.anoticiapb.com.br" ALGO name bm TO 65535 105 8820 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 10/sec burst 5 830 69720 LOGDROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:8983 16 1684 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 owner UID match 33 0 0 LOGDROP udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 owner UID match 33 116 11136 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.18 tcp dpts:20:21 7 1249 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (11 references) pkts bytes target prot opt in out source destination 35 3156 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 1/sec burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 859 73013 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Here 
comes the log content added during this test : Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55666 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55667 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55668 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55669 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:52 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55670 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:54 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55671 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:58 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55672 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=6 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=7 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=8 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=9 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=59 Mar 28 09:53:00 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=152 Mar 28 09:53:01 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=246 Mar 28 09:53:02 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 
TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=339 Mar 28 09:53:03 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=432 Mar 28 09:53:04 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=524 Mar 28 09:53:05 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=617 Mar 28 09:53:06 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=711 Mar 28 09:53:07 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=804 Mar 28 09:53:08 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=897 Mar 28 09:53:16 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61402 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:19 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61403 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:21 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55674 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:53:25 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61404 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55675 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55676 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55677 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:38 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55678 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= 
MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55679 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5055 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:41 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55680 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:42 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5056 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:45 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55681 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:48 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5057 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 If I correctly interpreted these results, they say that ICMP rules were correctly interpreted by iptables, but SSH rules were not. This does not make any sense... Does somebody understand where my error comes from? EDIT #3 : After some more tests, I found out that commenting the SYN flood countermeasure removes the problem. I continue researches in this way but, meanwhile, if somebody sees my anti SYN flood rule error...
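    Regarding that last edit: the limit match works the other way round from what the rule assumes. "-m limit --limit 100/second --limit-burst 200" matches the packets that are still within the allowance, so jumping to LOGDROP from that rule drops the first burst and then up to 100 SYNs per second, which includes the opening SYN of every legitimate SSH or HTTP connection, while traffic above that rate falls through to the rules below. A common pattern (a sketch, to be adapted to the rest of the script) is to send SYNs through a dedicated chain where the within-limit packets RETURN and only the excess is dropped:

        $IPT -N SYN_FLOOD
        $IPT -A INPUT -p tcp -i eth0 --syn -j SYN_FLOOD
        # within the allowance: return to INPUT and be evaluated by the normal rules
        $IPT -A SYN_FLOOD -m limit --limit 100/second --limit-burst 200 -j RETURN
        # everything above the allowance is logged and dropped
        $IPT -A SYN_FLOOD -j LOGDROP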

    Read the article

  • proxy pass for activeMQ

    - by user1172482
    I have an apache server that I'm trying to use to proxy access to my activeMQ admin page. I am able to load the initial landing page properly, but I can't seem to load any of the sub-pages (Queues, Connections, etc.). My proxypass rules on the apache server are the following: ProxyPass /foo http://10.5.124.108:8161/admin ProxyPassReverse /foo http://10.5.124.108:8161/admin The activeMQ installation included an activemq-httpd.conf file in /etc/httpd/conf.d/. Proxy connections there are enabled: ProxyRequests On ProxyVia On <Proxy *> Allow from all Order allow,deny </Proxy> ProxyPass /admin http://localhost:8161/admin ProxyPassReverse /admin http://localhost:8161/admin ProxyPass /message http://localhost:8161/admin/send ProxyPassReverse /message http://localhost:8161/admin/send From what I've read the proxypass rules should be recursive (the rule for /foo should also work for /foo/bar). Is there something else that I'm missing here that's preventing me from accessing pages beyond the initial admin landing page?
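    ProxyPass prefixes are applied recursively, but ProxyPassReverse only rewrites HTTP headers such as Location, not the HTML the console returns. The admin pages link to absolute paths like /admin/queues.jsp, so after the landing page loads the browser starts requesting URLs outside /foo and they never hit the mapping. The simplest arrangement is usually to publish it under the same path it uses internally, for example:

        ProxyPass        /admin http://10.5.124.108:8161/admin
        ProxyPassReverse /admin http://10.5.124.108:8161/admin

    Keeping the /foo prefix would mean rewriting the response bodies as well (mod_proxy_html or similar), which is rarely worth the effort for an internal admin UI.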

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >