Search Results

Search found 4073 results on 163 pages for 'hosts deny'.


  • how can I change the domain and name server of an image?

    - by jpganz18
    I wonder if someone has done the same: I created an image of a server, then moved it to another one, and that other one will have a different domain. Now I want to create and send mail with the new domain, but the old domain is still the one that signs the messages... I already changed the dovecot.conf, Postfix, Apache and hosts files, but I can't find where that domain is coming from. Any idea where to look? Thanks!
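
    As a hedged illustration of where to look (the domain string below is a placeholder, not taken from the question), a recursive grep plus Postfix's own view of its identity usually narrows this down:

        # Sketch: list every file under /etc that still mentions the old domain
        # (replace old-domain.example with the actual name).
        grep -rl 'old-domain.example' /etc 2>/dev/null

        # What Postfix will actually stamp on outgoing mail; myhostname/myorigin
        # often fall back to the system FQDN, so check that as well.
        postconf myhostname mydomain myorigin
        hostname -f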

    Read the article

  • Message from Nagios Server

    - by user12213
    The Nagios server is monitoring my server, which hosts Windows SharePoint. I am getting the following two alerts in my inbox from the Nagios server: 1. Service: C:\ Drive Space, State: CRITICAL, Additional Info: CRITICAL - Socket timeout after 10 seconds; 2. Service: CPU Load, State: CRITICAL, Additional Info: CRITICAL - Socket timeout after 10 seconds. What should I infer from these?

    Read the article

  • CentOS 5: Can't access webserver via http from host OS?

    - by adred
    My server is installed in a guest OS on VMware. It really bugs me because I can't access it from the host OS's browser, even though there is no discrepancy between the /etc/hosts, /etc/sysconfig/network and httpd.conf files. Issuing the ifconfig command also returns the same IP. I have also enabled networking in the VMware settings, and I can ping the guest OS's IP from the host. Any insights, please?
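
    One thing the question does not mention is the guest's own firewall: a stock CentOS 5 install rejects inbound port 80 by default even when httpd is running and the network config is correct. A hedged sketch of the usual checks (the RH-Firewall-1-INPUT chain name is the CentOS 5 default; adjust if yours differs):

        # Is httpd listening on all interfaces rather than only 127.0.0.1?
        netstat -tlnp | grep :80

        # Does the firewall currently accept port 80?
        iptables -L -n | grep ':80'

        # If not, open it and persist the rule (sketch only).
        iptables -I RH-Firewall-1-INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save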

    Read the article

  • Does Math.Sqrt cache its own results?

    - by Jonathan Beerhalter
    Someone has suggested to me that the built-in C# Math.Sqrt function in .NET 4.0 caches its results, so that if I call Math.Sqrt(50) over and over again, it's not actually doing a sqrt each time, but just pulling the answer from a cache. Can anyone verify or deny this claim? If it's true, then I have a bunch of needless caching going on in my code.
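
    One way to check the claim empirically is a small timing sketch along these lines (a rough micro-benchmark, not a rigorous one; JIT optimizations can skew the constant-argument loop, so treat the numbers as indicative only):

        using System;
        using System.Diagnostics;

        class SqrtTiming
        {
            static void Main()
            {
                const int iterations = 50000000;
                double sink = 0;

                // Same argument every time: if results were cached, this loop
                // should be dramatically cheaper than the one below.
                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    sink += Math.Sqrt(50);
                sw.Stop();
                Console.WriteLine("Same argument:    {0} ms", sw.ElapsedMilliseconds);

                // A different argument every time: no cache could help here.
                sw.Restart();
                for (int i = 0; i < iterations; i++)
                    sink += Math.Sqrt(i);
                sw.Stop();
                Console.WriteLine("Varying argument: {0} ms", sw.ElapsedMilliseconds);

                Console.WriteLine(sink); // keep the results live so nothing is optimized away
            }
        }

    If the two loops take comparable time, the cache theory is effectively ruled out; Math.Sqrt is normally compiled down to the processor's square-root instruction, which is also why an argument-keyed cache would be unlikely to pay off.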

    Read the article

  • How can I manage hostnames across multiple servers? [closed]

    - by Dan
    In a lot of documentation I've seen recently, servers are referred to by internal hostnames, such as production-1, production-2, db-1. I realize I can associate these names in the hosts file on each server, but this would obviously mean maintaining a hosts file on multiple servers, which for anything more than 2 or 3 would get unwieldy. Is there some simple way people manage common hostnames across multiple servers and keep them in sync, without having to edit multiple files every time?
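
    The usual answers are internal DNS or a configuration-management tool, but as a hedged illustration of the simplest file-based approach, a single shared fragment kept in one place can be pushed out in a loop. The host names, file names and the /etc/hosts.local convention below are invented for the sketch:

        #!/bin/sh
        # hosts.shared holds the common name/IP pairs, e.g.:
        #   10.0.0.11  production-1
        #   10.0.0.12  production-2
        #   10.0.0.21  db-1
        for h in production-1 production-2 db-1; do
            scp hosts.shared "$h":/tmp/hosts.shared
            # Rebuild /etc/hosts from a per-machine header plus the shared block,
            # so repeated runs do not keep appending duplicate entries.
            ssh "$h" 'sudo sh -c "cat /etc/hosts.local /tmp/hosts.shared > /etc/hosts"'
        done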

    Read the article

  • What is the best tool to aggregate traffic stats from multiple nginx servers?

    - by gekkz
    The setup: 2 or more nginx machines; each machine has the same virtual hosts; traffic is load-balanced across the machines via DNS. I need to figure out which tools are best for getting some traffic stats; I am mostly interested in the number of hits and the total traffic in gigabytes. Obviously, the log information will come from nginx, formatted like this: log_format main '$remote_addr $host $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" "$http_user_agent" "$gzip_ratio"';
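
    Short of a full log-analysis stack, a hedged sketch of the bare-minimum aggregation is to pull the access logs from every machine and sum the $body_bytes_sent column; with the log_format above, that byte count is the 10th whitespace-separated field (the log file names below are placeholders):

        # Combine the access logs from all machines, then count requests and total bytes.
        cat access.log.web1 access.log.web2 | \
        awk '{ hits++; bytes += $10 }
             END { printf "hits: %d   traffic: %.2f GB\n", hits, bytes / (1024*1024*1024) }'

    Keying the sums on $host (field 2) instead would give per-virtual-host totals.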

    Read the article

  • Is it OK to have a different hostname to the FQDN?

    - by Cameron Ball
    I have a server with the FQDN git.mydomain.com (this is in DNS) but I don't really want the machine to have git as its hostname. Right now I have the hostname in /etc/hostname set as (for example): mycustomhostname And in /etc/hosts I have: 1.2.3.4 git.mydomain.com mycustomhostname (where 1.2.3.4 is my server's IP). I've read that the first component of the FQDN should always be the unqualified hostname, so is what I'm doing bad? If so, what is the correct way to do what I want?
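
    For what it's worth, a quick way to sanity-check that layout is to ask the resolver directly; assuming the /etc/hosts line shown above, the canonical name (the first name on the matching line) is what the FQDN lookup should report:

        hostname                        # expected: mycustomhostname
        hostname -f                     # expected: git.mydomain.com (the canonical name on the matching line)
        getent hosts mycustomhostname   # expected: 1.2.3.4  git.mydomain.com  mycustomhostname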

    Read the article

  • Singleton rule questions (do not allow to create copy and deserialization)

    - by Petr
    Hi, reading an article about the singleton pattern, I stopped at the point saying: "Do not allow creating a copy of the existing instance". I realized that I do not know how I would do that! Could you tell me, please, how one could copy an existing instance of a class? And the second one: deserialization. How could it be dangerous? And for both - how do I prevent creating copies or deserialization? Thanks
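
    The article's language isn't stated, so as one concrete illustration only, here is a minimal C# sketch (the Settings class is hypothetical) of both rules: the private constructor and the sealed modifier block ordinary copying, and IObjectReference.GetRealObject keeps binary deserialization from quietly manufacturing a second instance - formatters bypass constructors entirely, which is exactly the danger being alluded to:

        using System;
        using System.Runtime.Serialization;

        [Serializable]
        public sealed class Settings : IObjectReference
        {
            // The one and only instance. The private constructor stops callers
            // from creating copies with "new"; "sealed" stops subclasses from
            // opening another way in.
            public static readonly Settings Instance = new Settings();

            private Settings() { }

            // Deserialization does NOT run the constructor, so without this hook
            // a formatter would happily materialize a second object from a byte
            // stream. Returning Instance collapses every deserialized reference
            // back onto the singleton.
            public object GetRealObject(StreamingContext context)
            {
                return Instance;
            }
        }

    In Java the corresponding tricks are a clone() override that throws and a readResolve() method that returns the existing instance.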

    Read the article

  • determining server config

    - by tristan625
    I have recently set up a Linux server for my college which also hosts the college website and other downloadables. This server is linked to the outside world using a leased line connection of 1 Mbps capacity. My problem is that users complain that downloads from the server are very slow. How can I improve this? What things should I be looking at? My current server config: Dell Vostro 260s with 4 GB of RAM and an i3 processor.
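
    For a rough sense of the ceiling involved: 1 Mbps is about 1,000,000 bits/s ÷ 8 ≈ 125 KB/s of payload, shared across all simultaneous users, so even a perfectly tuned server cannot push downloads faster than that over this link; the listed hardware is unlikely to be the bottleneck.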

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • multiple puppet masters

    - by Oli
    I would like to set up an additional puppet master but have the CA server handled by only 1 puppet master. I have set this up as per the documentation here: http://docs.puppetlabs.com/guides/scaling_multiple_masters.html I have configured my second puppet master as follows: [main] ... ca = false ca_server = puppet-master1.test.net I am using passenger so I am a bit confused how the virtual-host.conf file should look for my second puppet-master2.test.net. Here is mine (updated as per Shane Maddens answer): LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18 PassengerRuby /usr/bin/ruby Listen 8140 <VirtualHost *:8140> ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1 SSLEngine on SSLProtocol -ALL +SSLv3 +TLSv1 SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP SSLCertificateFile /var/lib/puppet/ssl/certs/puppet-master2.test.net.pem SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet-master2.test.net.pem #SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem #SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem # If Apache complains about invalid signatures on the CRL, you can try disabling # CRL checking by commenting the next line, but this is not recommended. #SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem SSLVerifyClient optional SSLVerifyDepth 1 # The `ExportCertData` option is needed for agent certificate expiration warnings SSLOptions +StdEnvVars +ExportCertData # This header needs to be set if using a loadbalancer or proxy RequestHeader unset X-Forwarded-For RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e DocumentRoot /etc/puppet/rack/public/ RackBaseURI / <Directory /etc/puppet/rack/> Options None AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> I have commented out the #SSLCertificateChainFile, #SSLCACertificateFile & #SSLCARevocationFile - this is not a CA server so not sure I need this. How would I get passenger to work with these? I would like to use ProxyPassMatch which I have configured as per the documentation. I don't want to specify a ca server in every puppet.conf file. I am getting this error when trying to get create a cert from a puppet client pointing to the second puppet master server (puppet-master2.test.net): [root@puppet-client2 ~]# puppet agent --test Error: Could not request certificate: Could not intern from s: nested asn1 error Exiting; failed to retrieve certificate and waitforcert is disabled On the puppet client I have this [main] server = puppet-master2.test.net What have I missed? -- update Here is a new virtual host file on my secondary puppet master. Is this correct? I have SSL turned off? 
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18 PassengerRuby /usr/bin/ruby # you probably want to tune these settings PassengerHighPerformance on PassengerMaxPoolSize 12 PassengerPoolIdleTime 1500 # PassengerMaxRequests 1000 PassengerStatThrottleRate 120 RackAutoDetect Off RailsAutoDetect Off Listen 8140 <VirtualHost *:8140> SSLEngine off ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1 # Obtain Authentication Information from Client Request Headers SetEnvIf X-Client-Verify "(.*)" SSL_CLIENT_VERIFY=$1 SetEnvIf X-SSL-Client-DN "(.*)" SSL_CLIENT_S_DN=$1 DocumentRoot /etc/puppet/rack/public/ RackBaseURI / <Directory /etc/puppet/rack/> Options None AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> Cheers, Oli
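
    One hedged diagnostic that may help narrow this down (the URL follows the standard Puppet REST layout, with production assumed as the environment): request the CA certificate through the second master exactly as an agent would, and look at what comes back. The "nested asn1 error" usually means the agent received something that is not a certificate at all, such as an HTML error page from the proxy.

        # Ask puppet-master2 for the CA certificate via the ProxyPassMatch rule.
        # A "-----BEGIN CERTIFICATE-----" block means the proxy path works; anything
        # else (an Apache error page, an empty body) would explain the asn1 failure.
        curl -k https://puppet-master2.test.net:8140/production/certificate/ca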

    Read the article

  • Magento installation problem on Nginx in Windows

    - by Nithin
    I am trying to install magento locally using nginx as the web server instead of Apache. I copied the magento folder to the html directory. When i try to call the magento folder, I get the 404 not found error. I am able to access other php files setup in the html folder and have PHP installed. Here is my config file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 8080; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; allow all; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } How do I fix this? This is what I found in the error.log file : 2011/09/06 12:22:35 [error] 5632#0: *1 "/cygdrive/c/nginx/html/magento/index.php/install/index.html" is not found (20: Not a directory), client: 127.0.0.1, server: localhost, request: "GET /magento/index.php/install/ HTTP/1.1", host: "localhost:8080"
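
    The error log is the key detail: nginx is treating the whole of /magento/index.php/install/ as a filesystem path instead of running index.php with /install/ as PATH_INFO, which Magento's installer relies on. As a hedged sketch only (standard fastcgi directives, with the paths copied from the question), a PHP location that splits the script name from the trailing path info might look like this:

        # Match /anything.php as well as /anything.php/extra/path-info.
        location ~ [^/]\.php(/|$) {
            root html;
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME c:/nginx/html$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }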

    Read the article

  • nginx: problem configuring a proxy_pass

    - by Ofer Bar
    I'm converting a web app from apache to nginx. In apache's httpd.conf I have: ProxyPass /proxy/ http:// ProxyPassReverse /proxy/ http:// The idea is the client send this url: http://web-server-domain/proxy/login-server-addr/loginUrl.php?user=xxx&pass=yyy and the web server calls: http://login-server-addr/loginUrl.php?user=xxx&pass=yyy My nginx.conf is attached below and it is not working. At the moment it looks like it is calling the server, but returning an application error. This seems promising but any attempt to debug this failed! I can't trace any of the calls as nginx refuses to place them in the error file. Also, placing echo statement on the login server did not help either which is weird. The nginx documentation isn't very helpful about this. Any suggestion on how to configure a proxy_pass? Thanks! user nginx; worker_processes 1; #error_log /var/log/nginx/error.log; error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; # # The default server # server { rewrite_log on; listen 80; server_name _; #charset koi8-r; #access_log logs/host.access.log main; #root /var/www/live/html; index index.php index.html index.htm; location ~ ^/proxy/(.*$) { #location /proxy/ { # rewrite ^/proxy(.*) http://$1 break; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_buffering off; proxy_pass http://$1; #proxy_pass "http://173.231.134.36/messages_2.7.0/loginUser.php?userID=ofer.fly%40gmail.com&password=y4HTD93vrshMNcy2Qr5ka7ia0xcaa389f4885f59c9"; break; } location / { root /var/www/live/html; #if ( $uri ~ ^/proxy/(.*) ) { # proxy_pass http://$1; # break; #} #try_files $uri $uri/ /index.php; } error_page 404 /404.html; #location = /404.html { # root /usr/share/nginx/html; #} # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { #root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; #fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name; fastcgi_param SCRIPT_FILENAME /var/www/live/html$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # Load config files from the /etc/nginx/conf.d directory include /etc/nginx/conf.d/*.conf; }
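
    Two properties of this kind of proxy_pass are worth illustrating with a hedged sketch (they may or may not account for the application error): the location regex is matched against the URI without the query string, so $1 will never contain ?user=xxx&pass=yyy unless the arguments are re-attached explicitly, and once proxy_pass contains a variable nginx resolves the upstream host at request time, which requires a resolver directive. The DNS address below is a placeholder:

        location ~ ^/proxy/(.*)$ {
            resolver 8.8.8.8;    # needed because proxy_pass uses a variable
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering off;
            # $1 is only the path part; re-attach the query string by hand.
            proxy_pass http://$1$is_args$args;
        }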

    Read the article

  • Mod_Rewrite Apache ProxyPass ?

    - by Anon
    I have two websites; OLDSITE and NEWSITE. The OLDSITE has 120 IP Address that it has with it, and the NEWSITE had 5. I want to be able to separate everything from OLDSITE and NEWSITE so they are not tied together but use them on the same linux computer. My current apache setup is this: Listen 80 NameVirtualHost * <VirtualHost *> ServerName oldsite.com ServerAdmin [email protected] DocumentRoot /var/www/ <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> RewriteEngine on RewriteCond %{HTTP_HOST} ^([^.]+)\.oldsite\.com$ RewriteCond /home/%1/ -d RewriteRule ^(.+) %{HTTP_HOST}$1 RewriteRule ^([^.]+)\.oldsite\.com/media/(.*) /home/$1/dir/media/$2 RewriteRule ^([^.]+)\.oldsite\.com/(.*) /home/$1/www/$2 </VirtualHost> <VirtualHost newsite.com> ServerName newsite.com ServerAdmin [email protected] DocumentRoot /var/newsite/ <Directory /var/newsite/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> RewriteEngine on RewriteCond %{HTTP_HOST} ^([^.]+)\.newsite\.com$ RewriteCond /home/%1/ -d RewriteRule ^(.+) %{HTTP_HOST}$1 RewriteRule ^([^.]+)\.newsite\.com/media/(.*) /home/$1/dir/media/$2 RewriteRule ^([^.]+)\.newsite\.com/(.*) /home/$1/www/$2 </VirtualHost> <VirtualHost *> ServerName panel.oldsite.com ProxyPass / http://panel.oldsite.com:10000/ ProxyPassReverse / http://panel.oldsite.com:10000/ <Proxy *> allow from all </Proxy> </VirtualHost> <VirtualHost *> ServerName panel.newsite.com ProxyPass / http://panel.newsite.com:10000/ ProxyPassReverse / http://panel.newsite.com:10000/ <Proxy *> allow from all </Proxy> </VirtualHost> I want to be able to access anything that is newsite.com and have it go to the /var/newsite unless their is a home directory...and then if its panel.newsite.com I want it to automatically do a proxypass to panel.newsite.com:10000... With this setup, it works perfect for oldsite.com.... both the proxy and the webpages... However, having the Virtualhost set to newsite.com renders the proxypass worthless. If I change the Virtualhost for the newsite.com to a wildcard, the proxypass will work but anything thats a subdomain of newsite.com won't work. so newsite.com will work, but www.newsite.com will not load correctly. I am assuming that when everything is wildcarded, then the ServerName somewhat acts like a RewriteCond and actually just applies the stuff to that URL. It uses the Virtualhost * (oldsite.com) and lets ANYTHING.oldsite.com work, but the second virtualhost * (newsite.com) only newsite.com will work... www.newsite.com will not. If I change the order of them, the opposite is true. So apparently it doesn't like me using 2 wildcards... I tried just making the Servername *.newsite.com .......but that would be too easy. I am not sure what I can do to do what I want? Perhaps I should make the ProxyPass included in the VirtualHosts and use something like: RewriteCond %{HTTP_HOST} ^panel\.newsite\.com$ [NC] RewriteRule ^(.*)$ http://panel.newsite.com:10000/ [P] ProxyPassReverse / http://panel.newsite.com:10000/ but that doesnt seem to want to login to webmin, it loads the login page but isnt working how the ProxyPass & ProxyPassReverse does.

    Read the article

  • Cisco PIX: how to add an additional block of static IP addresses for NAT?

    - by Scott Szretter
    I have a pix 501 with 5 static ip addresses. My isp just gave me 5 more. I am trying to figure out how to add the new block and then how to nat/open at least one of them to an inside machine. So far, I named a new interface "intf2", ip range is 71.11.11.58 - 62 (gateway should 71.11.11.57) imgsvr is the machine I want to nat to one of the (71.11.11.59) new ip addresses. mail (.123) is an example of a machine that is mapped to the current existing 5 ip block (96.11.11.121 gate / 96.11.11.122-127) and working fine. Building configuration... : Saved : PIX Version 6.3(4) interface ethernet0 auto interface ethernet0 vlan1 logical interface ethernet1 auto nameif ethernet0 outside security0 nameif ethernet1 inside security100 nameif vlan1 intf2 security1 enable password xxxxxxxxx encrypted passwd xxxxxxxxx encrypted hostname xxxxxxxPIX domain-name xxxxxxxxxxx no fixup protocol dns fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 no fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names ...snip... name 192.168.10.13 mail name 192.168.10.29 imgsvr object-group network vpn1 network-object mail 255.255.255.255 access-list outside_access_in permit tcp any host 96.11.11.124 eq www access-list outside_access_in permit tcp any host 96.11.11.124 eq https access-list outside_access_in permit tcp any host 96.11.11.124 eq 3389 access-list outside_access_in permit tcp any host 96.11.11.123 eq https access-list outside_access_in permit tcp any host 96.11.11.123 eq www access-list outside_access_in permit tcp any host 96.11.11.125 eq smtp access-list outside_access_in permit tcp any host 96.11.11.125 eq https access-list outside_access_in permit tcp any host 96.11.11.125 eq 10443 access-list outside_access_in permit tcp any host 96.11.11.126 eq smtp access-list outside_access_in permit tcp any host 96.11.11.126 eq https access-list outside_access_in permit tcp any host 96.11.11.126 eq 10443 access-list outside_access_in deny ip any any access-list inside_nat0_outbound permit ip 192.168.0.0 255.255.0.0 IPPool2 255.255.255.0 access-list inside_nat0_outbound permit ip 172.17.0.0 255.255.0.0 IPPool2 255.255.255.0 access-list inside_nat0_outbound permit ip 172.16.0.0 255.255.0.0 IPPool2 255.255.255.0 ...snip... 
access-list inside_access_in deny tcp any any eq smtp access-list inside_access_in permit ip any any pager lines 24 logging on logging buffered notifications mtu outside 1500 mtu inside 1500 ip address outside 96.11.11.122 255.255.255.248 ip address inside 192.168.10.15 255.255.255.0 ip address intf2 71.11.11.58 255.255.255.248 ip audit info action alarm ip audit attack action alarm pdm location exchange 255.255.255.255 inside pdm location mail 255.255.255.255 inside pdm location IPPool2 255.255.255.0 outside pdm location 96.11.11.122 255.255.255.255 inside pdm location 192.168.10.1 255.255.255.255 inside pdm location 192.168.10.6 255.255.255.255 inside pdm location mail-gate1 255.255.255.255 inside pdm location mail-gate2 255.255.255.255 inside pdm location imgsvr 255.255.255.255 inside pdm location 71.11.11.59 255.255.255.255 intf2 pdm logging informational 100 pdm history enable arp timeout 14400 global (outside) 1 interface global (outside) 2 96.11.11.123 global (intf2) 3 interface global (intf2) 4 71.11.11.59 nat (inside) 0 access-list inside_nat0_outbound nat (inside) 2 mail 255.255.255.255 0 0 nat (inside) 1 0.0.0.0 0.0.0.0 0 0 static (inside,outside) tcp 96.11.11.123 smtp mail smtp netmask 255.255.255.255 0 0 static (inside,outside) tcp 96.11.11.123 https mail https netmask 255.255.255.255 0 0 static (inside,outside) tcp 96.11.11.123 www mail www netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.124 ts netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.126 mail-gate2 netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.125 mail-gate1 netmask 255.255.255.255 0 0 access-group outside_access_in in interface outside access-group inside_access_in in interface inside route outside 0.0.0.0 0.0.0.0 96.11.11.121 1 route intf2 0.0.0.0 0.0.0.0 71.11.11.57 2 timeout xlate 0:05:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout uauth 0:05:00 absolute floodguard enable ...snip... : end [OK] Thanks!

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL Client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, setup with gitolite on https://my.server/git/) we need an exception since many git clients don't support SSL client certificates. I have succeeded in requiring client cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows: SSLCACertificateFile ssl-certs/client-ca-certs.crt <Location /> SSLVerifyClient require SSLVerifyDepth 2 </Location> # this works <Location /foo> SSLVerifyClient none </Location> # this does not <Location /git> SSLVerifyClient none </Location> I have also tried an alternative solution, with the same results: # require authentication everywhere except /git and /foo <LocationMatch "^/(?!git|foo)"> SSLVerifyClient require SSLVerifyDepth 2 </LocationMatch> In both these cases, a user without client certificate can perfectly access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works ok. The ScriptAlias problem Gitolite is setup using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias: # Gitolite ScriptAlias /git/ /path/to/gitolite-shell/ ScriptAlias /gitmob/ /path/to/gitolite-shell/ # My test ScriptAlias /test/ /path/to/test/script/ Note that /path/to/test/script is a file, not a directory, the same goes for /path/to/gitolite-shell/ My test script simply prints out the environment, super simple: #!/usr/bin/perl print "Content-type:text/plain\n\n"; print "TEST\n"; @keys = sort(keys %ENV); foreach (@keys) { print "$_ => $ENV{$_}\n"; } It seems that if I go to https://my.server/test/someLocation, that any SSLVerifyClient directives are being applied which are in Location blocks that match /test/someLocation or just /someLocation. If I have the following config: <LocationMatch "^/f"> SSLVerifyClient require SSLVerifyDepth 2 </LocationMatch> Then, the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo Note that this only seems to apply for SSL configuration. The following has no effect whatsoever on https://my.server/test/foo: <LocationMatch "^/f"> Order allow,deny Deny from all </LocationMatch> However, it does block access to https://my.server/foo. This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require a SSL client certificate. Since the /git/project URL also gets matched agains /project Location blocks, such a configuration seems impossible given my current findings. Question: Why is this happening, and how do I solve my problem? In the end, I want to require SSL Client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added). Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this more clear.

    Read the article

  • Nginx - Redirect any Subdomain to File without Rewriting

    - by Waffle
    Recently I have switched from Apache to Nginx to increase performance on a web server running Ubuntu 11.10. I have been having issues trying to figure out how certain things work in Nginx compared to Apache, but one issue has been stumping me and I have not been able to find the answer online. My problem is that I need to be able to redirect (not rewrite) any sub-domain to a file, but that file needs to be able to get the sub-domain part of the URL in order to do a database look-up of that sub-domain. So far, I have been able to get any sub-domain to rewrite to that file, but then it loses the text of the sub-domain I need. So, for example, I would like test.server.com to redirect to server.com/resolve.php, but still remain as test.server.com. If this is not possible, the thing that I would need at the very least would be something such as going to test.server.com would go to server.com/resolve.php?=test . One of these options must be possible in Nginx. My config as it stands right now looks something like this: server { listen 80; ## listen for ipv4; this line is default and implied listen [::]:80 default ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name www.server.com server.com; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ /index.html; } location /doc { root /usr/share; autoindex on; allow 127.0.0.1; } location /images { root /usr/share; autoindex off; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } server { listen 80 default; server_name *.server.com; rewrite ^ http://www.server.com/resolve.php; } As I said before, I am very new to Nginx, so I have a feeling the answer is pretty simple, but no examples online seem to deal with just redirects without rewrites or rewriting with the sub-domain section included. Any help on what to do would be most appreciated and if any one has a better idea to accomplish what I need, I am also open to ideas. Thank you very much.
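
    As a hedged sketch of the behaviour described (an internal rewrite rather than an HTTP redirect, so the browser keeps seeing test.server.com): a regex server_name with a named capture can grab the sub-domain label and hand it to resolve.php as a query argument. The parameter name sub is invented for the illustration; the fastcgi settings mirror the ones already in the config:

        server {
            listen 80;
            # Capture whatever comes before .server.com into $sub.
            server_name ~^(?<sub>.+)\.server\.com$;
            root /usr/share/nginx/www;

            location / {
                # Internal rewrite, not a 301/302, so the Host header is untouched.
                rewrite ^ /resolve.php?sub=$sub last;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    resolve.php can then read the label from $_GET['sub'], or simply parse $_SERVER['HTTP_HOST'], which is preserved either way.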

    Read the article

  • phpMyAdmin setup issues

    - by EquinoX
    I am trying to follow the tutorial here to set up the user and pass. It says there that "this section is only applicable if your MySQL server is running with --skip-show-database". First question: how do I check if the MySQL server is running with --skip-show-database? Is there any way I can access the phpMyAdmin SQL query window without logging in? Otherwise I'd have to execute this SQL from the command line. I am also getting this: Cannot load mcrypt extension. Please check your PHP configuration. I have added mcrypt.so to php.ini, and running the following commands shows that I have it. [root@DT html]# rpm -qa | grep mcrypt mcrypt-2.6.8-1.el5 php-mcrypt-5.3.5-1.1.w5 libmcrypt-2.5.8-4.el5.centos [root@DT html]# php -v PHP 5.3.5 (cli) (built: Feb 19 2011 13:10:09) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies Now when I go to phpinfo() and search for mcrypt, I can find it inside the Configure Command row ('--with-mcrypt=shared,/usr'). So, what to do next? UPDATE: I didn't put extension=mcrypt.so in php.ini as it will complain with the following: PHP Warning: Module 'mcrypt' already loaded in Unknown on line 0 Here's my nginx.conf: #user nobody; worker_processes 2; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; gzip on; server { listen 80; root /usr/share/nginx/html; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { #root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { #root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { #root /usr/local/nginx/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one location ~ /\.ht { deny all; } } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} }
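    A few hedged checks that may help here; the exact commands and file locations are assumptions for a CentOS-style setup like the one shown. skip-show-database is exposed as a server variable, so it can be checked from the mysql client without phpMyAdmin, and the "already loaded" warning suggests the mcrypt extension is already enabled through a drop-in file under /etc/php.d/ rather than php.ini.

        # does the running MySQL server have --skip-show-database? (OFF means it does not)
        mysql -u root -p -e "SHOW VARIABLES LIKE 'skip_show_database';"

        # which ini files does PHP actually read, and is mcrypt loaded?
        php --ini
        php -m | grep -i mcrypt

        # the php-mcrypt package normally installs its own loader here, which is
        # why adding extension=mcrypt.so to php.ini again warns "already loaded"
        cat /etc/php.d/mcrypt.ini

    Note that the CLI (php -v, php -m) and the FastCGI process behind nginx may read different configurations; if mcrypt shows up on the CLI but phpMyAdmin still complains, restart the FastCGI daemon listening on port 9000 and re-check phpinfo() through the browser rather than from the shell.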

    Read the article

  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows: specific configuration for (currently two) domains: server { listen 443 ssl; server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { proxy_pass http://127.0.0.1:8180; proxy_set_header Host $http_host:8180; } } default configuration for unmatched ssl connections: server { listen 443 default ssl; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { return 403; } } http configuration: server { listen 80; rewrite ^ https://$host$request_uri? permanent; } The intention is clear: redirect http traffic to https. Proxy each https:// call from phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan to apache2. This includes passing a host header, which is detected. Each unmatched SSL connection must return a 403 in nginx and not even reach apache2. On the apache2 side, I have a default site and a non-default site which will match studiotv.service.tebusco.lan: 000-default.conf file (available and enabled): <VirtualHost 127.0.0.1:8180> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. ServerName localhost ServerAdmin webmaster@localhost DocumentRoot /var/www/html <Directory /var/www/html> Order deny,allow Require all granted </Directory> </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet studiotv.conf file (available and enabled): <VirtualHost *:8180> ServerName studiotv.service.tebusco.lan ServerAdmin [email protected] DocumentRoot /var/www/studiotv <Directory /var/www/studiotv/> Options -Indexes +FollowSymLinks AllowOverride None Order deny,allow Allow from all Require all granted </Directory> # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn # We don't use ${APACHE_LOG_DIR}; instead we use /var/log/<host> ErrorLog /var/log/apache2/studiotv/error.log CustomLog /var/log/apache2/studiotv/access.log combined </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet However, when I hit the browser with http://studiotv.service.tebusco.lan, the default php page is shown instead. Question: What am I missing? (Apache 2.4.7, nginx 1.6.0, Ubuntu Server 14.04).
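    One hedged observation, not confirmed against this exact setup: the two vhosts use different address specifications, 127.0.0.1:8180 versus *:8180. Apache first selects the vhost set whose address matches the connection most specifically; the proxied request arrives on 127.0.0.1:8180, so only the 000-default vhost set is considered and name-based matching against studiotv.service.tebusco.lan never happens. Giving both vhosts the same address (and letting nginx pass a clean Host header) would be the first thing to try:

        # studiotv.conf: match the address used by 000-default.conf so both vhosts
        # take part in name-based matching (the rest of the vhost stays as above)
        <VirtualHost 127.0.0.1:8180>
            ServerName studiotv.service.tebusco.lan
            DocumentRoot /var/www/studiotv
        </VirtualHost>

        # nginx: pass the plain host name instead of appending the backend port
        location / {
            proxy_pass http://127.0.0.1:8180;
            proxy_set_header Host $host;
        }

    Alternatively, both vhosts could be declared as *:8180; the key point is that they share one address so that ServerName is actually consulted.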

    Read the article

  • MVC4 App opens with directory listing and gives a 404 for any direct URL's entered in the browser

    - by ProfK
    I've just deployed an MVC4 app that previously worked on my local IIS to IIS 7.5 on the dev server. After tweaking this and that - one knows how these things get forgotten - the app finally launches, but it shows a directory listing of the app root. Clicking on most links there works, opening the directory listing of the sub-directory. Elmah logs no errors, and /elmah.axd also gives a 404. The site has an appropriate localhost binding in the hosts file. I can find nothing wrong. MVC is installed on the server, as another MVC app works fine.
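    A couple of hedged things to check, offered as generic IIS 7.5 + MVC4 suggestions rather than the confirmed cause: a directory listing at the root usually means the extensionless-URL routing never runs, so the app pool may be on .NET 2.0 instead of 4.0, ASP.NET 4 may not be registered with IIS, or the managed routing module is not being invoked for the request. A minimal web.config fallback and the registration command look roughly like this:

        <!-- web.config: force managed modules (including URL routing) to run for all requests -->
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

        rem register ASP.NET 4 with IIS without disturbing existing script mappings
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru

    The 404 on /elmah.axd points the same way, since .axd handlers also depend on the managed pipeline being wired up for the site.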

    Read the article

  • A VS2010 Project Made From Post: How to: Host a WCF Service in a Managed Windows Service

    MSDN has a very nice article on how to create a Windows service that hosts a Windows Communication Foundation (WCF) service. It explains all the details of doing this in a step by step...

    Read the article

  • How to find web hosting that meets my requirements?

    - by John Conde
    This question is here so we can offer guidance to users who are looking for information on how to determine which web hosting solution is right for them. All future questions pertaining to finding web hosting should be closed as a duplicate of this question, as per this meta question. How to find web hosting that meets my requirements? What we're looking for in answers to this question is the basics about web hosting: What is web hosting? What is the difference between shared, VPS, and dedicated hosting? How does a content delivery network relate to web hosting? Anything else you feel is helpful in finding a web host. What we do not want is: endorsements or recommendations for specific web hosts. We do not want your experience or other subjective information (just the facts, please).

    Read the article

  • The .NET Rocks! Visual Studio 2010 Road Trip

    Carl Franklin and Richard Campbell, hosts of the .NET Rocks radio show, have decided to set off in their DotNetMobile (a 30 foot RV) to 15 US cities between April 19th and May 7th. What for, you ask? Why, to meet up with .NET developers and show off the latest and greatest in Visual Studio 2010 and .NET 4.0...

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >