Search Results

Search found 4721 results on 189 pages for 'traffic'.


  • How to block access to addresses outside network (internet)

    - by devnull
    I have a home server that is currently connected to the internet through its own network interface (ath0 - 192.168.1.x). It also has one more network interface (eth0 - 192.168.0.x). Soon I will get a second internet line that will be connected to the second network. The server will then have both networks with different internet lines available, but I only want it to connect to the internet on the old ath0 interface, not the new eth0 (192.168.0.x). The background of this setup is that the new line has a traffic volume limit while the old one hasn't, and I need the new line for all mobile devices and laptops. Those devices should be able to use the new network to connect to the internet and to the server. The home server is a Debian 6 box with iptables and some already-written rules. I now need a rule to block all outgoing internet access on the eth0 interface. I guess it could be something like --target != 192.168.0.0, but I did not succeed in finding the proper solution. Edit: found the solution:

        iptables -A OUTPUT -o eth0 -d 192.168.0.0/24 -m state --state NEW,ESTABLISHED -j ACCEPT

    With that setting, traffic using the eth0 interface is only allowed if the destination is inside the 192.168.0.x network; all other traffic is denied.
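    The quoted rule only produces that "deny" behaviour if something after it actually drops the rest. A minimal sketch of the complete pair, assuming the OUTPUT chain's default policy is ACCEPT (interface and subnet are taken from the question):

        # Allow eth0 traffic only when it stays inside the local 192.168.0.x subnet
        iptables -A OUTPUT -o eth0 -d 192.168.0.0/24 -m state --state NEW,ESTABLISHED -j ACCEPT
        # Drop everything else trying to leave via eth0 (i.e. internet-bound traffic)
        iptables -A OUTPUT -o eth0 -j DROP

    With a default-DROP policy on OUTPUT the second rule would be unnecessary, but on a default-ACCEPT chain the ACCEPT rule alone blocks nothing.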

    Read the article

  • OpenVPN / iptables restrict some access

    - by RitonLaJoie
    I want to set up an OpenVPN service on a dedicated server I have, for some friends, so that they are able to play online games faster. Is there an easy way to restrict which traffic I allow them with iptables? iptables does not seem very easy to maintain, and we can easily get locked out of our own server; rebooting into rescue mode every time I get locked out because of bad iptables rules would be a pain. As far as I understand, the tun interface provides the access. What kind of iptables rule would I have to write to restrict their access to only one IP? Also, I don't want this VPN to be the default gateway for all traffic. I guess I should go with the option of pushing a route to the clients, so that they connect to the IP of the game server through the VPN and use their regular routes through their ISP for all other traffic? As a side note, it seems OpenVPN AS is not very robust. Is there some other (commercial is OK) product that would give me the same administration options through a web interface? Is Webmin the only other solution? Thanks!
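    A minimal sketch of both halves, assuming the clients sit on OpenVPN's usual 10.8.0.0/24 tunnel subnet, eth0 is the server's public interface, and 203.0.113.10 stands in for the game server's address (all three are placeholders):

        # server.conf: push only the game server's route, instead of redirect-gateway
        push "route 203.0.113.10 255.255.255.255"

        # iptables: VPN clients may reach the game server and nothing else
        iptables -A FORWARD -i tun0 -d 203.0.113.10 -j ACCEPT
        iptables -A FORWARD -i tun0 -j DROP
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    Because these rules only touch the FORWARD chain, a mistake in them cannot lock you out of SSH on the server itself; INPUT is the chain to be careful with.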

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all the client's subdomains (*.example.com) should go through the VPN connection. I tried a couple of options:

    - Changing the order of services and setting the VPN on top, but this works the same as "Send all traffic over VPN connection".
    - Using the "VPN on Demand" option from the network advanced options, but this feature is quite rubbish to be honest. It seems to work only in Safari (?!) and it doesn't route the connection; it basically just triggers the OS to connect to the selected VPN.

    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but that doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file, but with potentially hundreds of servers (and therefore hostnames and IPs) that's a ridiculous job. And if a new server appears on the network I'd need to edit the hosts file again. Sisyphean labours. However, this works very simply on Windows: if a hostname is not available through the default network interface, it seems to try the VPN connection, and that works brilliantly. So, how can I achieve the same on a Mac? I know the client's internal DNS addresses, if that is of any help (like directing certain domains through a different DNS)? PS. Using the latest version, 10.6.6. PS2. I am using the VPN to access an intranet, version control servers (svn://), Samba shares, and for SSH access to servers.
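    One approach worth sketching, under two assumptions: the client's servers all live in one internal range (10.10.0.0/16 here is a placeholder) and resolve via an internal DNS server (10.10.0.53, also a placeholder). Mac OS X consults per-domain resolver files in /etc/resolver (see resolver(5)), so *.example.com lookups can be handed to the client's DNS while a single static route sends that range over the VPN, with no per-host bookkeeping:

        # /etc/resolver/example.com -- macOS sends *.example.com queries to this server
        nameserver 10.10.0.53

        # one route for the whole server range via the VPN interface (often ppp0 or utun0)
        sudo route add -net 10.10.0.0/16 -interface ppp0

    New servers then work automatically, as long as they stay inside the routed range and the internal DNS.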

    Read the article

  • Using iptables to make a VPN router

    - by lost_in_the_sauce
    I am attempting to make a VPN connection to a third-party VPN site, then forward traffic from my internal computers (ssh and ping for now) out to the VPN site using iptables.

        3rd Party <- (tun0/eth0) Linux VPN Box (eth1) <- Windows7TestBox

    I am running CentOS 6.3 Linux with two network connections, eth0 (public) and eth1 (private). I am running vpnc-0.5.3-4, which currently connects to my destination. When I connect I am able to ping the destination IP addresses, but that is as far as I can get:

        ping -I tun0 10.1.33.26   # success
        ping -I eth0 10.1.33.26   # fail
        ping -I eth1 10.1.33.26   # fail

    My private-network Windows 7 test box is set up to use the eth1 (private) side of my VPN server as its gateway and can ping it fine. I need iptables to send the Windows 7 traffic out the VPN tunnel. Over the past few days I have tried many different iptables configurations from this site and others; the other examples are either too simple or overly complicated. The only thing this server does is connect to the VPN and forward all traffic, so we can "flush" everything and start from scratch here. It is a blank slate.

        #!/bin/bash
        echo "Define variables"
        ipt="/sbin/iptables"
        echo "Zero out all counters"
        $ipt -Z
        $ipt -t nat -Z
        $ipt -t mangle -Z
        echo "Flush all active rules, delete all chains"
        $ipt -F
        $ipt -X
        $ipt -t nat -F
        $ipt -t nat -X
        $ipt -t mangle -F
        $ipt -t mangle -X
        $ipt -P INPUT ACCEPT
        $ipt -P FORWARD ACCEPT
        $ipt -P OUTPUT ACCEPT
        $ipt -t nat -A POSTROUTING -o tun0 -j MASQUERADE
        $ipt -A FORWARD -i eth1 -o eth0 -j ACCEPT
        $ipt -A FORWARD -i eth0 -o eth1 -j ACCEPT
        $ipt -A FORWARD -i eth0 -o tun0 -j ACCEPT
        $ipt -A FORWARD -i tun0 -o eth0 -j ACCEPT

    Again, I have tried many variations of the above and many other rules from other posts, but haven't been able to move forward. It seems like such a simple task, and yet....
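    One gap stands out in the script above: the FORWARD pairs cover eth1<->eth0 and eth0<->tun0, but the Windows box's traffic arrives on eth1 and must leave on tun0, and that pair never appears (it only passes here because the FORWARD policy is ACCEPT). A sketch of the pieces that are likely missing, assuming eth1 is the LAN side as described:

        # without this the kernel will not forward between interfaces at all
        sysctl -w net.ipv4.ip_forward=1

        # forward LAN (eth1) out through the tunnel (tun0), allow replies back in
        iptables -A FORWARD -i eth1 -o tun0 -j ACCEPT
        iptables -A FORWARD -i tun0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

    It is also worth checking (with ip route) that vpnc actually installed a route to 10.1.33.0/24 via tun0; if the only route to the destination goes via eth0, no FORWARD rule will steer LAN traffic into the tunnel.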

    Read the article

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in. These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...) I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems to not work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons:

    - Leaving a big fat open proxy like that just stinks
    - A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me
    - HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied

    Requirements:

    - Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS)
    - I have or can write a simple port forwarder client that runs under Windows (or Cygwin)

    Edit (DAG): Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection, preventing tunnelling over HTTPS, exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's better solutions available.
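    One approach that fits the requirements, sketched on the assumption that running a protocol multiplexer in front of the web server is acceptable: sslh listens on 443, sniffs the first bytes of each incoming connection, and hands SSH clients to sshd while real TLS goes to the HTTPS server moved to a local port (option names follow sslh's man page and may differ across versions):

        # server: move Apache's HTTPS listener to 127.0.0.1:4433, then run
        sslh --user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:4433

        # client: a plain ssh connection to port 443 then just works
        ssh -p 443 user@home.example.org

    The caveat is that the workplace filter may notice that the bytes on 443 are not TLS; if so, the next step is wrapping the ssh stream in real TLS (stunnel on both ends) rather than relying on the port number alone.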

    Read the article

  • How to access a Java servlet running on my PC from outside?

    - by Frank
    I used NetBeans 6.7 to write a servlet. When it runs, it opens a browser window with this address: http://localhost:8080/My_App/Test_Servlet. I replaced "localhost" with my IP address, so now it looks like this: http://192.???.1.??:8080/My_App/Test_Servlet. But I tried to access it from another computer outside my home, and it can't read anything. I wonder if I need to change the Windows Firewall settings to allow outside traffic. It's a PayPal IPN app, so I called PayPal, and they said they can't access: http://192.???.1.??:8080/My_App/Test_Servlet What on my side should I do to allow traffic from "paypal.com" to access "My_App/Test_Servlet"? Frank
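    Two separate things have to happen before PayPal can reach the servlet, and the firewall is only one of them. An address of the form 192.168.x.x (or any 192.x private range) is a LAN address that PayPal can never reach directly, so the home router must forward a public port to the PC (configured in the router's admin pages), and Windows Firewall must allow inbound connections on 8080. A sketch of the firewall half, assuming Windows Vista/7's netsh syntax ("Servlet 8080" is just a label):

        rem allow inbound TCP connections to the servlet container
        netsh advfirewall firewall add rule name="Servlet 8080" dir=in action=allow protocol=TCP localport=8080

    The IPN URL given to PayPal then needs to use the router's public IP (or a dynamic-DNS name), not the 192.x address.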

    Read the article

  • mono in production websites?

    - by sirmak
    Hi, I'm investigating the use of Mono in real-world, high-traffic web applications. There are some references on the Mono site (companies using Mono), but I couldn't find a high-traffic website example other than Deki-powered ones. And I've read some mailing-list threads about mod_mono stability problems caused by the lack of a compacting GC. If anyone is using Mono in production, please reference your app and give some info. ...or do I have to look at Java? Regards, sirmak

    Read the article

  • Firebug's "net" tab is not showing anything?

    - by Jian Lin
    I usually run Fiddler for network traffic monitoring, and now I am using a Mac. I thought Firebug's Net tab could show traffic that is fetched through AJAX (the Net tab is enabled). But if I go to google.com and type in something, Google Suggest shows a bunch of suggestions, yet Firebug's Net tab shows nothing. Why is that?

    Read the article

  • What's the BPF for HTTP?

    - by Gtker
    The definition can be seen here. A candidate answer would be tcp and dst port 80, but can tcp and dst port 80 guarantee that the traffic is HTTP, and does it include all HTTP traffic? It seems not, because some sites can be visited by specifying a port other than 80, this way: http://domain.name:8080 So my question is: what's the exact BPF for HTTP?
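    Since HTTP is not tied to any port, the closest a BPF can get is to inspect the payload itself. A sketch of the well-known idiom, shown as a tcpdump filter: it computes the TCP header length from the data-offset field and tests whether the first four payload bytes are "GET " (0x47455420 in ASCII). Matching all of HTTP would need sibling clauses for POST, HEAD, PUT, and the "HTTP/" prefix of responses:

        # match TCP segments whose payload starts with "GET ", on any port
        tcpdump -i eth0 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'

    There is no exact BPF for "all HTTP": a filter sees packets, not protocols, so port- or payload-based heuristics are as good as it gets.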

    Read the article

  • Twitter rate limit

    - by raulriera
    Hi, I am whitelisted by Twitter, and I have this "traffic heavy" application that just makes 2 requests to find out how many followers 2 people have... the traffic is currently exhausting the 150-requests-per-hour limit. How do I authenticate my requests so that Twitter knows I am whitelisted? http://api.twitter.com/1/users/show.xml?screen_name=chavezcandanga http://api.twitter.com/1/users/show.xml?screen_name=luischataing I wish to authenticate those for this simple project: http://250mil.com Thanks!
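    A sketch of the difference, with "youruser" standing in for the whitelisted account. At the time of this question, API v1 still accepted HTTP Basic authentication (later retired in favor of OAuth); an unauthenticated call is rate-limited per IP, while an authenticated one is counted against the whitelisted account:

        # unauthenticated: counted against the calling IP (150/hour)
        curl "http://api.twitter.com/1/users/show.xml?screen_name=chavezcandanga"

        # authenticated as the whitelisted account
        curl -u youruser:yourpassword "http://api.twitter.com/1/users/show.xml?screen_name=chavezcandanga"

    Since only two numbers are needed, caching the counts server-side for a few minutes would also keep the project under any limit, regardless of authentication.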

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server, WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server, WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers.

    On the other hand, a benefit of the shared-content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale.

    I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard
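    For reference, a sketch of the kind of sync job described above; every path, server name, and retry value here is a placeholder rather than the actual setup:

        rem mirror site content from WEB1's content partition to WEB2
        robocopy D:\WebContent \\WEB2\D$\WebContent /MIR /R:2 /W:5 /LOG:C:\Logs\websync.log

        rem replicate the full IIS configuration to WEB2 with msdeploy
        msdeploy -verb:sync -source:webServer -dest:webServer,computerName=WEB2

    Moving to shared UNC content removes the robocopy half of this job, but as noted above, the configuration half (msdeploy, DFS, or shared configuration) still has to be solved either way.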

    Read the article

  • Debian, 6rd tunnel, and connection troubles

    - by Chris B
    Long story short, I am having issues with IPv6 using a 6rd tunnel with my ISP, Charter Business. They offer a 6rd tunnel that I think I have properly set up, but the server doesn't reply to every IPv6 request. When the server's network interfaces are idle, with no traffic for about 10 minutes, IPv6 stops accepting inbound connections. To re-allow it, I must log into the server and make it open an outbound IPv6 connection (normally a ping) to start it back up. What's weird, though, is that if I run iptraf when it's not working, it still shows inbound IPv6 packets... the server is just not replying, and I can't figure out why. Also, if I try to access my server over IPv6 from a house about 1 mile away on the same ISP, it is never able to connect. It always times out, but again iptraf shows an inbound IPv6 packet. Again, it just does not reply. To test whether my server is accessible over IPv6, I always have to use my VZW 4G phone (they use IPv6) or ipv6proxy dot net. Here is all of the configuration information my ISP gives for their tunnel server:

        6rd Prefix = 2602:100::/32
        Border Relay Address = 68.114.165.1
        6rd prefix length = 32
        IPv4 mask length = 0

    Here is my /etc/network/interfaces for IPv6 (x's used to mask real addresses):

        auto charterv6
        iface charterv6 inet6 v4tunnel
            address 2602:100:189f:xxxx::1
            netmask 32
            ttl 64
            gateway ::68.114.165.1
            endpoint 68.114.165.1
            local 24.159.218.xxx
            up ip link set mtu 1280 dev charterv6

    Here is my iptables config:

        *filter
        :INPUT DROP [0:0]
        :fail2ban-ssh - [0:0]
        :OUTPUT ACCEPT [0:0]
        :FORWARD DROP [0:0]
        :hold - [0:0]
        -A INPUT -p tcp -m tcp --dport 22 -j fail2ban-ssh
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m multiport -j ACCEPT --dports 80,443,25,465,110,995,143,993,587,465,22
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
        -A fail2ban-ssh -j RETURN
        -A INPUT -p icmp -j ACCEPT
        COMMIT

    And last, here is my ip6tables firewall config:

        *filter
        :INPUT DROP [1653:339023]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [60141:13757903]
        :hold - [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m multiport --dports 80,443,25,465,110,995,143,993,587,465,22 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
        -A INPUT -p ipv6-icmp -j ACCEPT
        COMMIT

    So, summary:

    1. iptraf always shows IPv6 traffic, so it's always making it to the server.
    2. The server stops replying on IPv6 after no traffic for a while (10 minutes-ish) until an outbound connection is made; then the process repeats.
    3. The server is NEVER accessible via the same ISP (yet iptraf still shows the IPv6 request).

    Notes: When I try to access it from the same ISP from across town, even with iptables and ip6tables allowing ALL inbound traffic, this is what iptraf shows:

        IPv6 (92 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0
        ICMP dest unrch (port) (120 bytes) from 24.159.218.xxx to 97.92.18.xxx on eth1

    It's strange, like it's trying to forward to the LAN? (eth1 is LAN, eth0 is WAN.) This happens even with the IPv6 address set in the hosts file for the server's domain name. With iptables set up normally with the above configurations, it only says this:

        IPv6 (100 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0

    I'm REALLY stuck on this, and any help would be GREATLY appreciated.
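    For anyone checking the 6rd math: with a /32 6rd prefix and an IPv4 mask length of 0, the delegated prefix is formed by appending all 32 bits of the public IPv4 address to 2602:100::/32. A quick sketch of the derivation; the final octet (220) is a made-up stand-in for the masked xxx:

        # 24.159.218.220 -> 18 9f da dc -> 2602:100:189f:dadc::/64
        printf '2602:100:%02x%02x:%02x%02x::/64\n' 24 159 218 220

    The first half (24.159 -> 189f) matches the 2602:100:189f:xxxx::1 address configured above, which suggests the tunnel addressing itself is correct and points the investigation toward state expiry on the ISP's border relay rather than the local config.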

    Read the article

  • Network Restructure Method for Double-NAT network

    - by Adrian
    Due to a series of poor network design decisions (mostly) made many years ago in order to save a few bucks here and there, I have a network that is decidedly sub-optimally architected. I'm looking for suggestions to improve this less-than-pleasant situation. We're a non-profit with a Linux-based IT department and a limited budget. (Note: none of the Windows equipment we run does anything that talks to the Internet, nor do we have any Windows admins on staff.) Key points:

    - We have a main office and about 12 remote sites that essentially double NAT their subnets with physically-segregated switches. (No VLANing, and limited ability to do so with current switches.)
    - These locations have a "DMZ" subnet that is NAT'd on an identically assigned 10.0.0/24 subnet at each site. These subnets cannot talk to the DMZs at any other location because we don't route them anywhere except between server and adjacent "firewall".
    - Some of these locations have multiple ISP connections (T1, Cable, and/or DSL) that we manually route using IP Tools in Linux.
    - These firewalls all run on the (10.0.0/24) network and are mostly "pro-sumer" grade firewalls (Linksys, Netgear, etc.) or ISP-provided DSL modems.
    - Connecting these firewalls (via simple unmanaged switches) is one or more servers that must be publicly accessible.
    - Connected to the main office's 10.0.0/24 subnet are servers for email, tele-commuter VPN, the remote-office VPN server, and the primary router to the internal 192.168/24 subnets. These have to be accessed from specific ISP connections based on traffic type and connection source.
    - All our routing is done manually or with OpenVPN route statements.
    - Inter-office traffic goes through the OpenVPN service in the main 'Router' server, which has its own NAT'ing involved.
    - Remote sites only have one server installed at each site and cannot afford multiple servers due to budget constraints. These servers are all LTSP servers serving 5-20 terminals.
    - The 192.168.2/24 and 192.168.3/24 subnets are mostly, but NOT entirely, on Cisco 2960 switches that can do VLANs. The remainder are D-Link DGS-1248 switches that I am not sure I trust well enough to use with VLANs. There is also some remaining internal concern about VLANs, since only the senior networking staff person understands how they work.
    - All regular internet traffic goes through the CentOS 5 router server, which in turn NATs the 192.168/24 subnets to the 10.0.0.0/24 subnets according to the manually-configured routing rules we use to point outbound traffic to the proper internet connection based on '-host' routing statements.

    I want to simplify this and ready All Of The Things for ESXi virtualization, including these public-facing services. Is there a no- or low-cost solution that would get rid of the double NAT and restore a little sanity to this mess so that my future replacement doesn't hunt me down? Basic diagram for the main office:

    These are my goals:

    - Public-facing servers with interfaces on that middle 10.0.0/24 network to be moved into the 192.168.2/24 subnet on ESXi servers.
    - Get rid of the double NAT and get our entire network on one single subnet. My understanding is that this is something we'll need to do under IPv6 anyway, but I think this mess is standing in the way.

    Read the article

  • Transparent proxy which preserves client mac address

    - by A G
    I have a customer that wants to intercept SSL traffic as it leaves their network. My proposed solution is to set up a proxy that is transparent at both layer 2 and layer 3, so it can simply be dropped into their network without any configuration change required. The proxy has two NICs, one connected to the server side, the other to the client side. The client, proxy, and gateway are under the customer's control; the server is not. For example:

        client --- proxy --- gateway -|- server

    I have my proxy program configured with the IP_TRANSPARENT socket option so it can respond to connections destined for a remote IP. I am using the following setup:

        iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 3128 --tproxy-mark 1/1
        iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1
        ip rule add fwmark 1/1 table 1
        ip route add local 0.0.0.0/0 dev lo table 1

    The client in question is on its own subnet and has been configured so that the proxy is the default gateway. The result is:

    1. Client sends a frame to the proxy; source IP is client, source MAC is client, destination IP is server, destination MAC is proxy.
    2. Proxy forwards this frame to the gateway; source IP is proxy, source MAC is proxy, destination IP is server, destination MAC is gateway.
    3. Gateway forwards this to the server and gets a response back.
    4. Gateway sends the reply back to the proxy; source IP is server, source MAC is gateway, destination IP is proxy, destination MAC is proxy.
    5. Proxy forwards this reply to the client; source IP is server, source MAC is proxy, destination IP is client, destination MAC is client.

    The TPROXY and iptables configuration lets the proxy send packets with a non-local IP address. Is there a way to make something transparent at the MAC address level? That is, put the client on the same subnet as the gateway, so the gateway sees the source IP and MAC as those of the client, even though the frames originated from the proxy. Could this be done by configuring the proxy as a bridge and then using ebtables to escalate the traffic to be handled by iptables? When I use ebtables to push something up to iptables, it appears my proxy program doesn't respond to the packets, as they are destined for the gateway's MAC address, not the proxy's. What are some other potential avenues I could investigate? EDIT: When the client and gateway are on different subnets (and the client has the proxy set as its gateway), it works as described in steps 1 to 5. But I want to know if it is possible to have the client and gateway on the same subnet and have the proxy fully transparent (i.e. the client is not aware of the proxy). Thanks! EDIT 2: I can configure the proxy as a bridge using brctl, but cannot find a way to direct this traffic to my proxy program - asked here: Possible for linux bridge to intercept traffic?. Currently, with the flow numbered 1 to 5, it operates at layer 3; it is transparent on the client side (the client thinks it is talking to the server's IP), but not on the gateway side (the gateway is talking to the proxy's IP). What I want to find out is: is it possible to make this operate at layer 2, so it is fully transparent? What are the available options I should research? Thanks
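    On the bridge question specifically, a sketch of the usual recipe for bridged TPROXY interception (the same pattern appears in Squid's TPROXY documentation): in the BROUTING chain, the redirect target with --redirect-target DROP "broutes" matching frames, lifting them out of the bridge path and into the IP stack regardless of their destination MAC, where the TPROXY rules above can claim them. Port 443 is used here to match the SSL-interception goal; interface names are from the question:

        # bridge the two NICs so the box is invisible at layer 2
        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 eth1
        ip link set br0 up

        # let bridged IPv4 frames traverse iptables at all
        sysctl -w net.bridge.bridge-nf-call-iptables=1

        # lift port-443 frames out of the bridge and into the IP stack for TPROXY
        ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp --ip-destination-port 443 -j redirect --redirect-target DROP

    Because the proxy is now a bridge, frames it merely forwards keep their original source and destination MACs, which satisfies the "gateway must see the client's MAC" requirement for all non-intercepted traffic.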

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server currently has at least 2 IP addresses, one for the public interface, another for the private. The servers that have SSL websites on them have more IP addresses. We also have virtual servers that are configured similarly.

    Private network: The private range is currently just used for backups and monitoring. It's a gigabit port, and the interface usage does not usually get very high. There are other technologies we're considering that would use this port:

    - iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network)
    - VPN to get access to the private range (something I'd rather avoid)
    - dedicated database servers
    - LDAP
    - centralized configuration (like Puppet)
    - centralized logging

    We don't have any private addresses in our DNS records (only public addresses). For our servers to use the correct IP address for the right interface (and not hard-code the IP address) probably requires setting up a private DNS server (so now we add 2 different DNS entries to 2 different systems).

    Public network: Our public range has a variety of services including web, email, and FTP. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, ssh, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs on a dedicated 20Mbps link. There are a couple of legacy servers with fast-ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports. The more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now).

    IPv6: I want to get an IPv6 prefix from our ISP, so that every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now; adding the public IPv6 addresses would make it three.

    Just use IPv6? I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, I'd utilize the newly freed interfaces to create a trunk. This has the advantage that either the public or the private traffic can exceed 1Gbps if needed. The traffic for each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks: utilize QoS to ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing to make an entry for every private address. We may have private DNS (or just LDAP), but it'll be much more limited in scope, with fewer entries to duplicate.

    Summary: I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to be the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one (more trunked if needed):

    - Are there any technical disadvantages (limitations, buffers, scalability)?
    - Are there any other security considerations (aside from the firewalls mentioned above) to consider?
    - Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet?
    - Is there typical software for setting up a Linux network that doesn't have IPv6 support yet? (logging, LDAP, Puppet)
    - Something else I didn't consider?

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

New support for using tag expressions to enable advanced customer segmentation

With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g.
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site:

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • From Bluehost to WP Engine, My WordPress Story

    - by thatjeffsmith
    This is probably the longest blog post I’ve written in a LONG time. And if you’re used to coming here for the Oracle stuff, this post is not about that. It’s about my blog, and the stuff under the hood that makes it run, AKA WordPress. If you want to skip to the juicy stuff, then use these shortcuts: My Site Slowed Down; How I Moved to WP Engine; How WP Engine ‘Hooked’ Me; Why WP Engine?

    I started thatJeffSmith.com on May 28th, 2010. I had already been blogging for several years, but a couple of really smart people I respected (Andy, Brent – thanks again!) suggested that I take ownership of my content and begin building my personal brand. I thought that was a good idea, and so I signed up for service with Bluehost. Bluehost makes setting up a WordPress site very, very easy. And they continued to be easy to work with for the past 2 years. I would even recommend them to anyone looking to host their own WordPress install/site. For $83.40, I purchased a year’s worth of service and my domain name registration – a very good value. And then last year I paid $107.40 for another year’s service. And when that year expired I paid another $190.80 for an additional two years’ service in advance. I had been, up to that point, getting my money’s worth. And then, just a few weeks ago…

    My Site Slowed to a Crawl

    [Caption: That spike was from an April Fool’s Day post, I think]

    Why? Well, when I first started blogging, I had the same problem that most beginner bloggers have – not many readers. In my first year of blogging, I think the highest number of readers on a single day was about 125. I remember that day, as I was very excited to break 100! Bluehost was very reliable, serving up my content with maybe a total of 3-4 outages in the past 2 years. Support was usually very prompt with answers and solutions, and I love their ‘Chat now’ technology – much nicer than message boards only or pay-to-talk phone support. In the past 6 months, however, I noticed a couple of things: daily traffic was increasing – woohoo! – and my service was experiencing severe CPU throttling – doh! To be honest, I wasn’t aware the throttling was occurring, but I did know that the response time of my blog was starting to lag. Average load times were approaching 20-30 seconds. Not good when good sites are loading in 5 seconds or less. And just this past week, in getting ready to launch a new website for work that sucked in an RSS feed from my blog, the new page was left waiting for more than a minute. Not good! In fact my boss asked, why aren’t you blogging on Blogger? Ugh.

    I tried a few things to fix the problem: I paid for a premium WordPress theme – Themify’s Grido (thanks to @SQLRockstar for the heads-up); I installed a couple of WP caching plugins; and I read every WP optimization blog post I could get my greedy little eyes on. However, at the same time I was also getting addicted to WordPress bloggers talking about all the cool things you could do with your blog. As a result, I had at one point about 30 different plugins installed. WordPress runs on MySQL, and certain queries running via these plugins were starving for CPU. Plugins that would be called on every page load meant that as more people clicked on my site, the more CPU I needed. I’m not stupid, so I eventually figured out that maybe fewer plugins was better, and was able to go down to just 20. But still, the site was running like a dog.

    [Caption: CPU throttling makes MySQL wait to run a query]

    Bluehost runs shared servers. Your site runs on the same box that several hundred (or thousand?) other services are running on. If you take more CPU than they think you should have, they will limit your service by making you stand in line for CPU, AKA ‘throttling.’ This is not bad. This business model allows them to serve many, many users for a very fair price. It works great until, well, until it doesn’t. I noticed in the last week that for every minute of service, I was being throttled between 60 and 300 seconds. If there were 5 MySQL processes running, then every single one of them was being held in check. Blog visitors noticed this as their page requests would take a minute or more to be answered. Bluehost unfortunately doesn’t offer dedicated server hosting, so there was no real upgrade path for me to follow and remain one of their customers. So what was I to do? Uninstall every plugin and hope the site sped up? Ask people to take turns on my blog?

    I decided to spend my way out of the problem. I signed up for service with WP Engine and moved thatJeffSmith.com. The first 2 months are free, and after that it’s about $29/month to run my site on their system. My math tells me that’s a good bit more expensive than what Bluehost was charging me – to the tune of about 300% more a month. Oh, and I should just say that my blog is a personal blog, even though I talk about work stuff here. I don’t get paid for blogging, I don’t sell ads, and I don’t expense the service fees – this is my personal passion. So is it worth it? In the first 4 days, it seems to be totally worth it. Load times have gone from 20-30 seconds to less than 5 seconds. A few folks have told me via Twitter that they notice faster page loads. I anticipate this will indirectly lead to more traffic, as Google penalizes you in search results if your site is too slow, and of course some folks won’t even bother waiting more than 5-10 seconds. I noticed right away that writing posts, uploading pictures, and just using the WordPress dashboard in general was much more responsive. So writing is less of a chore now, which means I won’t have a good reason not to write!

    How I Moved to WP Engine

    I signed up for the service and registered my domain. I then took a full export of my ‘old’ site by doing an FTP GET of all my files, then did a MySQL database backup, exported my WordPress theme settings to a .zip file, and finally used the WordPress ‘Export’ feature. I then used the WordPress ‘Import’ on the new site to load up my posts. Then I uploaded the theme .zip package from Themify. Then I FTP’d the ‘wp-content’ directory up to my new server using SFTP (WP Engine only supports secure FTP – good on them!). Using a temporary URL to see my new site, I was able to confirm that everything looked mostly OK – I’ll detail the challenges and issues of fixing the content next – but then it was time to ‘flip the switch.’ I updated the IP address that the DNS lookup tables use to route traffic to my new server. In a matter of minutes the DNS servers around the world were updated, and it was time to see the new site!

    But It Was ‘Broken’

    I had never moved a website before, and in my rush to update the DNS, I had changed the records without really finding out what I was supposed to do first. After re-reading the directions provided by WP Engine and following the guidance of their support engineer, I realized I needed to set the CNAME (alias) ‘www’ record to point to a different URL than the ‘www.thatjeffsmith.com’ entry I had set. Once corrected, the site was up and running in less than a minute.

    Then It Was Only Mostly Broken

    Many of my plugins weren’t working. Apparently just FTP’ing the wp-content directory up wasn’t the proper way to re-install them – I suspect the file permissions or file ownership weren’t proper. Some plugins were working, many had their settings wiped to the defaults, and a few just didn’t work again. I had to delete the directory of each broken plugin manually via SFTP, and then use the WP Dashboard to install it from scratch. And here was my first ‘lesson’ – don’t switch the DNS records until you’ve completely tested your new site. I wasn’t able to navigate the old WP console to review my plugin settings. Thankfully I was able to use the Wayback Machine to reverse engineer some things, and of course most plugins aren’t that complicated to set up to begin with. An example of one that I had to redo from scratch is the ‘Twitter @Anywhere Plus’ plugin that I use to create the form that allows folks to tweet a post they enjoyed at the end of each story.

    How WP Engine ‘Hooked’ Me

    I actually signed up with another provider first. They ranked highly in Google searches and a few tweeps recommended them to me. But hours after signing up, I still didn’t have a server ready, and I was ready to give up on them. They offered no chat or phone support – only mail and message boards. And the message boards were rife with posts about how the service had gone downhill in the past 6 months. To their credit, they did make it easy to cancel, although I did have to do so via email as their website ‘cancel’ button was non-existent. Within minutes of activating my WP Engine account I had received my welcome message and directions on how to get started. I was able to see my staged website right away. They also did something very cool before I even got started – they looked at my existing site and told me by how much they could improve its performance. The proof is in the web pudding.

    I like this for a few reasons, but primarily I liked their business model. It told me they knew what they were doing, and that they were willing to put their money where their mouth was. This was further evident in their 60-day money-back guarantee. And if I understand it correctly, they don’t even take your money until after that 60-day period is over. After a day, I was welcomed by the WP Engine social media team, and was given the opportunity to subscribe to their newsletter and follow their account on Twitter. I noticed their Twitter team is sure to post regular WordPress tips several times a day. It’s not just an account that’s set up for the sake of having a Twitter presence. These little things add up and give me confidence in my decision to choose them as my hosting partner. ‘Partner’ – that’s a lot nicer word than just ‘service provider,’ isn’t it? Oh, and they offered me a t-shirt. Don’t ever doubt the power of a ‘free’ t-shirt!

    [Caption: How awesome is this e-mail, from a customer perspective?]

    I wasn’t really expecting any of this. Exceeding expectations before I have even handed over a single dollar seems like a pretty good business plan. This is how you treat customers. Love them to death, and they reward you with loyalty.

    But Jeff, You Skipped a Piece Here – Why WP Engine?

    I found them on one of those ‘Top 10’ list posts, and pulled up their webpage. I noticed they offered a specialized service – they host WordPress installs, and that’s it. Their servers are tuned specifically for running WordPress. They had, in bolded text, things like ‘INSANELY FAST. INFINITELY SCALABLE.’ and ‘LIGHTNING SPEED.’ And then they offered insurance against hackers, and they took care of automatic backups and restores. The only drawbacks I have noticed so far relate to plugins I used that have been ‘blacklisted.’ In order to guarantee that ‘lightning’ speed, they have banned the use of the CPU-suckiest plugins. One of those is the ‘Related Posts’ plugin. So if you are a subscriber and are reading this in your email, you’ll notice there are no links back to my blog to continue reading other related stories. Since that referral traffic is a very small, single-digit percentage for my site, I decided that I’m OK with that. I’d rather have the warp-speed page loads. Again, I think that will lead to higher traffic down the road. In 50+ days I will need to decide if WP Engine is a permanent solution. I’ll be sure to update this post when that time comes and let y’all know how it turns out.

    Read the article

  • CodePlex Daily Summary for Friday, March 23, 2012

    Popular Releases

    - SSIS Multiple Hash: Multiple Hash V1.4.1: This is a feature release. It adds the ability to select multiple rows, and change the selection of these with a single click in the check box. All lists with check boxes have this ability. It is backwards compatible with previous versions. Please ensure that you select the appropriate download file based on the version of SQL Server that you will be using. If you have used the Denali installer from V1.4 for packages that you wish to now use the SQL 2012 release version, then you MUST ma...
    - SQL Monitor - managing sql server performance: SQLMon 4.2 alpha 14: 1. Improved accuracy of logic fault checking in analysis.
    - Metodología General Ajustada - MGA: 02.02.02: John's changes: changes to the calculations of the Cash Flow, Economic Flow, and EF & ES Summary; the EF & ES Summary report is displayed in a grid; calling forms adjusted. Parmenio's changes: changes to the Programming forms.
    - MapWindow 6 Desktop GIS: MapWindow 6.1.1: MapWindow 6 Desktop GIS is an open source desktop GIS for Microsoft Windows that is built upon the DotSpatial Library. This release requires .Net 4 (Client Profile). Are you a software developer? Instead of downloading MapWindow for development purposes, get started with the DotSpatial template.
    - DotSpatial: DotSpatial 1.1: This is a minor release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions. Extended -- includes debugging symbols and additional extensions. Just want to run the software? An end user (non-programmer) version is available branded as MapWindow. Want to add your own feature? Develop a plugin using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components are available as NuGet pa...
    - Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2012.1.321.20: Major update, new Workspaces and UIAdapters. Workspaces: RadDockWorkspace, RadPageViewWorkspace, RadFormWorkspace, RadFormMdiWorkspace, RadTabbedMdiWorkspace. UI Adapters: RadCommandBarUIAdapter, RadRibbonBarUIAdapter, RadTreeNodeUiAdapter, RadTreeViewUIAdapter, RadItemCollectionUIAdapter (RadMenu, RadStatusStrip, all controls that support RadItem collections).
    - Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: Microsoft All-In-One Code Framework Coding Guideline. This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.
    - WebDAV for WHS: Version 1.0.67: Added: check whether the Remote Web Access is turned on or not; Added: check for Add-In updates.
    - Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: The March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. See the following for the list of main improvements. New features: Phalanger Tools installable for Visual Studio 2011 Beta; "filter" extension with several most-used filters implemented; DomDocument HTML parser, loadHTML() method; mail() PHP-compatible function; PHP 5.4 T_CALLABLE token; PHP 5.4 "callable" type hint; PCRE: UTF32 characters in range support; configuration supports <c...
    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: internationalization; custom authentication provider; access control list for forums and threads. Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01. Visit Roadmap for more details.
    - BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new: Analysis Services Tabular Smart Diff; Tabular Actions Editor; Tabular HideMemberIf; Tabular Pre-Build ...
    - Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build. New feature - JsonTextReader automatically reads ISO strings as dates. New feature - added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default. New feature - added DateTimeZoneHandling to control reading and writing DateTime time zone details. New feature - added async serialize/deserialize methods to JsonConvert. New feature - added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...
    - SCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with the following changes since the last version: fixed a bug where ping and cmd.exe kept running in an endless loop after action progress was finished; fixed update checking from the Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – the tool itself. Cmdkey.exe – a command line tool for managing cached credentials. This is needed for the alternate credentials feature when running the HTA...
    - WebSocket4Net: WebSocket4Net 0.5: Changes in this release: fixed the wss default port bug; improved JsonWebSocket; supported setting the client access policy protocol for Silverlight; fixed a handshake issue in Silverlight; fixed a bug where the "Host" field in the handshake didn't contain the port if the port is not the default; supported passing in an Origin parameter for handshaking; supported reacting to pings from the server side; fixed a bug in data sending; fixed the bug sending a closing handshake with no message which would cause an excepti...
    - SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing handshake queue checking; improved JSON subprotocol; supported sending ping from server to client; fixed a bug about sending a closing handshake with no message; refactored the code to improve protocol compatibility; fixed a bug about sub protocol configuration loading in Mono; improved BasicSubProtocol; added JsonWebSocketSession.
    - Survey™ - web survey & form engine: Survey™ 2.0: The new stable Survey™ Project 2.0.0.1 version contains many new features. Technical changes: use of Jquery, ASTreeview, Tabs, Tooltips and a new menu provider. Features & bugfixes: survey list and search function; folder structure for surveys; new menu structure; library list; new library fields; user list and search functions; layout options for a survey with CSS, page header and footer; new IP filter security feature; enhanced token management; new question fields as ID, Alias...
    - Speed up Printer migration using PrintBrm and it's configuration files: BRMC.EXE: Run the tool from the extracted directory of the printbrm backup. You can use the following command to extract a backup file to a directory – PRINTBRM.EXE -R -D C:\TEMP\EXPAND -F C:\TEMP\PRINTERBACKUP.PRINTEREXPORT
    - AppBarUtils for Windows Phone SDK 7.1: AppBarUtils 1.2: This release contains an IconUri dependency property for both AppBarItemCommand and AppBarItemTrigger, as requested by shawnoster at http://appbarutils.codeplex.com/discussions/321745. When using this IconUri dependency property, please be sure to set the Type property to AppBarItemType.Button or just omit this property entirely, because it is only for app bar icon buttons. The demo has been updated to show how to use this new IconUri dependency property with a new lock button on the app bar. Wh...
    - Offline Navigation for Windows Phone 7: 0.1 Alpha: This is the 0.1 alpha release of source code.
    - SmartNet: V1.0.0.0: DY SmartNet ?????? V1.0

    New Projects

    - 2Sexy Content for DotNetNuke - great looking and animated content: 2Sexy Content is a DotNetNuke extension to create attractive, designed content. It solves the common problem by allowing the web designer to create designed templates for different content elements, so that the user must only fill in fields and receive a perfectly designed and animated output.
    - AgileMapper: AgileMapper????????????????????,?????????dto?do???????
    - AksiMata: The latest news and current events around you... straight from the scene!
    - Caprice: Engineering adaptive privacy: Caprice is a tool aimed at supporting software engineers in the design of applications that appropriately adapt their behaviour to mitigate privacy threats. This tool helps provide software engineers design-time insight on the functional behaviour of a system and associated runtime context changes that can threaten privacy.
    - Change default Share-site group SharePoint Online (Office 365): By default, when we share a site collection or site with external users, SharePoint Online shows the default SharePoint groups, which are Visitors and Members. By using this feature, you will get a link which you can use to customize the default groups to your custom groups and other default groups.
    - CodePlex Test Project: This is just to test how well CodePlex handles Git.
    - Find Work Items For Source Items (Visual Studio Extension): work4source is a Visual Studio 2010 tool window which finds and lists all TFS work items associated with a specific source control file or folder, avoiding the need to view the details for every changeset. It also includes "versioned item" links which otherwise cannot be found from the source side of the link using Visual Studio. To install, run the VSIX package, restart Visual Studio, then select View --> Other Windows --> "Find Work Items for Source Items" and either leave the window ...
    - FSProject: FSProject
    - Git c9 Test: Testing git to c9 integration possibilities.
    - Image 3D Viewer: An image 3D viewer built with WPF.
    - Jakarta Guide: Jakarta news featuring up-to-date information on attractions, hotels, restaurants, nightlife, travel tips and more.
    - LinqLucene: Due to the fact that the original LinqToLucene project seems to have died, I have created this new project to carry on its work.
    - LuskyCode: Just some code.
    - maouidatest: test
    - Marktplace: NLocalize Marketplace
    - MemoryLifter: MemoryLifter - the fastest way to memorize - is a virtual flashcard system, scientifically based on the Leitner card box algorithm. It enables the user to lift any kind of information into long-term memory and maximizes study efficiency with automation and controlled repetition.
    - MemoryTributary - A replacement for MemoryStream: MemoryTributary is a replacement for MemoryStream that uses multiple memory chunks as its backing store, as opposed to the single byte array used by MemoryStream. The result is it can handle much larger streams and the initial allocations are more efficient. It's developed in C#.
    - my career muse: Project configured for simultaneous use with other developers.
    - MyMusicBox: MBDS Web 2.0 mini-project, Michel Buffa.
    - Nucleo.NET ORM: These components provide a unit of work interface that works with LINQ to SQL, Entity Framework, and Entity Framework Code First, with more ORMs to come. The idea is to create one common wrapper and base framework to make it easier to work with the various ORM products.
    - Orchard QnA: A lightweight discussions module for the Orchard CMS.
    - Orchard Shoutbox: A lightweight shoutbox module for the Orchard CMS.
    - PAIN: My projects from PAIN labs, semester 2012L, department of electronics of Warsaw University of Technology.
    - Reactifier: This project is for the windows 8
    - Shoutcast MediaStreamSource: Shoutcast MediaStreamSource is a MediaStreamSource implementation of the Shoutcast protocol for Silverlight. This MediaStreamSource allows both Silverlight 4+ OOB and Windows Phone 7 applications to consume a Shoutcast stream using a MediaElement. Currently, Mp3 and AAC+ Shoutcast streams are supported on Windows Phone. However, ONLY Mp3 is supported on Desktop Silverlight. There is also limited (i.e. somewhat untested) M3u and PLS playlist support. Please report any issues playing...
    - Simple Task Manager: Politechnika Wroclawska Team Project - 2012
    - SkyGo Media Commander: SkyGo Media Commander lets you use the keyboard arrow keys to change channels and the Enter key to open the "Remote Control" menu. Useful if you use a remote control. Works perfectly with the CIR remote control of the HP DV6 2137el notebook.
    - sunshine Design: sunshine
    - testing: testing project
    - testtom032012git01: testtom032012git01
    - testtom03222012hg01: testtom03222012hg01
    - testtom03222012tfs01: testtom03222012tfs01
    - testtom03222012tfs02: testtom03222012tfs02
    - TileSet Map Editor: A small side project of a tileset map editor.
    - Traffic Light Simulation Application: The Traffic Light Simulation application simulates traffic on a 2D plane under different traffic light control schemes. The interface will display a 10x10 grid where each street allows one-way traffic and there is one traffic light at each intersection.
    - Type08ScreenCapture: Type08ScreenCapture brings a Windows 8-like desktop capture function to Windows 7. If you type [Windows]+[PrintScreen], it captures the main display area and saves the bitmap to your picture folder. It's developed in C#/WPF.
    - WikiPlex – a Regex Wiki Engine: A regular-expression-based wiki engine allowing developers to integrate a wiki experience into an existing .NET application.
    - WindowsQR: Windows QR is a proof of concept about capturing a QR code using a webcam in a WPF application running on a desktop computer with Windows 7 or similar. It uses AForge.Vision to wrap the DirectShow complexity and the zxing library to decode the QR code.
    - Working with Social Data: 1) Customizing tag cloud by account name. 2) Rating.
    - WPF 3D-Model Viewer: WPF 3D-Model Viewer

    Read the article

  • Cisco 800 series won't forward port

    - by sam
    Hello ServerFault, I am trying to forward port 444 from my Cisco router to my web server (192.168.0.2). As far as I can tell, my port forwarding is configured correctly, yet no traffic will pass through on port 444. Here is my config:

    !
    version 12.3
    service config
    no service pad
    service tcp-keepalives-in
    service tcp-keepalives-out
    service timestamps debug uptime
    service timestamps log uptime
    service password-encryption
    no service dhcp
    !
    hostname QUESTMOUNT
    !
    logging buffered 16386 informational
    logging rate-limit 100 except warnings
    no logging console
    no logging monitor
    enable secret 5 -removed-
    !
    username administrator secret 5 -removed-
    username manager secret 5 -removed-
    clock timezone NZST 12
    clock summer-time NZDT recurring 1 Sun Oct 2:00 3 Sun Mar 3:00
    aaa new-model
    !
    aaa authentication login default local
    aaa authentication login userlist local
    aaa authentication ppp default local
    aaa authorization network grouplist local
    aaa session-id common
    ip subnet-zero
    no ip source-route
    no ip domain lookup
    ip domain name quest.local
    !
    no ip bootp server
    ip inspect name firewall tcp
    ip inspect name firewall udp
    ip inspect name firewall cuseeme
    ip inspect name firewall h323
    ip inspect name firewall rcmd
    ip inspect name firewall realaudio
    ip inspect name firewall streamworks
    ip inspect name firewall vdolive
    ip inspect name firewall sqlnet
    ip inspect name firewall tftp
    ip inspect name firewall ftp
    ip inspect name firewall icmp
    ip inspect name firewall sip
    ip inspect name firewall fragment maximum 256 timeout 1
    ip inspect name firewall netshow
    ip inspect name firewall rtsp
    ip inspect name firewall skinny
    ip inspect name firewall http
    ip audit notify log
    ip audit po max-events 100
    ip audit name intrusion info list 3 action alarm
    ip audit name intrusion attack list 3 action alarm drop reset
    no ftp-server write-enable
    !
    crypto isakmp policy 1
     authentication pre-share
    !
    crypto isakmp policy 2
     encr 3des
     authentication pre-share
     group 2
    !
    crypto isakmp client configuration group staff
     key 0 qS;,sc:q<skro1^,
     domain quest.local
     pool vpnclients
     acl 106
    !
    crypto ipsec transform-set tr-null-sha esp-null esp-sha-hmac
    crypto ipsec transform-set tr-des-md5 esp-des esp-md5-hmac
    crypto ipsec transform-set tr-des-sha esp-des esp-sha-hmac
    crypto ipsec transform-set tr-3des-sha esp-3des esp-sha-hmac
    !
    crypto dynamic-map vpnusers 1
     description Client to Site VPN Users
     set transform-set tr-des-md5
    !
    crypto map cm-cryptomap client authentication list userlist
    crypto map cm-cryptomap isakmp authorization list grouplist
    crypto map cm-cryptomap client configuration address respond
    crypto map cm-cryptomap 65000 ipsec-isakmp dynamic vpnusers
    !
    interface Ethernet0
     ip address 192.168.0.254 255.255.255.0
     ip access-group 102 in
     ip nat inside
     hold-queue 100 out
    !
    interface ATM0
     no ip address
     no atm ilmi-keepalive
     dsl operating-mode auto
    !
    interface ATM0.1 point-to-point
     pvc 0/100
      encapsulation aal5mux ppp dialer
      dialer pool-member 1
    !
    interface Dialer0
     bandwidth 640
     ip address negotiated
     ip access-group 101 in
     no ip redirects
     no ip unreachables
     ip nat outside
     ip inspect firewall out
     ip audit intrusion in
     encapsulation ppp
     no ip route-cache
     no ip mroute-cache
     dialer pool 1
     dialer-group 1
     no cdp enable
     ppp pap sent-username -removed- password 7 -removed-
     ppp ipcp dns request
     crypto map cm-cryptomap
    !
    ip local pool vpnclients 192.168.99.1 192.168.99.254
    ip nat inside source list 105 interface Dialer0 overload
    ip nat inside source static tcp 192.168.0.2 444 interface Dialer0 444
    ip nat inside source static tcp 192.168.0.51 9000 interface Dialer0 9000
    ip nat inside source static udp 192.168.0.2 1433 interface Dialer0 1433
    ip nat inside source static tcp 192.168.0.2 1433 interface Dialer0 1433
    ip nat inside source static tcp 192.168.0.2 25 interface Dialer0 25
    ip classless
    ip route 0.0.0.0 0.0.0.0 Dialer0
    ip http server
    no ip http secure-server
    !
    ip access-list logging interval 10
    logging 192.168.0.2
    access-list 1 remark The local LAN.
    access-list 1 permit 192.168.0.0 0.0.0.255
    access-list 2 permit 192.168.0.0
    access-list 2 remark Where management can be done from.
    access-list 2 permit 192.168.0.0 0.0.0.255
    access-list 3 remark Traffic not to check for intrustion detection.
    access-list 3 deny 192.168.99.0 0.0.0.255
    access-list 3 permit any
    access-list 101 remark Traffic allowed to enter the router from the Internet
    access-list 101 permit ip 192.168.99.0 0.0.0.255 192.168.0.0 0.0.0.255
    access-list 101 deny ip 0.0.0.0 0.255.255.255 any
    access-list 101 deny ip 10.0.0.0 0.255.255.255 any
    access-list 101 deny ip 127.0.0.0 0.255.255.255 any
    access-list 101 deny ip 169.254.0.0 0.0.255.255 any
    access-list 101 deny ip 172.16.0.0 0.15.255.255 any
    access-list 101 deny ip 192.0.2.0 0.0.0.255 any
    access-list 101 deny ip 192.168.0.0 0.0.255.255 any
    access-list 101 deny ip 198.18.0.0 0.1.255.255 any
    access-list 101 deny ip 224.0.0.0 0.15.255.255 any
    access-list 101 deny ip any host 255.255.255.255
    access-list 101 permit tcp 67.228.209.128 0.0.0.15 any eq 1433
    access-list 101 permit tcp host 120.136.2.22 any eq 1433
    access-list 101 permit tcp host 123.100.90.58 any eq 1433
    access-list 101 permit udp 67.228.209.128 0.0.0.15 any eq 1433
    access-list 101 permit udp host 120.136.2.22 any eq 1433
    access-list 101 permit udp host 123.100.90.58 any eq 1433
    access-list 101 permit tcp any any eq 444
    access-list 101 permit tcp any any eq 9000
    access-list 101 permit tcp any any eq smtp
    access-list 101 permit udp any any eq non500-isakmp
    access-list 101 permit udp any any eq isakmp
    access-list 101 permit esp any any
    access-list 101 permit tcp any any eq 1723
    access-list 101 permit gre any any
    access-list 101 permit tcp any any eq 22
    access-list 101 permit tcp any any eq telnet
    access-list 102 remark Traffic allowed to enter the router from the Ethernet
    access-list 102 permit ip any host 192.168.0.254
    access-list 102 deny ip any host 192.168.0.255
    access-list 102 deny udp any any eq tftp log
    access-list 102 permit ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255
    access-list 102 deny ip any 0.0.0.0 0.255.255.255 log
    access-list 102 deny ip any 10.0.0.0 0.255.255.255 log
    access-list 102 deny ip any 127.0.0.0 0.255.255.255 log
    access-list 102 deny ip any 169.254.0.0 0.0.255.255 log
    access-list 102 deny ip any 172.16.0.0 0.15.255.255 log
    access-list 102 deny ip any 192.0.2.0 0.0.0.255 log
    access-list 102 deny ip any 192.168.0.0 0.0.255.255 log
    access-list 102 deny ip any 198.18.0.0 0.1.255.255 log
    access-list 102 deny udp any any eq 135 log
    access-list 102 deny tcp any any eq 135 log
    access-list 102 deny udp any any eq netbios-ns log
    access-list 102 deny udp any any eq netbios-dgm log
    access-list 102 deny tcp any any eq 445 log
    access-list 102 permit ip 192.168.0.0 0.0.0.255 any
    access-list 102 permit ip any host 255.255.255.255
    access-list 102 deny ip any any log
    access-list 105 remark Traffic to NAT
    access-list 105 deny ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255
    access-list 105 permit ip 192.168.0.0 0.0.0.255 any
    access-list 106 remark User to Site VPN Clients
    access-list 106 permit ip 192.168.0.0 0.0.0.255 any
    dialer-list 1 protocol ip permit
    !
    line con 0
     no modem enable
    line aux 0
    line vty 0 4
     access-class 2 in
     transport input telnet ssh
     transport output none
    !
    scheduler max-task-time 5000
    !
    end

    Any ideas? :)

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization:

    Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD web server on TCP port 80 – or, if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are "multiplexed" onto a single well-known TCP port, that is port 443, the HTTPS port. This is also known as single-port firewall traversal. This technique takes advantage of the fact that, as a well-known service, port 443 is usually "open", allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ: accepting, authenticating, and terminating incoming client connections, and then re-encrypting the connections and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in the network; it connects to the gateway only. The gateway takes care of those internal network details.

    The Secure Gateway supports the same single-port firewall capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".
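    To make the single-port multiplexing idea above concrete, here is a toy C# sketch (my illustration only, not SGD's actual implementation) of a listener that routes each incoming connection by peeking at its first bytes: HTTP-looking traffic goes to the web server, anything else to the AIP engine. The back-end host names and ports are hypothetical, and TLS termination is assumed to have already happened upstream:

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;

    class SinglePortDemux
    {
        static async Task Main()
        {
            var listener = new TcpListener(IPAddress.Any, 443);
            listener.Start();
            while (true)
            {
                TcpClient client = await listener.AcceptTcpClientAsync();
                _ = RouteAsync(client); // handle each connection independently
            }
        }

        static async Task RouteAsync(TcpClient client)
        {
            using (client)
            using (var upstream = new TcpClient())
            {
                NetworkStream downstream = client.GetStream();
                var peek = new byte[4];
                int read = await downstream.ReadAsync(peek, 0, peek.Length);
                string prefix = Encoding.ASCII.GetString(peek, 0, read);

                // "GET ", "POST", "HEAD", "PUT " look like HTTP; treat anything else as AIP here.
                bool isHttp = prefix.StartsWith("GET") || prefix.StartsWith("POS")
                           || prefix.StartsWith("HEA") || prefix.StartsWith("PUT");
                string host = isHttp ? "sgd-webserver" : "sgd-aip-engine"; // hypothetical back ends
                int port = isHttp ? 8080 : 3144;

                await upstream.ConnectAsync(host, port);
                NetworkStream up = upstream.GetStream();
                await up.WriteAsync(peek, 0, read); // replay the bytes consumed while peeking
                await Task.WhenAny(downstream.CopyToAsync(up), up.CopyToAsync(downstream));
            }
        }
    }

    The real product adds authentication, re-encryption and load balancing on top of this basic idea, but the routing decision is the essence of single-port traversal.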

    Read the article

  • Perfect End to a Bad Day

    - by TehGrumpyCoder
    Yesterday's post about A Bad Day at Work actually had an addendum to it. There were apparently a bunch of guys on ice skates last night competing in some sport way the hell and gone over on the other side of the valley, and enough people couldn't live without seeing them that they had all major arteries heading west honked. I mean honked... the traffic guy reported the 101 had 16 miles of backup... yikes. Since I worked downtown for a number of years, my fallback is to cut across the city on surface streets to get to one of my old 'haunts' and just drive it home from there. Of course with the 101 backed up, then I-17 would logically be as well, so I kept the news on rather than my Zune and heard where the bad stuff was going north. I popped out on the freeway about 7 miles south of my exit. Got to the exit, which is about a mile from the house, without killing or maiming me or anyone else. Waited patiently at the light in the inside lane to make a left and go under the freeway proceeding west. The light changed, I had full green, I started through and whoa... I've got someone in a little rat car crossing my bow!

    A little explanation... I drive a 3/4 ton pickup with a V-10, extended cab and shell on the back. It's not jacked up, but it sits up pretty good and is longer than any parking place I've ever tried to put it into. I consider this truck to be the consolation prize for paying uninsured motorist coverage for 45 years and having Pilar Martinez totally destroy a 3/4 ton Silverado on March 1, 2007 by plowing into me at traffic speed while I was stopped at a light. If you pay for uninsured motorist coverage, ask your insurance agent *exactly* what that means... I bet it's different than what you think it means. But I digress, sorry...

    So here I am with a car that is shorter from top to road than the hood on my truck, and the driver thought it would be safe to run a red light and see if they could get past me before I got into the lane. The right side of my front bumper was almost into the driver's window when I hit the brakes and wheeled it left. Fortunately for all involved, I saw it soon enough, and pulled into the 2nd lane for making a left to go back south. I looked in my mirror, signalled a move, then moved over behind the yuck in the rat car. I then punched it, and the future hood ornament and I both made it through the next light. I pulled alongside to let her know that she was DEFINITELY Number 1 in my book, and it's a middle-aged woman looking at me with a "sorry, it was an accident" show of pouty face and arms held up. Tough $hit lady... that may have worked when you were 18, but it's not working anymore, and it wasn't an accident... you ran a freakin' red light and almost got yourself killed.

    That just about put a bow on the day... I was home later than usual, pissed off about work stuff, pissed off at traffic, and now that. I ate dinner, watched a little TV, and was asleep about 9:30, exhausted. Hope today is better.

    Read the article

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single-core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic. I don't really monitor it closely, but Google Adsense usually records close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment.

    The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again.

    The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all. I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "Dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "Static" has also been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. But I can't leave the server unattended, because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50; it doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare-servers values. I have also set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference.

    According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like:

    WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating

    and:

    WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start

    I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did: "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on. But it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!

    Read the article

  • RDP through TCP Proxy

    - by johng100
    Hi, first time on Stack Overflow, and I'm hoping someone can help me. I'm looking at a proof of concept to pass RDP traffic through a TCP proxy/tunnel which will pass through firewalls using HTTPS. The problem has to do with deploying images to machines, so it can't be assumed that the .NET framework will be present; C++ is therefore being used at the deployment end of a connection.

    The basic system I have at present is a program which listens for client connections on a port, then passes any data to a WCF service which stores it as a byte array. A deployment machine (using gSOAP and C++) polls the WCF service for messages and, if it finds them, passes the data on to the target server process via sockets. I know this sounds horrible, but it works for simple test client and server programs passing data through this WCF/C++/C# proxy layer. However, I have to support traffic from RDP, VNC and possibly others, so I need a transparent proxy to do this and am wondering whether the above approach is worth pursuing. I've read up on SSH tunneling and that seems a possibility.

    My basic question is: is it possible to tunnel RDP traffic over HTTPS using custom code? Thanks, John
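    For what it's worth, the core of any transparent TCP proxy is just a bidirectional byte pump that never inspects the payload, so tunneling RDP in custom code is certainly feasible. Below is a minimal C# sketch of that byte pump; the listen port and target host are hypothetical, and the HTTPS leg is omitted (in practice you would wrap the outer side in an SslStream or terminate it at a web proxy):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class TcpTunnel
    {
        static async Task Main()
        {
            var listener = new TcpListener(IPAddress.Any, 8443); // hypothetical listen port
            listener.Start();
            Console.WriteLine("Tunnel listening on 8443, forwarding to RDP on 3389...");

            while (true)
            {
                TcpClient inbound = await listener.AcceptTcpClientAsync();
                _ = HandleAsync(inbound); // one pump per connection
            }
        }

        static async Task HandleAsync(TcpClient inbound)
        {
            using (inbound)
            using (var outbound = new TcpClient())
            {
                await outbound.ConnectAsync("target-server", 3389); // hypothetical RDP target
                NetworkStream a = inbound.GetStream();
                NetworkStream b = outbound.GetStream();
                // Pump bytes in both directions until either side closes.
                await Task.WhenAny(a.CopyToAsync(b), b.CopyToAsync(a));
            }
        }
    }

    Because the pump copies raw bytes, the same code carries RDP, VNC or anything else; the store-and-poll WCF layer described above would replace the direct ConnectAsync with its own transport.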

    Read the article
