Search Results

Search found 14008 results on 561 pages for 'easy marks'.


  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it makes multiple HTTP connections to the server in serial, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds).

    What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (it's not the same as the total number of concurrently connected HTTP clients). The method I've come up with is to track HTTP requests by originating IP address and store each IP somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, we check the total number of unexpired stored IP addresses; if that's less than the maximum, we allow the request through. A request whose IP is already in the look-up table (and not yet expired) is allowed through as well. Every allowed request has its IP added to the table (if not there before) and its TTL refreshed to 20 seconds.

    I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx lua module to simplify the conditional branching), using Redis to store my IP addresses with a TTL (the SETEX command) and checking the table size with the DBSIZE command. This worked, but the performance was horrible: nginx and redis ended up using lots of CPU, and the machine could only handle a very small number of concurrent requests.

    The new stick-table and tracking counters added to HAProxy in version 1.5 (via a commission from Server Fault) seem like they might be ideal for implementing exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick-table, which would be necessary to know the number of connected web clients. I'm curious whether anyone has suggestions for nginx, for haproxy, or for something else not mentioned here that I haven't thought of yet.
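    A minimal sketch of the HAProxy 1.5 side of what I mean (directive names from the 1.5 docs; the frontend name, sizes, and socket path are placeholders, and I haven't tested this). If the "used:" field printed in the header of "show table" on the admin socket counts live entries, that may be exactly the number I'm after:

        # haproxy.cfg - hedged sketch, not a tested config
        frontend fe_web
            bind *:80
            stick-table type ip size 100k expire 20s store conn_cur
            tcp-request connection track-sc1 src
            default_backend be_app

        # count unexpired entries via the admin socket (path is an assumption):
        #   echo "show table fe_web" | socat unix-connect:/var/run/haproxy.sock stdio
        # first line of output looks like:
        #   # table: fe_web, type: ip, size:102400, used:37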

  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
    Currently our documents are all hosted on a Windows 7 box. Users access the files over a Windows share, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility, and users can access previous versions of a file (from the backups) using the Windows Explorer "previous versions" feature. This setup is currently working well, except for the following:

    We would prefer to have access to hourly versions of a file, not daily. The previous-versions mechanism is tied to the backup mechanism: Windows 7 performs a full backup every week and an incremental backup every day, and the previous versions of a file are simply whatever is available in the backups. If you have 20GB of documents and want to maintain at least three (3) years of history, you will use at minimum 3 years * 52 weeks * 20GB, or about 3TB, even if there are few changes in the documents. That's a pretty inefficient use of space. Looking up previous versions of a file is also very slow (tens of minutes), probably for the same reason: Windows has to traverse all of its backups.

    I am considering using SVN plus TortoiseSVN with autocommit/autoupdate, as sketched below. It would have the following advantages: backups are easy and also capture the whole history of each document (just back up the repository); previous versions can be created frequently (I think an svn commit/update could be done every two minutes or so); and users can sync over the net.

    However, I can see the following issues: more conflicts than in the original setup, because multiple users can now edit the same file even when both are online, i.e. able to connect to the SVN repo (users can of course lock a file first before editing, but that means they have to adjust); and a delay in the propagation of file changes. With Windows 7 file sharing, changes made by one online user are instantaneously available to other online users; with the SVN setup, changes will only be propagated when the users execute the svn add/commit/update sequence, so the delay will probably be a few minutes. This workflow will no longer work: "Hi, I just edited document X, can you have a quick look?"

    I would like to ask the opinion of the community for alternative setups, or improvements on the above setup to work out the kinks.
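    This is roughly the autocommit loop I have in mind (a hedged sketch: D:\Docs, the schedule, and the commit message are placeholders; it assumes the Subversion command-line client is on the PATH and the script is run by Task Scheduler every couple of minutes):

        :: autocommit.bat - sketch only, not production-ready
        @echo off
        cd /d D:\Docs
        rem pick up new files, commit local edits, then pull other users' changes
        svn add . --force --quiet
        svn commit -m "autocommit %date% %time%" --non-interactive
        svn update --non-interactive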

  • Looking for a new, free firewall (Sunbelt has a huge hole)

    - by Jason
    I've been using Sunbelt Personal Firewall v4.5 (previously Kerio). I've discovered that blocking Firefox connections in the configuration doesn't stop EXISTING Firefox connections (see my post here yesterday: http://superuser.com/questions/132625/sunbelt-firewall-4-5-wont-block-firefox). The "stop all traffic" option may work on existing connections, but I'm done testing, as I need to be able to be selective at any time. I was using the free version, so the "web filtering" option quit working after some time (it mostly blocked ads and popups), but I didn't use that anyway. I used the last free version of Kerio before finally having to go to Sunbelt, because Kerio had an unfixed bug where you'd eventually get the BSOD and have to reset Kerio's configuration and start over (configure everything again).

    So I'm looking for a new firewall. I don't like ZoneAlarm at all (no offense to all its users who may be here - personal taste). I need the following (Sunbelt has all these, except the items marked *):

    1. Be able to block in/out to localhost (trusted)/internet selectively for each application with a click (so there are 4 click boxes for each application) [*and that affects everything immediately, regardless of what's already connected]. When a new application attempts a connection, you get an allow/deny/remember window.
    2. Be able to easily set up filter rules for 'individual application'/'all applications' by protocol, port/address (range), local, remote, in, out. [*Adding a filter rule also doesn't block existing connections in Sunbelt. That needs to work too.]
    3. Have an easy-to-get-to way to "stop all traffic" (like a right-click option on the running icon in the task bar).
    4. Be able to set trusted/internet in/out block/allowed (4 things per item) for each of IGMP, ping, DNS, DHCP, VPN, and broadcasts.
    5. Define localhost as trusted/untrusted, and define adapter connections as trusted/untrusted.
    6. Block incoming connections during boot-up and shutdown.
    7. Show existing connections, including local & remote IP/port, protocol, current speed, total bytes transferred, and local ports opened for listening.
    8. An Intrusion Prevention System which blocks (optionally selecting each one) known intrusions (a long list).
    9. Block/allow applications from starting other applications (deny/allow/remember window).

    Wish list: a way of knowing what svchost.exe is doing - who is actually using it/calling it. I allowed it for localhost, and selectively allowed it for internet each time the allow/deny window came up.

    Thanks for any help/suggestions. (I'm using Windows XP SP3.)
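    (For the svchost wish-list item, I do know one stock Windows XP command that at least maps each svchost.exe PID to the services hosted inside it - matching a firewall prompt to a PID is still manual, though:)

        tasklist /svc /fi "imagename eq svchost.exe"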

  • Enterprise IPv6 Migration - End of proxypac ? Start of Point-to-Point ? +10K users

    - by Yohann
    Let's start with a diagram: we have a "typical" IPv4 company network with:

    - Internet access through a proxy
    - Access to other companies through a dedicated proxy
    - Direct access to local resources

    All computers have a proxy.pac file that indicates which proxy to use, or whether to connect directly. Computers have access only to a local DNS (no name resolution for google.com, for example). By the way, the company does not respect RFC 1918 internally and uses public addresses (historical reason)! The explicit use of an internet proxy is what keeps that from being a problem.

    What if we migrate to IPv6?

    Step 1: IPv6 internet access. Internet access over IPv6 is easy: just connect the proxy to the internet over both IPv4 and IPv6. There is nothing to do in the internal network.

    Step 2: IPv6 AND IPv4 in the internal network. Why not go full IPv6 directly? Because there are always old servers that are not IPv6-compatible.

    Option 1: the same architecture as in IPv4, with a proxy.pac. This is probably the easiest solution. But is it the best? I think the transition to IPv6 is an opportunity to stop bothering with this proxy.pac!

    Option 2: a new architecture with transparent proxies, no proxy.pac, and recursive DNS. In this new architecture we have: the explicit internet proxy becomes a transparent internet proxy; the local DNS becomes a normal recursive DNS, plus authoritative for local domains; no proxy.pac; the explicit company proxy becomes a transparent company proxy; and for routing, internal routers redirect the IP of appx.ext.example.com to the company proxy, while the default gateway is the transparent internet proxy.

    Questions: What do you think of this IPv6 architecture? This architecture will reveal the IP addresses of our internal network, but it is protected by firewalls - is this a real big problem? Should we keep the explicit use of a proxy? How would you approach this migration scenario? And you, how do you do it in your company? Thanks! Feel free to edit my post to make it better.
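    For reference, the kind of proxy.pac I'd like to retire looks roughly like this (a sketch only: the host and domain names are placeholders; the functions are standard PAC built-ins):

        // proxy.pac - illustrative sketch, not our real file
        function FindProxyForURL(url, host) {
            if (isPlainHostName(host) || dnsDomainIs(host, ".corp.example"))
                return "DIRECT";                                // local resources
            if (dnsDomainIs(host, ".partner.example"))
                return "PROXY companyproxy.corp.example:8080";  // other companies
            return "PROXY inetproxy.corp.example:8080";         // internet
        }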

  • Dlink search is hijacking my browser

    - by James
    For months now "DLink search" has been hijacking my search engines. I use Google Chrome, and I have organized my search engines in the handy dandy "manage search engines" tool about a TRILLION times. It never even says D-Link is hacking my search engines. It does not show up!

    I have read many posts on this forum and others saying that to fix this problem from Internet Explorer: Setup, Internet Options, yadayada, magical fairies, and you are solved. But my browser is Google Chrome! How am I supposed to do this from there? I do not know how to re-setup my D-Link router, which is the cause of the problem. HOW? In those posts with the magical fairies fixing it, HUNDREDS responded saying, "yep, those fairies definitely fixed it right. :)" These people were so satisfied. IT WORKED FOR THEM, WHY NOT ME? I look at it and go ":(" because it does not help me. There are no options for anything to do with this in Google Chrome. PLEASE EXPLAIN and HELP. I see no "Setup" option, no "Internet Options" button, no anything.

    BTW, the exact posts are these:

    - "Uncheck Advanced DNS in the router internet setup. This will take care of it. I had this problem with my DLink router before."
    - "I had this issue with my DIR-655 and unchecking the Advanced DNS setting in Setup - Internet - Manual Internet Connection Setup fixed it."
    - "If this is just internet explorer, you can go to Tools  Internet Options or Internet Options in Control Panel. From here, go to the advanced tab and click the Reset button."
    - "I would set the router's DNS to a site like OpenDNS, and I would ensure the machines are set to get their DNS settings via DHCP or set the machine's DNS setting to OpenDNS. If the router's DNS looks like it was messed with, some bad software know the default passwords for routers and could have changed it. If you don't already I would make sure the password to the router is not default or easy to guess. I've had spyware change a machine's DNS, but the fact it is happening on all machines makes me wonder if it is the router."
    - "Something got into your router and changed the dns server most likely, do a hard reset of the router and then change the password to something strong. Also check for a firmware update for the router and apply it as soon as possible."

  • NGINX rewrite rules help. Redirect not working and want to get rid of index.php in urls

    - by Tamerax
    hey! I have 2 questions for nginx users.

    1) I'm trying to set up my Joomla server on my new Linode running nginx, and after much (like days) of searching and testing, I finally have a config that works with SEF URL plugins... sorta. I was using an Apache system on the old server; it used mod_rewrite and life was fine in terms of SEF. Since nginx doesn't have mod_rewrite, I found something that works, BUT it constantly leaves index.php in the URLs. Example: http://mysite.com/index.php/forum - I want it to be just http://mysite.com/forum, but without mod_rewrite that doesn't seem to be possible in Joomla that I'm aware of. (I know in WordPress it IS possible, but there I have to use a plugin.) Here is my config file:

        server {
            listen 80;
            server_name mysite.com www.mysite.com;
            access_log /home/public_html/mysite.com/log/access.log;
            error_log /home/public_html/mysite.com/log/error.log;
            root /home/public_html/mysite.com/public/;
            large_client_header_buffers 4 8k; # prevent some 400 errors
            index index.php index.html;
            fastcgi_index index.php;

            location / {
                expires 30d;
                error_page 404 = @joomla;
                log_not_found off;
            }

            # Rewrite
            location @joomla {
                rewrite ^(.*)$ /index.php?q=last;
            }

            # Static files
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
            }

            # PHP
            location ~ \.php {
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/public_html/mysite.com/public/$fastcgi_script_name;
            }
        }

    2) The second question should be easy, but I can't get it to work. I want to use the same config posted above and have either mysite.com or www.mysite.com forward to mysite.com/portal. Basically, when you hit the front page with or without the www, it all gets forwarded to a subdirectory on the server I called portal. I have tried several variations of:

        rewrite ^/(.*) http://www.example.com/portal/$1 permanent;

    but it usually ends with Firefox telling me there is some crazy loop happening, the address bar saying something like mysite.com/portalportalportalportalportal... on and on. So, any help on either of these issues would be awesome!! Thanks!!
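    (For question 2, here's a sketch of what I suspect the fix looks like - untested, same hostnames as above. The loop presumably happens because ^/(.*) also matches the /portal/... URLs after the redirect, so anchoring the rule to the bare root should stop it rewriting its own result:)

        # only redirect the front page itself, not every URL
        rewrite ^/$ http://mysite.com/portal/ permanent;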

  • Expendable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point. I have been looking for a "big storage" solution for a while on my own, and I can't find anything that would suit my needs. But now push has come to shove.

    Current situation: I have about 6TB of data storage (already full) - a Drobo. Yesterday the Drobo died on me, and it put me in a bad situation: I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back; however, I don't want to be in the same situation later, and continuing to use Drobo promises this event will re-occur.

    What I am looking for:

    1) Inexpensive setup.
    2) Dynamically extendable - add more drives and/or replace a drive with one of bigger capacity.
    3) Redundant - set up against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives, one should be able to fail without data loss.
    4) Easy data recovery - let's say the unforeseen happens; I would like to be able to recover information without buying new tools or replacements - example: a new Drobo.
    5) Should be USB or network-attached storage.
    6) No demand on speed. It doesn't have to be fast; I am not doing video editing on the setup. However, if the option exists, a decent speed would be nice.

    Afterthoughts: I reviewed a few options, and FreeNAS looks nice, but it doesn't have #2, dynamic extendability. There are workarounds with pools, but it seems a bit complicated and unnecessary. Moreover, data safety seems to be a big question - I saw some horror stories.

    Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software that has to run on top, but a simple solution is more attractive. Thank you!

    P.S.: Feel free to ignore "Afterthoughts".

  • Laptops on Windows Domain sometimes have problems accessing internet when off-site

    - by FSUScoot
    Hi all - we've had this problem for a long time. When users travel, sometimes they can't get internet access from a wired or wireless connection. Here are a couple of examples:

    1) A user goes to a hotel and tries to access the wireless in their room. They can connect to the access point. They open a web browser and can't get redirected to the hotel's login page. Because they can't log in, there's no internet access.

    2) A user goes to another laboratory/university and tries to access the wired network. They connect, the link is fine, the PC gets an IP from DHCP, but there is no internet access. There's no login page to be redirected to; it should just "work".

    What I've found is that it's a DNS issue. Because the computer is on a Windows domain, it seems it MUST use our DNS servers. Even if you connect to an outside network and do an ipconfig /all, it looks like everything is OK - it'll even show the other network's DNS servers listed in the config. The computer just won't use them. I found a reg key that keeps our DNS servers listed, and they seem to take priority every time:

        HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient

    All the values under that key are for our AD domain; NameServer and SearchList never change. What I've found is that if the user edits the NameServer string and puts in the DNS server of the network they're on, everything works just fine: they get redirected to the hotel's correct login page, or their internet access starts working. It's only a problem if the network they're on blocks outside DNS, or if a hotel uses an internal name in its front-page redirection that only its own DNS server knows about, i.e. one that's not public. If the redirect page starts with an IP, like 10.10.10.10, it'll work just fine.

    Obviously this isn't a fix for everyone. Most of my users are pretty knowledgeable, so it's easy for me to walk them through it or send them a .reg file that they can edit and run. This problem isn't limited to Windows 7; it was like this with XP as well. And it's not hardware-related: the problem exists on both wired and wireless, Intel or Broadcom, laptops or desktops.

    Anyone else have this problem? Is there a GPO I can change that I missed? Got a good workaround for this? Thanks for any help!
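    (The .reg file I send around looks roughly like this - a sketch; the address is whatever resolver the local network handed out, shown here as a placeholder:)

        Windows Registry Editor Version 5.00

        ; temporarily point the domain DNS policy at the local network's resolver
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient]
        "NameServer"="10.10.10.10"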

  • What router hardware or software should be used when multiple public IPs are routed into the same LAN?

    - by lcbrevard
    I am looking for recommendations to replace a set of consumer-grade (Linksys, Netgear, Belkin) routers with something that can handle more traffic while routing more than one static public IP into the same LAN address space.

    We have a block of static public IPs, 5 usable, with Comcast Business. Currently four of them are in use, for: (1) general office access, (2) a web server, (3) mail and DNS servers, and (4) a download and backup web server for a separate business. All systems (a mixture of physical and virtual) are in the same LAN address space (10.x.y.0/24) to enable easy access between them inside the office. There are 30 or more systems in use, depending on which virtual machines are currently active. We have a mixture of Windows, Linux, FreeBSD, and Solaris.

    Currently a separate consumer-grade router is used for each of the four static addresses, with its WAN address set to the specific static address and a different gateway address for each:

    1. uses 10.x.y.1 - various ports are forwarded to various LAN IPs on systems with gateway 10.x.y.1
    2. uses 10.x.y.254 - port 80 is forwarded to a server with gateway 10.x.y.254
    3. uses 10.x.y.253 - ports for mail and DNS are forwarded to a server with gateway 10.x.y.253
    4. uses 10.x.y.252 - ports as needed are forwarded to a server with gateway 10.x.y.252

    Only router 1 is allowed to serve DHCP, and address reservation based on MAC is used for most of the internal "server" IP addresses so they stay at fixed values. [Some are set static due to limitations in the address-reservation capabilities of router 1.] And, yes, this really does work!

    But I am looking for: better DHCP with more capable address reservation, and higher capacity so I don't have to periodically power-cycle the routers. One obvious improvement would be to have a real DHCP server and not use a consumer-grade router for that purpose. I am torn between buying a "professional" router such as a Cisco, Juniper, or SonicWall versus learning to configure some spare hardware to perform this function. The price goes up extremely rapidly with capabilities for commercial routers! Worse, some routers require licensing based on the number of clients - a disaster in our environment with so many virtual machines.

    Sorry for such a long posting, but I am getting tired of having to power-cycle routers and deal with shifting IP addresses afterwards!
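    (If the spare-hardware route is viable, the core of a Linux box doing this job is fairly small - a sketch with placeholder addresses, assuming all the public IPs are bound to the WAN interface eth0:)

        # forward each public IP's services to the right internal host
        iptables -t nat -A PREROUTING -d 203.0.113.2 -p tcp --dport 80 -j DNAT --to 10.0.0.254:80
        iptables -t nat -A PREROUTING -d 203.0.113.3 -p tcp --dport 25 -j DNAT --to 10.0.0.253:25
        # let the whole LAN share the primary address outbound
        iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE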

  • 20GB+ worth of emails in my /home what is a better solution for that?

    - by Skinkie
    My email storage requirements are outgrowing anything reasonable for local mail storage. As we speak, 99% of my home partition is filled with personal mail in Thunderbird's mail dirs. Needless to say, this is just painful and badly searchable, and history has proven to me that backups work, but Thunderbird is capable of losing a lot of mail very easily.

    Currently I have a remote IMAPS server (Dovecot) running for my daily mail, accessible from anywhere, which from my own practice works efficiently up to about 1000 emails; beyond that, archive directories have to be used to move mail around. I have been looking into DBMail, but I wonder if I make my case worse or better with such a solution. None of the supported databases employ string deduplication or string compression out of the box, so is this going to help me with 20GB+ of mail? What about falling back to a plain old IMAP server? A filesystem like ZFS would support things like gzip compression transparently, which could help. Could someone share their thoughts?

    The 20GB mostly consists of mailing lists and normal mail - not things like attachments.

    To add some clarifications: as we speak, my mail is not server-side indexed at all - only my new mail arrives at a remote IMAP server. It is all local storage from former POP3 accounts, locally mirrored Gmail, and IMAP accounts. In my perspective it is not Thunderbird that sucks; it's its file format that sucks. Regarding the 1000 mails: on the road I am using Alpine and MobileMail, quite happy with both of them, but some management is required to actually manage the mail. Sieve helps a lot with that, but browsing through 10,000 e-mails is not fun, especially not on a mobile client. I am quite happy with Dovecot, never had any issues with it. I just wonder if this is the way to go, or if there are any better solutions.

    What my question is: what is the best-practice solution that allows 20GB+ of mail and is remotely accessible on demand, easy to back up, and archive-worthy? It doesn't need to be available 24x7.

    The final approach I took was installing a local IMAP server (Dovecot), configured as my archive, using the following guide: http://en.gentoo-wiki.com/wiki/Dovecot/InstallThunderbird
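    (On the ZFS idea, the transparent compression I mean is set per dataset - a sketch assuming a pool named tank on a ZFS-capable system; maildir-style text should compress well, but the actual ratio would need measuring:)

        zfs create tank/mail
        zfs set compression=gzip tank/mail
        zfs get compressratio tank/mail   # check the achieved ratio later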

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have 2 servers in different datacenters (different countries), and I want to use DNS load balancing, mainly for high availability of the website hosted on those 2 servers. It is just an ad-tracking site, which records each hit in a local database and returns a few lines of HTML. I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record, which it has already cached). Both servers will also be acting as DNS servers, for redundancy.

    Now comes my proposed solution: I will use BIND and have both servers as masters for the zone. On each server there will be a script running which periodically tests the availability (HTTP) of both servers and removes an IP from DNS in case of failure.

    Now the questions :)

    1) Is BIND suitable for this solution? I think BIND's performance is good, and it is easy to manipulate the zone file via a script. And as I will modify the zone only in case of failure/maintenance, the modifications (and thus BIND reloads) won't be frequent.

    2) I plan to use a TTL of 5 minutes. The website will have about 1000-3000 req/s, but from distinct clients (each IP makes only 1-3 requests), so I think the DNS load won't be too much. I suppose their ISPs will cache the responses for those 5 minutes. Is there any reason to lower the TTL even more?

    3) Is my master-master approach good? Or should I make one of the servers a master and the other one a slave? Right now each server can monitor both itself and the other one. If only the web service fails, both DNS nodes will notice it; if the whole server fails, the remaining DNS node will notice it, and the failed node will not answer DNS queries anyway.

    4) Is it a big issue when one NS server does not respond to queries? If yes, I can add a third DNS, so that at any time at least 2 of them would accept queries...

    5) Should I rewrite the zone file via script, or just use dynamic DNS updates (for example via the nsupdate utility)?
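    (The zone fragment I have in mind, with placeholder addresses - both A records are returned for every query, and the TTL matches the 5-minute plan above:)

        $TTL 300
        www   IN   A   198.51.100.10
        www   IN   A   203.0.113.20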

  • VirtualBox Port Forward not working when Guest IP *IS* specified (while doc says opposite)

    - by Patrick
    Trying to port forward from the host (Mac OS X) 127.0.0.1:8282 to the guest (CentOS)'s 10.10.10.10:8080. Existing port forwards include 127.0.0.1:8181 and 9191 to the guest without any IP specified (so whatever it gets through DHCP, as explained in the documentation).

    Here is how the non-working binding was added:

        VBoxManage modifyvm "VM name" --natpf1 "rule3,tcp,127.0.0.1,8282,10.10.10.10,8080"

    Here is how the working ones were added:

        VBoxManage modifyvm "VM name" --natpf1 "rule1,tcp,127.0.0.1,8181,,80"
        VBoxManage modifyvm "VM name" --natpf1 "rule2,tcp,127.0.0.1,9191,,9090"

    And by "non-working", I of course mean not listening (as a prerequisite to forwarding):

        $ lsof -Pi -n | grep Virtual | grep LISTEN
        VirtualBo 27050 user 21u IPv4 0x2bbdc68fd363175d 0t0 TCP 127.0.0.1:9191 (LISTEN)
        VirtualBo 27050 user 22u IPv4 0x2bbdc68fd0e0af75 0t0 TCP 127.0.0.1:8181 (LISTEN)

    There should be a similar line above but with 127.0.0.1:8282. Just to be clear, this port is listening perfectly fine on the guest itself. And when I remove the guest IP (i.e., clear the 10.10.10.10) the forward works fine, albeit to eth0 (not eth1 where I need it). I can tcpdump and watch the traffic flow back and forth. And yes, I've disabled iptables entirely while testing - it's not getting blocked anywhere on the guest.

    As VirtualBox writes in their documentation, you are required to specify the guest IP if it's static (makes sense - no DHCP record is kept): "If for some reason the guest uses a static assigned IP address not leased from the built-in DHCP server, it is required to specify the guest IP when registering the forwarding rule". However, doing so (as I need to) seems to break the port forward, with nary a report in any log file I can find. (I've reviewed everything in ~/Library/VirtualBox/.)

    Other notes: While I used the above command to add the third rule, I've also verified it showed up correctly in the GUI, and then removed/re-added it from there just to make sure. This forum link - while very dated - looks somewhat related, in that a port forward to a static IP was not appearing (perhaps they think due to lack of a gratuitous ARP being sent for the host to know the IP is there/available?).

    Anyway, what gives? Is this still buggy? Any suggestions? If not, easy enough workarounds? What's interesting is that this works perfectly fine on another user's Mac; however, he's running a slightly older version (4.3.6 vs. 4.3.12).
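    (One workaround I'm considering, untested: leave the host-side rule IP-less - the form that demonstrably listens - and redirect the traffic to the static eth1 address inside the guest instead; addresses as above, and it would mean re-enabling iptables on the guest:)

        VBoxManage modifyvm "VM name" --natpf1 "rule3,tcp,127.0.0.1,8282,,8080"
        # on the CentOS guest, if the service only binds 10.10.10.10:
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to 10.10.10.10:8080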

  • Apache certificates for some urls not working

    - by Vegaasen
    We are having a rather strange problem with an Apache installation. Here is a short summary: currently I'm setting up Apache with HTTPS and server certificates. This is fairly easy and works straight out of the box, as expected. This is the configuration for this setup:

        Listen 443
        SSLEngine on
        SSLCertificateFile "/progs/apache/ssl/example-site.no.pem"
        SSLCertificateKeyFile "/progs/apache/ssl/example-site.no.key"
        SSLCACertificateFile "/progs/apache/ssl/ca/example_root.pem"
        SSLCADNRequestFile "/progs/apache/ssl/ca/example_intermediate.pem"
        SSLVerifyClient none
        SSLVerifyDepth 3
        SSLOptions +StdEnvVars +ExportCertData
        RequestHeader set ssl-ClientCert-Subject-CN "%{SSL_CLIENT_S_DN}s"
        RewriteEngine On
        ProxyPreserveHost On
        ProxyRequests On
        SSLProxyEngine On
        ...
        <LocationMatch /secureStuff/$>
            SSLVerifyClient require
            Order deny,allow
            Allow from All
        </LocationMatch>
        ...
        <Proxy balancer://exBalancer>
            Header add Set-Cookie "EX_ROUTE=EB.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
            BalancerMember http://10.0.0.1:7200 route=ee1 retry=300 flushpackets=off keepalive=on
            BalancerMember http://10.0.0.2:7200 route=ee2 retry=300 flushpackets=off keepalive=on status=+H
            ProxySet stickysession=EX_ROUTE scolonpathdelim=Off timeout=10 nofailover=off failonstatus=505 maxattempts=1 lbmethod=bybusyness
            Order deny,allow
            Allow from all
        </Proxy>
        RewriteCond %{REQUEST_URI} !^/index.html [NC]
        RewriteRule ^/(.*)$ balancer://exBalancer/$1 [P,NC]
        ProxyPassReverse / balancer://exBalancer/
        Header edit Set-Cookie "(.*)" "$1;HttpsOnly"
        ...

    So everything works fine and as expected for all of the pages that are not part of the LocationMatch directive. When requesting something that matches the LocationMatch directive, I'm asked for a certificate (hence the SSLVerifyClient require attribute), and I get all the correct certificates in my browser, based on the root/intermediate chain. After choosing a certificate and clicking "OK", this is what pops up in the Apache logs:

        [ssl:info] [pid 9530:tid 25] [client :43357] AH01998: Connection closed to child 86 with abortive shutdown (
        [Thu Oct 11 09:27:36.221876 2012] [ssl:debug] [pid 9530:tid 25] ssl_engine_io.c(1171): (70014)End of file found: [client 10.235.128.55:45846] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!]

    And this just spams the logs. What is happening here? I can see this configuration working on my local machine, but not on one of our servers. There are no configuration differences between the servers, only minor application-wise changes.

    I've tried the following: 1) removing the CA-certificate check (works); 2) adding a required CA certificate for the whole site (works); 3) adding "SSLVerifyClient optional" (does not work); 4) ++

    Server/application information. Local: OpenSSL 1.0.1x, Apache 2.4.3, Ubuntu, mpm: event, every configuration turned on. (Failing) server: OpenSSL 0.9.8e, Apache 2.4.2, SunOS, mpm: worker, every configuration turned on.

    Please let me know if more information is needed; I'll provide it instantly.

    Brief sum-up: running Apache 2.4; server certificates work just fine; client certificates for some /Locations do not work, failing with the errors above.

    PS: Could it be related to the OpenSSL version and the "renegotiation" changes related to TLS/SSLv3?
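    (On the PS: a per-location SSLVerifyClient forces a mid-connection renegotiation, which is exactly what the post-CVE-2009-3555 changes restrict, so the old 0.9.8e build on the failing server is at least a plausible suspect. One way I know to observe it from the client side - standard openssl tooling, hostname is a placeholder:)

        openssl s_client -connect server.example.com:443 -state
        # then type:  GET /secureStuff/ HTTP/1.0   (followed by a blank line)
        # the server should trigger a renegotiation to ask for the client cert;
        # watch whether the handshake states stall or error out at that point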

  • How to prevent dual booted OSes from damaging each other?

    - by user1252434
    For better compatibility and performance in games, I'm thinking about installing Windows in addition to Linux. I have security concerns about this, though. Note: "Windows" in the remaining text includes not only the OS but also any software running on it, regardless of whether it comes included or is additionally installed, and whether it is started intentionally or unintentionally (virus, malware).

    Is there an easy way to achieve the following requirements?

    - Windows MUST NOT be able to kill my Linux partition or my data disk - neither single files (virus infection) nor by overwriting the whole disk.
    - Windows MUST NOT be able to read the data disk (extra protection against spyware).
    - Linux may or may not have access to the Windows partition.
    - Both Linux and Windows should have full access to the graphics card; this rules out desktop VM solutions for gaming - I want the manufacturer's Windows graphics card driver.

    Regarding Windows being unable to destroy my Linux install: this is not just the usual paranoia; it has actually happened to me in the past. So I don't accept "no ext4 driver" as an argument. Once bitten, twice shy. And even if destruction targeted at specific (Linux) files is nearly impossible, there should be no way to shred the whole partition. I may accept the risk of malware breaking out of a barrier (e.g. a VM) around the whole Windows box, though.

    Currently I have a system disk (SSD) and a data disk (HDD), both SATA. I expect I'll have to add another disk (even better if I don't). My CPU is an Intel Core i5, with VT-x and VT-d available, though untested.

    Ideas I've had so far:

    - Deactivate or hide the other HDs until reboot. Is this possible at a low level? Can the boot loader (GRUB) do this for me?
    - A tiny VM layer: load Windows in a VM that provides access to almost all hardware, except the HDs. Is there any ready-made software solution for this? Preferably free. As I said, the main problem seems to be providing full access to the graphics card.
    - A hardware switch to cut power to the disks. Commercial products are expensive, and there are lots of warnings against cheap home-built solutions. Preferably all three hard disks with one switch (one push).
    - Mobile racks - won't the wear of daily swapping be a problem?

  • Nginx SSL redirect for one specific page only

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators, but not for me. I was wondering if anyone has any advice on how to improve this configuration:

        ...
        ssl_certificate example.net.crt;
        ssl_certificate_key example.key;

        server {
            listen 80 default;
            listen 443 ssl;
            server_name www.example.net example.net;
            access_log /srv/www/example.net/logs/access.log;
            error_log /srv/www/example.net/logs/error.log;
            root /srv/www/example.net/public_html;
            index index.php index.html;

            location / {
                if ( $scheme = https ) {
                    rewrite ^ http://example.net$request_uri? permanent;
                }
                try_files $uri $uri/ /index.php?$uri&$args;
                index index.php index.html;
            }

            location ^~ /admin.php {
                if ( $scheme = http ) {
                    rewrite ^ https://example.net$request_uri? permanent;
                }
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }

            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }
        ...

    It seems that the extra information in the location ^~ /admin.php block is unnecessary - does anyone know of an easy way to avoid duplicate code? Without it, nginx skips the PHP block and just returns the raw PHP files. Currently it applies HTTPS correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page. When returning to the non-HTTPS website in Firefox, it does not correctly return to HTTP but stays on SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of any ways I could reduce duplicate configuration options in the configuration? Thanks in advance!

    EDIT: Clearing the cache/cookies seemed to work. Is this the right way to do http/https redirection? I sort of made it up as I went along.
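    (One restructuring I've seen suggested - splitting HTTP and HTTPS into separate server blocks so the per-location $scheme checks disappear entirely; a sketch only, reusing the names above and untested:)

        server {                     # plain HTTP: everything except admin.php
            listen 80;
            server_name example.net www.example.net;
            root /srv/www/example.net/public_html;
            index index.php index.html;
            location = /admin.php { rewrite ^ https://example.net$request_uri? permanent; }
            location / { try_files $uri $uri/ /index.php?$uri&$args; }
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }
        server {                     # HTTPS: only admin.php lives here
            listen 443 ssl;
            server_name example.net;
            root /srv/www/example.net/public_html;
            location = /admin.php {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }
            location / { rewrite ^ http://example.net$request_uri? permanent; }
        }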

  • How to move partition with Users to a new SATA controller?

    - by Al Kepp
    Our current situation: X79 chipset; Windows 7 installed on a SATA SSD running on a Marvell controller. C:\Users contains just the Public and computer-name folders; all the other users are created in E:\Users. That partition is on two HDDs running on an Intel RAID controller. (The motherboard has two SATA/RAID onboard controllers.)

    The goal: without reinstallation, I want to move the SSD to a standard SATA controller. Then I want to get rid of the RAID 1, move one HDD to the same internal Intel SATA controller, and move the current E:\Users to that new place. I want it to stay E:\Users, but I need to reinstall the HDD to let it work in SATA mode without RAID.

    So I face several problems. I am sure all are solvable with free software utilities, but I don't know exactly how to do it. The particular problems I can see:

    1. I have all users at E:\Users. When I turn off that E: disk, I won't be able to log in to Windows. I need at least one admin account placed back at C:\Users.
    2. The current C: runs on a standard 120 GB SSD, but it is connected to a Marvell SATA/RAID controller. I am afraid Windows won't let me put it on the Intel SATA controller due to a hardware/licence check, and I won't be able to use a standard Windows 7 recovery disk either, because there probably isn't a Marvell SATA/RAID driver on it. I haven't tried anything yet, because I am afraid I could end up with a computer that doesn't work at all. (I want to move it to a standard ICH10 Intel SATA controller so we have no problems with it in the future; I think it is not very safe to use nonstandard hardware to boot the computer.)
    3. I need to somehow back up the current E: disk and restore it to the new E:. I hope this will be the easy part (as long as the admin account resides on C:). The E: RAID array is very large, but it is almost empty (less than 100 GB of data), so I can shrink the partition to fit easily on a single SATA HDD.
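    (For problem 3, the copy itself can be done with stock Windows tooling - a sketch using robocopy, with F: standing in for whatever temporary disk holds the backup; /XJ matters because profile folders contain junction points:)

        robocopy E:\Users F:\UsersBackup /MIR /COPYALL /XJ /R:1 /W:1
        rem /MIR mirrors the tree, /COPYALL keeps ACLs and owners, /XJ skips junctions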

  • Need a script/batch/program that runs a command that won't be killed when the parent is killed

    - by billc.cn
    The scenario: I use Zabbix to monitor my servers, and recently I wanted to add some more metrics for the Windows ones. For security reasons, I used Zabbix's User Parameter feature, but it limits the execution of external commands to about 3 seconds; after that, the command is forcibly killed. I want to run some long-running commands, so I used the trick from Zabbix's forum: run the command in the background, write the results to a file, and use Zabbix to collect them. This is rather easy under *nix thanks to the "&" operator, but there is no such support in Windows' shell. To make things worse, when Zabbix forcibly kills the cmd.exe it used to evaluate the commands, all child processes die, including the unfinished background tasks. Thus I need something that can sever all ties with its children, so they won't be affected by the cascading kill.

    What I've tried:

    - start and start /B - they do nothing, as the child always dies with the parent.
    - WScript.Shell.Run as in invis.vbs from Stack Overflow - sometimes works. If the wscript process is forcibly killed, as opposed to quitting on its own, the children die as well.
    - hstart - similar results to invis.vbs.
    - The at command - this requires you to set an absolute time for the task to run, as opposed to an offset, so the code would be quite messy due to the limited shell-scripting capability of Windows.
    - (Edit) PsExec.exe from the SysInternals suite - it uses a service to launch the command, so it is not affected by the kill; however, it prints a banner and log info to stderr, and there's no switch to disable this. When I use 2>NUL to redirect it, Zabbix reports an error.

    After trying the above in different combinations, I noticed that if I call hstart from invis.vbs, the command started by the former is left alone as a parentless process when invis.vbs is killed. However, since I need to redirect the output, the command I want to run is always of the form cmd.exe /c ""command" "args"" >log. The vbs also removes all the quotes, so I have to encode the command with self-defined escape sequences. The end result involves about five levels of escaping/quoting, which is almost impossible to maintain.

    Does anyone know any better solutions? Some requirements:

    - Any bat/vbs/js/Win32 binary is acceptable.
    - Better not to require multiple levels of escaping.
    - No .NET (including PowerShell), because it is not installed.
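    (One avenue I haven't fully explored: launching through WMI's Win32_Process.Create, which parents the new process to the WMI provider host rather than to the calling script, so it should survive the cascading kill - a sketch; the command line and log path are placeholders:)

        ' detach.vbs - run as: cscript //nologo detach.vbs
        Dim wmi, pid
        Set wmi = GetObject("winmgmts:\\.\root\cimv2:Win32_Process")
        wmi.Create "cmd.exe /c mycheck.exe > C:\zabbix\out.log 2>&1", Null, Null, pid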

  • Glassfish JSF/EAR Apache 2.2 proxy_ajp_mod Referred Content Missing (images/links/etc)

    - by BillR
    Full disclosure: since this seems to be more of a configuration issue, I deleted this from Stack Overflow (where it wasn't getting any response) and reposted it here.

    The problem is how to change the requestContextPath served up by GlassFish behind mod_proxy_ajp. The site/app runs fine when connecting directly to GlassFish on port 8080, which is ultimately not what I want to do. So I need help with the configuration of my servers and the JSF deployment. I can see the issue but don't know how to resolve it. It has to do with the requestContextPath.

    Simply put, Apache directs to http://mysite.com/welcome.xhtml, which is correct and what I want, but the page is served minus its images and styles. The issue is that GlassFish itself is still pointing at http://mysite.com/myapp/*, so all links it serves in the app/site still include the context path - that is, the /myapp/ part of http://mysite.com/myapp/welcome.xhtml. When I look in the page source, images referred to with relative links still point to the context path (/myapp/). This is fixable, but a real pain. With page links, however, I can't set a relative path: if I hover over the contact-page link I see http://mysite.com/myapp/contact.xhtml, and if I click it, I get a 404. You can see the /myapp/ context path in the page source as well. If I type in the URL http://mysite.com/contact.xhtml, I get the page minus its referred links (the context path again).

    On Apache:

        ProxyPass / ajp://littlewalterserver:8009/myapp-web/
        ProxyPassReverse / ajp://littlewalterserver:8009/myapp_Project-web

    On GlassFish:

        asadmin create-network-listener --listenerport 8009 --protocol http-listener-1 --jkenabled true jk-connector

    I have tried going into GlassFish and setting the web app as the default web app. I have changed the / in glassfish-web.xml (and checked to make sure it was the same in the EAR file). How can I get GlassFish to not include the /myapp/ context in the URLs? This has to be easy if you know how, but I don't know how - can someone help out here? Thanks.
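    (For reference, the two places I believe the context root can be forced to "/" - shown as sketches; the descriptor element and the asadmin option are from the GlassFish docs, while the archive name is mine:)

        <!-- glassfish-web.xml inside the WAR -->
        <glassfish-web-app>
            <context-root>/</context-root>
        </glassfish-web-app>

        # or equivalently at deploy time:
        asadmin deploy --contextroot / myapp-web.war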

  • HttpWebRequest and Ignoring SSL Certificate Errors

    - by Rick Strahl
    Man, I can't believe this. I'm still mucking around with OFX servers, and it drives me absolutely crazy how some of these servers are just so unbelievably misconfigured. I've recently hit three different major brokerages which fail HTTPS validation with bad or corrupt certificates, at least according to the .NET WebRequest class. What's somewhat odd here, though, is that WinInet seems to find no issue with these servers - it's only .NET's HTTP client that's ultra finicky. So the question then becomes: how do you tell HttpWebRequest to ignore certificate errors? In WinInet there used to be a host of flags to do this, but it's not quite so easy with WebRequest. Basically you need to configure the certificate validation policy on the ServicePointManager by installing a custom validation callback. Not exactly trivial. Here's the code to hook it up:

        public bool CreateWebRequestObject(string Url)
        {
            try
            {
                this.WebRequest = (HttpWebRequest)System.Net.WebRequest.Create(Url);
                if (this.IgnoreCertificateErrors)
                    ServicePointManager.ServerCertificateValidationCallback =
                        delegate { return true; };   // accept any certificate
                ...

    One thing to watch out for is that this is an application-global setting. There's one global ServicePointManager, and once you set this value, any subsequent requests will inherit this policy as well, which may or may not be what you want. So it's probably a good idea to set the policy when the app starts and leave it be - otherwise you may run into odd behavior in some situations, especially multi-threaded ones.

    Another way to deal with this is in your application .config file:

        <configuration>
          <system.net>
            <settings>
              <servicePointManager
                  checkCertificateName="false"
                  checkCertificateRevocationList="false" />
            </settings>
          </system.net>
        </configuration>

    This seems to work most of the time, although I've seen some situations where it doesn't, but where the code implementation works - which is frustrating. The .config settings aren't as inclusive as the programmatic code, which can ignore any and all cert errors - shrug. Anyway, the code approach got me past the stopper issue.

    It still amazes me that these OFX servers even require this. After all, this is financial data we're talking about here. The last thing I want to do is disable extra checks on the certificates. Well, I guess I shouldn't be surprised - these are the same companies that apparently don't believe in XML enough to generate valid XML (or even valid SGML for that matter)...

    © Rick Strahl, West Wind Technologies, 2005-2011

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running Fedora uses OpenVPN, and multiple developers have successfully connected to it via their OpenVPN clients. However, I am running Ubuntu 12.04, and I am having trouble connecting to the server via VPN. I copied ca.crt, home.key, and home.crt from the server to /etc/openvpn on my local machine. My client.conf (stock sample-file comments trimmed) looks like this:

        client
        dev tun
        proto udp
        remote xx.xxx.xx.130 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert home.crt
        key home.key
        # Verify the server certificate by checking that it has the nsCertType
        # field set to "server" (see http://openvpn.net/howto.html#mitm)
        ns-cert-type server
        # Enable compression on the VPN link.
        # Don't enable this unless it is also enabled in the server config file.
        comp-lzo
        verb 3

    But when I start the client and look in /var/log/syslog, I notice the following error:

        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
        May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4
        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37
        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37

    And I am unable to connect to the server via the VPN:

        $ ssh [email protected]
        ssh: connect to host xxx.xx.xx.130 port 22: No route to host

    What may I be doing wrong?
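    (A check I'd run first - status 4 from /sbin/route usually means the command itself was rejected, and the first failing route here does look malformed: 10.27.12.1 is not a network address under mask 255.255.255.252. Re-running the exact command by hand shows route's own error text, which syslog drops:)

        sudo /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
        echo $?    # reproduce the exit status and see the real error message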

  • Install XP Mode with VirtualBox Using the VMLite Plugin

    - by Mysticgeek
    Would you like to run XP Mode, but prefer Sun's VirtualBox for virtualization? Thanks to the free VMLite plugin, you can quickly and easily run XP Mode in or alongside VirtualBox. Yesterday we showed you one method to install XP Mode in VirtualBox; unfortunately, in that situation you lose XP's activation, and it isn't possible to reactivate it. Today we show you a tried and true method for running XP Mode in VirtualBox and integrating it seamlessly with Windows 7. Note: You need Windows 7 Professional or above to use XP Mode in this manner.

    Install XP Mode. Make sure you're logged in with Administrator rights for the entire process. The first thing you'll want to do is install XP Mode on your system (link below). You don't need to install Windows Virtual PC. Go through and install XP Mode using the defaults.

    Install VirtualBox. Next you'll need to install VirtualBox 3.1.2 or higher if it isn't installed already. If you have an older version of VirtualBox installed, make sure to update it. During setup you're notified that your network connection will be reset. Check the box next to Always trust software from "Sun Microsystems, Inc." then click Install. Setup only takes a couple of minutes and does not require a reboot... which is always nice.

    Install the VMLite XP Mode Plugin. The next thing we'll need to install is the VMLite XP Mode Plugin. Again, installation is simple - just follow the install wizard. During the install, as with VirtualBox, you'll be asked to install the device software. After it's installed, go to the Start menu and run VMLite Wizard as Administrator. Select the location of the XP Mode package, which by default should be in C:\Program Files\Windows XP Mode. Accept the EULA... and notice that it's meant for Windows 7 Professional, Enterprise, and Ultimate editions. Next, name the machine, choose the install folder, and type in a password. Select whether you want Automatic Updates turned on or not. Wait while the process completes, then click Finish. The VMLite XP Mode will set itself up on first run. That is all there is to this section. You can run XP Mode from within the VMLite Workstation right away. XP Mode is fully activated already, and the Guest Additions are already installed, so there's nothing else you need to do! XP Mode is entirely ready to use.

    Integration with VirtualBox. Since we installed the VMLite Plugin, when you open VirtualBox you'll see it listed as one of your machines, and you can start it up from here. Here we see VMLite XP Mode running in Sun VirtualBox.

    Integrate with Windows 7. To integrate it with Windows 7, click Machine \ Seamless Mode. Here you can see the XP menu and taskbar will be placed on top of Windows 7, and from here you can access what you need from XP Mode. Here we see XP running in VirtualBox in Seamless Mode, with the old XP WordPad sitting next to the new Windows 7 version of WordPad. This works so seamlessly you forget whether you're working in XP or Windows 7. In this example we have Windows Home Server Console running in Windows 7 while installing MSE from IE 6 in XP Mode. At the top of the screen you will still have access to the VM's controls. You can click the button to exit Seamless Mode, or simply press the right "CTRL+L".

    Conclusion. This is a very slick way to run XP Mode in VirtualBox on any machine that doesn't have hardware virtualization. This method also doesn't lose the XP Mode activation and is actually extremely easy to set up. If you prefer VMware (like we do), check out how to run XP Mode on machines without hardware-virtualization capability, and also how to create an XP Mode for Vista and Windows 7 Home Premium.

    Links: Download XP Mode | Download VirtualBox | Download VMLite XP Mode Plugin for VirtualBox (Site Registration Required)

  • Install XP Mode with VirtualBox Using the VMLite Plugin

    - by Mysticgeek
    Would you like to run XP Mode, but prefer Sun’s VirtualBox for virtualization?  Thanks to the free VMLite plugin, you can quickly and easily run XP Mode in or alongside VirtualBox. Yesterday we showed you one method to install XP Mode in VirtualBox, unfortunately in that situation you lose XP’s activation, and it isn’t possible to reactivate it. Today we show you a tried and true method for running XP mode in VirtualBox and integrating it seamlessly with Windows 7. Note: You need to have Windows 7 Professional or above to use XP Mode in this manner. Install XP Mode Make sure you’re logged in with Administrator rights for the entire process. The first thing you’ll want to do is install XP Mode on your system (link below). You don’t need to install Windows Virtual PC. Go through and install XP Mode using the defaults. Install VirtualBox Next you’ll need to install VirtualBox 3.1.2 or higher if it isn’t installed already. If you have an older version of VirtualBox installed, make sure to update it. During setup you’re notified that your network connection will be reset. Check the box next to Always trust software from “Sun Microsystems, Inc.” then click Install.   Setup only takes a couple of minutes, and does not require a reboot…which is always nice. Install VMLite XP Mode Plugin The next thing we’ll need to install is the VMLite XP Mode Plugin. Again Installation is simple following the install wizard. During the install like with VirtualBox you’ll be asked to install the device software. After it’s installed go to the Start menu and run VMLite Wizard as Administrator. Select the location of the XP Mode Package which by default should be in C:\Program Files\Windows XP Mode. Accept the EULA…and notice that it’s meant for Windows 7 Professional, Enterprise, and Ultimate editions. Next, name the machine, choose the install folder, and type in a password. Select if you want Automatic Updates turned on or not. Wait while the process completes then click Finish.   The VMLite XP Mode will set up to run the first time. That is all there is to this section. You can run XP Mode from within the VMLite Workstation right away. XP Mode is fully activated already, and the Guest Additions are already installed, so there’s nothing else you need to do!  XP Mode is the whole way ready to use. Integration with VirtualBox Since we installed the VMLite Plugin, when you open VirtualBox you’ll see it listed as one of your machines and you can start it up from here.   Here we see VMLite XP Mode running in Sun VirtualBox. Integrate with Windows 7 To integrate it with Windows 7 click on Machine \ Seamless Mode…   Here you can see the XP menu and Taskbar will be placed on top of Windows 7. From here you can access what you need from XP Mode.   Here we see XP running on Virtual Box in Seamless Mode. We have the old XP WordPad sitting next to the new Windows 7 version of WordPad. This works so seamlessly you forget if your working in XP or Windows 7. In this example we have Windows Home Server Console running in Windows 7, while installing MSE from IE 6 in XP Mode. At the top of the screen you will still have access to the VMs controls.   You can click the button to exit Seamless Mode, or simply hit the right “CTRL+L” Conclusion This is a very slick way to run XP Mode in VirtualBox on any machine that doesn’t have Hardware Virtualization. This method also doesn’t lose the XP Mode activation and is actually extremely easy to set up. 
If you prefer VMware (like we do), check out how to run XP Mode on machines without Hardware Virtualization capability, and also how to create an XP Mode for Vista and Windows 7 Home Premium. Links: Download XP Mode | Download VirtualBox | Download VMLite XP Mode Plugin for VirtualBox (Site Registration Required)
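Because the VMLite plugin registers XP Mode as an ordinary VirtualBox machine, you can also start it from a script instead of the GUI. Below is a minimal Python sketch that shells out to the standard VBoxManage tool; the VM name "VMLite-XP-Mode" is an assumption, so check the output of VBoxManage list vms for the actual name on your system.

```python
import subprocess

def start_vm(name: str) -> None:
    """Start a registered VirtualBox VM in a normal GUI window."""
    # 'VBoxManage list vms' prints every registered machine; the
    # VMLite plugin should have added the XP Mode VM to this list.
    listing = subprocess.run(
        ["VBoxManage", "list", "vms"],
        capture_output=True, text=True, check=True,
    )
    print(listing.stdout)

    # Launch the VM (use "--type headless" to run without a window).
    subprocess.run(["VBoxManage", "startvm", name, "--type", "gui"], check=True)

if __name__ == "__main__":
    # Hypothetical VM name; substitute the one shown by 'list vms'.
    start_vm("VMLite-XP-Mode")
```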

    Read the article

  • How to Create Auto Playlists in Windows Media Player 12

    - by DigitalGeekery
Are you getting tired of the same old playlists in Windows Media Player? Today we’ll show you how to create dynamic auto playlists based on criteria you choose in WMP 12 in Windows 7.

Auto Playlists
In Library view, click the Create playlist dropdown arrow and select Create auto playlist. On the New Auto Playlist window, type a name for the playlist in the text box. Now we need to choose the criteria by which to filter the playlist. Select Click here to add criteria.

For our example, we will create a playlist of songs that were added to the library in the last week from the Alternative genre. So, we will first select Date Added from the dropdown list. Many criteria will have additional options to configure; in the example below you will see that we have a few options to fine-tune. To filter all the songs added to the library in the last 7 days, select Is After from the first dropdown list, then select Last 7 Days from the second dropdown list. You can add multiple criteria to further filter your playlist. If you can’t find the criteria you are looking for, select “More” at the bottom of the dropdown list; this pulls up a filter window with all the criteria. Select a filter and then click OK when finished. From the Genre dropdown, we will select Alternative.

If you’d like to add pictures, videos, or TV shows to your auto playlists, you can do so by selecting them from the dropdown list under And also include. You will then be able to select criteria for your pictures, videos, or TV shows from the dropdown list. Finally, you can also add restrictions to your music such as the number of items, duration, or total size. We will limit the duration of our playlist to one hour by selecting Limit Total Duration To…, then typing in 1 hour. Click OK.

Our library is automatically filtered and a playlist is created based on the criteria we selected. When additional songs are added to the Windows Media Player library, any new songs that fit the criteria will automatically be added to the playlist. A short sketch of this filtering logic follows below.

You can also save a copy of an auto playlist as a regular playlist. Switch to Playlists view by clicking Playlists from either the top menu or the navigation bar. Select the Play tab and then click Clear list to remove any tracks from the list pane. Right-click on the playlist you want to save, select Add to, and then Play list. The songs from your auto playlist will appear as an Unsaved list on the list pane. Click Save list and type in a name for your playlist. Your auto playlist will continue to change as you add or remove items from your Media Player library that meet the criteria you established; the saved playlist we just created will stay as it is. Editing an auto playlist is easy: right-click on the playlist and select Edit.

Conclusion
Auto playlists are a great way to keep your playlists fresh in Windows Media Player 12. Users can get creative and experiment with the wide variety of criteria to customize their listening experience. If you are new to playlists in Windows Media Player, you may want to check out our previous post on how to create custom playlists in Windows Media Player 12. Are you looking to get better sound from WMP 12? Take a look at how to improve playback using enhancements in Windows Media Player 12.
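Conceptually, an auto playlist is just a saved query that WMP re-evaluates against the library whenever it changes. The Python sketch below models the example above (tracks added in the last 7 days, genre Alternative, capped at one hour of total duration). It is purely illustrative: the Track type and sample data are assumptions, not WMP’s actual internal format.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Track:
    title: str
    genre: str
    date_added: datetime
    duration_secs: int

def auto_playlist(library, genre, days, max_secs):
    """Re-evaluate the saved 'query' that defines the auto playlist."""
    cutoff = datetime.now() - timedelta(days=days)
    # Criteria: Date Added is after the cutoff AND Genre matches.
    matches = [t for t in library
               if t.date_added >= cutoff and t.genre == genre]
    # Restriction: Limit Total Duration To the given cap.
    playlist, total = [], 0
    for track in matches:
        if total + track.duration_secs > max_secs:
            break
        playlist.append(track)
        total += track.duration_secs
    return playlist

# Illustrative library; a real one would come from the WMP database.
library = [
    Track("New Song A", "Alternative", datetime.now() - timedelta(days=2), 240),
    Track("Old Song B", "Alternative", datetime.now() - timedelta(days=30), 200),
    Track("New Song C", "Rock", datetime.now() - timedelta(days=1), 180),
]
for t in auto_playlist(library, "Alternative", days=7, max_secs=3600):
    print(t.title)  # -> New Song A
```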

    Read the article

  • How To Switch Back to Outlook 2007 After the 2010 Beta Ends

    - by Matthew Guay
Are you switching back to Outlook 2007 after trying out the Office 2010 beta? Here’s how you can restore your Outlook data and keep everything working after the switch.

Whenever you install a newer version of Outlook, it converts your profile and data files to the latest format. This makes them work best in the newer version of Outlook, but it may cause problems if you decide to revert to an older version. If you installed the Outlook 2010 beta, it automatically imported and converted your profile from Outlook 2007. When the beta expires, you will either have to reinstall Office 2007 or purchase a copy of Office 2010. If you choose to reinstall Office 2007, you may notice an error message each time you open Outlook. Outlook will still work fine and all of your data will be saved, but the error message can get annoying. Here’s how to create a new profile, import all of your old data, and get rid of the error message.

Banish the Error Message with a New Profile
To get rid of this error message, we need to create a new Outlook profile. First, make sure your Outlook data files are backed up. Your messages, contacts, calendar, and more are stored in a .pst file in your appdata folder. Enter the following in the address bar of an Explorer window to open your Outlook data folder, replacing username with your user name:

C:\Users\username\AppData\Local\Microsoft\Outlook

Copy the Outlook Personal Folders (.pst) files that contain your data. The file name is usually your email address, though it may be different. If in doubt, select all of the Outlook Personal Folders files, copy them, and save them in another safe place (such as your Documents folder).

Now, let’s remove your old profile. Open Control Panel and select Mail. In Windows Vista or 7, simply enter “Mail” in the search box and select the first entry. Click the “Show Profiles…” button. Select your Outlook profile and click Remove. This will not delete your data files, but it will remove them from Outlook. Press Yes to confirm that you wish to remove this profile.

Open Outlook, and you will be asked to create a new profile. Enter a name for your new profile and press OK. Now enter your email account information to set up Outlook as normal. Outlook will attempt to automatically configure your account settings. This usually works for accounts with popular email systems, but if it fails to find your information you can enter it manually. Press Finish when everything’s done. Outlook will now go ahead and download messages from your email account. In our test, we used a Gmail account that still had all of our old messages online. Those messages are already backed up in our old Outlook data files, so we can save time and skip downloading them: click the Send/Receive button at the bottom of the window and select “Cancel Send/Receive”.

Restore Your Old Outlook Data
Let’s add our old Outlook file back to Outlook 2007. Exit Outlook, then go back to Control Panel and select Mail as above. This time, click the Data Files button. Click the Add button at the top left. Select “Office Outlook Personal Folders File (.pst)” and click OK. Now, select your old Outlook data file. It should be in the folder that opens by default; if not, browse to the backup copy we saved earlier and select it. Press OK at the next dialog to accept the default settings. Now, select the data file we just imported and click “Set as Default”.
Now, all of your old messages, appointments, contacts, and everything else will be right in Outlook ready for you. Click OK, and then open Outlook to see the change. All of the data that was in Outlook 2010 is now ready to use in Outlook 2007. You won’t have to wait to re-download all of your emails from the server, since everything is still here ready to be used. And when you open Outlook, you won’t see any error messages, either!

Conclusion
Migrating your Outlook profile back to Outlook 2007 is fairly easy, and with these steps you can avoid seeing an error message every time you open Outlook. With all your data intact, you’re ready to get back to work instead of getting frustrated with Outlook. Many of us use webmail and keep all of our messages in the cloud, but even on broadband connections it can take a long time to re-download several gigabytes of email.
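The backup step at the start of this process is easy to script. Here is a small Python sketch that copies every .pst file from the default Outlook data folder (the same location given above) to a backup directory; verify the paths on your own machine, and close Outlook first so the files are not locked.

```python
import os
import shutil
from pathlib import Path

# Default Outlook data folder, as described above (Windows only).
outlook_dir = Path(os.environ["LOCALAPPDATA"]) / "Microsoft" / "Outlook"
backup_dir = Path.home() / "Documents" / "OutlookBackup"  # any safe place
backup_dir.mkdir(parents=True, exist_ok=True)

# Copy every Personal Folders (.pst) file found in the data folder.
for pst in outlook_dir.glob("*.pst"):
    target = backup_dir / pst.name
    shutil.copy2(pst, target)  # copy2 preserves file timestamps
    print(f"Backed up {pst.name} -> {target}")
```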

    Read the article

• SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database

    - by pinaldave
I recently downloaded Idera’s SQL virtual database and tested it. There are a few things about this tool which caught my attention.

My Scenario
It is quite common in real life that observing or retrieving older data is sometimes necessary, even though the data has changed as time passed. Our full database backup was 40 GB in size, and restoring it on our production server usually took around 16 to 22 minutes, depending on the load the server was under; this range varies from one server to another as per the configuration of the computer. Some other issues we used to have are the following: to restore a large 40 GB database, we needed at least that much free space on our production server; and once in a while we even had to make changes in the restored database and use that changed, restored database for our purpose, making the process more time-consuming.

My Solution
I have heard a lot about Idera’s SQL virtual database tool. Right after we started to test it, we found that it really delivers what it promises. Using this software was very easy, and we were able to restore our database from backup in less than 2 minutes, sparing us from the usual 16–22 minutes; the needful was finished in a total of 10 minutes. Another interesting observation is that no additional space is required for restoring the database: not even a single extra MB on the drive is needed for a complete database restoration. We can use the database in the same way as our regular database, with no additional configuration or setup.

Let us look at the most relevant points of this product based on my initial experience:
- Quick restoration of the database backup
- No additional space required for database restoration
- The virtual database has no physical .MDF or .LDF files
- The restored database is, in fact, the backup file presented as a virtual database
- DDL and DML queries can be executed against this virtually restored database
- A regular backup operation can be run against the virtual database, creating a physical .bak file for future use
- No observed degradation in performance, either on the original database or on the restored virtual database
- Additional T-SQL queries can be run against the virtual database

Well, this summarizes my quick review. As I was saying, I am very impressed with the product and plan to explore it more. There are many features in this tool which I think can be very useful if properly understood. I took a few screenshots using my demo database afterwards; let us see what other things this tool can do besides the activities mentioned. I am surprised by its performance, so I want to know how exactly this feature works, specifically why it does not create any additional files and yet still allows updates on the virtually restored database. I guess I will have to send an e-mail to the developers at Idera and try to figure this out from them. I think this tool is very useful, and it delivers a level of performance well beyond what I expected. Soon, I will write a review of additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn more about your experience with it.

The ‘Virtual’ Part of virtual database
When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization.
In fact, the virtual database is a kind of database which shows up in your SQL Server Management Studio without actually being restored or even created. The tool creates a database in SSMS from the backup of that same database; the backup, however, works virtually the same way as the original database.

Potential Usage of virtual database
As soon as I described this tool to my teammate, his very first reaction was, “Hey, if we have this, then there is no need for log shipping.” I find his comment very interesting, as log shipping is something where logs are moved to another server; here, by contrast, no log updates are applied to the database, so I would rather compare it with snapshot replication. Whatever a snapshot-replicated database can be used and configured for, a virtual database can be used for similarly. I totally believe that we can use it for reporting purposes. In fact, after this database was configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you.

(Screenshots accompany the original post: the virtual database console, hard drive space before and after setup, the Attach Full Backup screens and settings, the sub-60-second online setup, the Point in Time Recovery timeline view, the virtual database summary, and a comparison showing no performance difference between the regular and virtual database.)

Please note that all SQL Server MVPs get a free license of this software. Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database)
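Since the virtually mounted database behaves like a regular database, you can query it from any ordinary client once it is attached through the Idera console. Below is a minimal Python sketch using pyodbc; the server, driver string, and database name ("AdventureWorks_Virtual") are assumptions for illustration, not names the tool guarantees.

```python
import pyodbc

# Hypothetical connection details; the virtual database shows up in
# SQL Server just like a normal database once attached via the tool.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;"
    "DATABASE=AdventureWorks_Virtual;"  # assumed virtual DB name
    "Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Ordinary T-SQL works against the virtually restored backup.
cursor.execute("SELECT TOP 5 name, create_date FROM sys.tables")
for name, created in cursor.fetchall():
    print(name, created)

conn.close()
```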

    Read the article

< Previous Page | 495 496 497 498 499 500 501 502 503 504 505 506  | Next Page >