Is there a way in nginx to redirect everything from domain1.com to domain2.com, except for the homepage?
Right now I have:
server {
    listen 80;
    server_name www.domain1.com domain1.com;
    rewrite ^ http://domain2.com$uri permanent;
}
This works, except that I'd like http://domain1.com (without any additional path) to be left alone and not redirected. Basically, I need to redirect everything else to avoid broken links, but I want the homepage of domain1 to serve a static file.
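For what it's worth, this is roughly the shape I'm imagining (untested, and /var/www/domain1 is just a placeholder for wherever the static homepage would live):

server {
    listen 80;
    server_name www.domain1.com domain1.com;

    # Serve the static file for exactly "/". try_files serves the found file
    # within this location, rather than internally redirecting to /index.html,
    # which would otherwise fall into the catch-all below and get redirected.
    location = / {
        root /var/www/domain1;          # placeholder docroot
        try_files /index.html =404;
    }

    # Everything else keeps the old behaviour.
    location / {
        rewrite ^ http://domain2.com$uri permanent;
    }
}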
I've received some reports from a customer (a very large company) about issues their clients are having with Facebook.
These clients claim that once in a while when they log in to Facebook they end up in someone else's session.
I know that their network traffic is NATed and then proxied before reaching Facebook.com.
However, I'm not able to explain how this issue can occur.
Is it possible that the proxy is not sending the right session back to the clients?
How can they end up in someone else's session, given that Facebook sessions are cookie-based?
Anyone seen this before?
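To illustrate the kind of thing I suspect (purely hypothetical - I don't know what the customer's proxy actually runs): a caching proxy that is told to ignore Set-Cookie could, in principle, cache a response carrying one user's session cookie and replay it to everyone else. In nginx terms the misconfiguration would look roughly like this:

proxy_cache_path /var/cache/proxy keys_zone=shared:10m;

server {
    listen 8080;
    location / {
        proxy_pass https://www.facebook.com;
        proxy_cache       shared;
        proxy_cache_valid 200 5m;
        # Ignoring Set-Cookie allows responses that contain one user's
        # session cookie to be cached and served to other users:
        proxy_ignore_headers Set-Cookie;
    }
}

Is a mistake along these lines plausible, or is there some other mechanism I'm missing?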
I have a 1 TB data disk, and both the BIOS and Windows are reporting a SMART error. At least, I get a SMART event, but it doesn't indicate how serious the failure might be. My system is about six months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second 1 TB disk that I could use to make a full backup. The most important data on this disk is safe, but there's a lot of work data that could be regenerated, though doing so would cost a lot of time.
So I've ordered a 1 TB USB disk, which will arrive in three days. By then I can make a full backup of the data, and afterwards the disk can crash.
But will the disk live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week, or could it die at any moment?
Update: I purchased a 1 TB external disk and spent most of the day making a backup of the 1 TB data disk. It survived that. I then received a new disk, since the old one was still under warranty, and replaced the failing drive. Then I had to spend most of another day putting the backup back. I need to send back the faulty disk, and I now have an extra external disk, which is always practical. :-)
The SMART error did not come with any actual failures on the original disk. I wouldn't advise ignoring these warnings, but the disk had enough life left in it to last a few more days. (Just make sure you have a good backup.)
And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and plenty of space. (In my case, the data wasn't all that important, just practical to have together on one disk.)
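For anyone else in the same spot: the overall health verdict and the raw attribute counters give a rough idea of how urgent things are. A sketch with smartmontools (which also has a Windows build; the device name here is just an example):

smartctl -H /dev/sda   # overall health self-assessment (PASSED/FAILED)
smartctl -A /dev/sda   # attribute table: rising Reallocated_Sector_Ct,
                       # Current_Pending_Sector or Offline_Uncorrectable
                       # counts are the ones to worry about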
We are trying to renew our existing web site certificate on our IIS 7 site under Windows Server 2008 R2, but we continue to get the "Access is denied" error that others have posted.
However, when we go to apply the common fix of making sure the Administrators group has full access to the C:\ProgramData\Microsoft\Crypto\RSA folder and all of its subfolders, we get an "Access is denied" error when changing those permissions.
Yes, we are logged in as an Administrator; it just won't let us modify the group permissions on this folder. Help! We need to renew our certificate before March 2011!
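In case it helps to know what we're about to try next: from an elevated command prompt, taking ownership of the folder tree first and then re-granting the Administrators group. This is untested on our box, so treat it as a sketch rather than something we know works:

rem Take ownership of the folder tree, then re-grant Administrators full control:
takeown /f "C:\ProgramData\Microsoft\Crypto\RSA" /r /d y
icacls "C:\ProgramData\Microsoft\Crypto\RSA" /grant Administrators:F /t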
I have an old Ubuntu 8.10 32-bit with MySQL 5.0.67.
There's 5.7GB of data in it and it grows by about 100MB every day.
About 3 days ago, the MySQL instance began dying suddenly and quietly (no log entry) during the nightly mysqldump.
What could be causing it?
Upgrading MySQL is a long-term project for me, unless there happens to be a specific bug in 5.0.67, in which case I guess I'll just need to reprioritize.
I'm hoping somebody might be familiar with this problem since this is a fairly popular version bundled with Ubuntu 8.10.
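One guess I'm planning to rule out first is the kernel OOM killer taking mysqld down during the dump, since that would match the silent death with nothing in the MySQL error log. Roughly what I intend to check (paths are the stock Ubuntu ones):

grep -i -e 'out of memory' -e 'oom-killer' -e 'killed process' /var/log/syslog /var/log/kern.log
dmesg | grep -i -e mysql -e oom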
Thanks
I am using Ubuntu 11.04 and have attached a Garmin data cable. The device gets recognized:
[17718.502138] USB Serial support registered for pl2303
[17718.502181] pl2303 2-1:1.0: pl2303 converter detected
[17718.513416] usb 2-1: pl2303 converter now attached to ttyUSB0
[17718.513443] usbcore: registered new interface driver pl2303
[17718.513446] pl2303: Prolific PL2303 USB to serial adaptor driver
... but when I run strace cat /dev/ttyUSB0, it hangs on the open call and does not continue any further:
open("/dev/ttyUSB0", O_RDONLY|O_LARGEFILE
If I do the same on Ubuntu 12.04, it blocks on the read(...) call instead, which is fine, as there is currently no data coming in on this port.
I am not sure whether this is just a difference in system configuration or a driver-related problem. How can I track this down further? Unfortunately, I cannot upgrade the old Ubuntu 11.04 system at the moment, for various reasons.
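One untested idea, based on the assumption that the blocking open() is waiting for carrier detect on the serial line: set the CLOCAL flag first (stty itself opens the device non-blocking, so it shouldn't hang), then try reading again:

stty -F /dev/ttyUSB0 clocal cread 9600   # don't wait for carrier; enable the receiver
cat /dev/ttyUSB0

Does that sound like a plausible direction, or is this more likely a regression in that kernel's pl2303 driver?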
Can someone comment on what they would add to the following SOP in terms of best practices? This is being set up on AWS and then, after further testing, back in our datacenter.
Standard Operating Procedure (SOP):
Installation Part 2 - Installation of Software Components on Windows 2008 R2 (Updated).
Step 1: Log on to the host through Remote Desktop.
Step 2: Open Server Manager - Server Roles - install Web Server (IIS) 7.5 with the IIS 6 compatibility features and management compatibility mode (a scripted equivalent is sketched after this list).
Step 3: Open IE/Mozilla to download the software listed below, and save all installation files to a folder called "AWS Server Install Files" for future reference.
.NET Framework 2.0 (download from the internet)
Crystal Reports for .NET Framework 2.0 (x64) (download from the internet)
SQL Server 2005 (AWS image)
Step 4: Once all the software is saved on the local drive, install each package one by one.
Step 5: Navigate to the Desktop folder to install the software listed below.
Microsoft ASP.NET 2.0 AJAX Extensions 1.0 (placed in Desktop\Softwares)
WebEx Recorder (placed in Desktop\Softwares)
WinRAR (placed in Desktop\Softwares)
Step 6: Make sure all the software is working fine.
Step 7: Inspect the entire server once.
Step 8: Log off & stop the instance.
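As a possible addition to Step 2, a scripted equivalent would make the IIS installation repeatable across instances. A sketch using the Server Manager PowerShell module on 2008 R2 (the feature names are my best guess at the IIS 6 compatibility set; verify against Get-WindowsFeature before relying on it):

Import-Module ServerManager
# Web Server (IIS) plus the IIS 6 management compatibility components:
Add-WindowsFeature Web-Server, Web-Mgmt-Compat, Web-Metabase, Web-Lgcy-Mgmt-Console, Web-Lgcy-Scripting, Web-WMI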
I am a solo developer, and the sites I'm deploying are very small, usually hobby sites. I have a few questions about Amazon's services.
Is there a reason for me to use Elastic Beanstalk, or should I just stick with a single EC2 instance?
Should I use RDS for the database? I've heard that I could just install a database on my EC2 instance, making it cheaper. I'm trying to keep everything as cheap as possible.
I need to point custom domains at my sites. I'm pretty sure that means I have to deal with Elastic IPs. Do those work with Beanstalk, or only with individual EC2 instances?
Thanks in advance!
Hi,
At work I'm on an Exchange server, so I'm getting my work-related email there. I wanted to add my personal email account (POP or IMAP) and get those emails too.
I am afraid that if I do that, my workplace might still have access to my personal emails and be able to see what I send and receive. Is that the case?
Any suggestions?
I'm in the middle of upgrading and purchasing licensing for 3 of our servers.
One will be a Windows Server 2008 machine, running SQL Server 2008.
The other two machines will be domain controllers, both running Windows 2003.
Our organisation has 30 Users.
I understand (through our reseller) that a Windows 2008 licence gives "downgrade" rights to use 2003.
Realistically, for the above setup of 3 machines, will I just need one set of 30 CALs for 2008?
We have set up IPsec and L2TP on Linux. One question that came up (due to firewall management policy) is whether it's possible to have one virtual interface instead of one per connected client.
Now we have:
ppp0 serverip clientip1
ppp1 serverip clientip2
Want to have:
l2tp_tun serverip serverip
like OpenVPN's tun interfaces, and then be able to push an IP address and routes to each client.
I've installed nginx server on my Mac from MacPorts: sudo port install nginx.
Then I followed the recommendation from the port installation output, created the launchd startup item for nginx, and started the server. It works fine (after I renamed nginx.conf.example to nginx.conf and mime.types.example to mime.types), but I can't stop it... I tried sudo nginx -s stop, but this doesn't stop the server: I can still see the "Welcome to nginx!" page in my browser at http://localhost, and I still see nginx's master and worker processes with ps -e | grep nginx.
What is the best way to start/stop nginx on Mac?
BTW, I've added "daemon off;" to nginx.conf, as recommended by various resources.
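My guess at what's happening: with "daemon off;" nginx stays in the foreground so launchd can supervise it, which probably also means launchd restarts it as soon as I stop it by hand with nginx -s stop. If that's right, the proper way would be to go through MacPorts/launchd instead (untested guess; the plist path is the standard MacPorts one):

sudo port unload nginx    # stop (unloads the launchd job)
sudo port load nginx      # start it again
# or talk to launchd directly:
sudo launchctl unload /Library/LaunchDaemons/org.macports.nginx.plist
sudo launchctl load /Library/LaunchDaemons/org.macports.nginx.plist

Is that the intended way, or am I missing something?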
Thank you.
We have a central HQ building and a lot of small branch offices connecting via VPN, and we want to implement AD (if you can believe we still haven't). We want everyone to log in using domain accounts and be policed centrally.
We are OK with having an RODC in a branch office with around 10 computers. But we have these small branches with only two to four PCs. Some of these branches connect to HQ via IPsec site-to-site VPN, some via remote access (client-based) VPN.
So there is no problem with the ones that have a local RODC or that connect to the HQ DCs via a VPN router. But what about the small branches? We don't really want to set up a machine there, nor do we want to invest in Windows Server licenses or fancy network equipment.
Also, the problem is that we cannot reach the HQ DCs over the VPN at logon, because we are not yet logged in and connected to the HQ internal network, so the DCs aren't reachable.
What is typically done in this situation when central management of policies on those PCs is needed? Or is it better to let them loose and use local policies and accounts?
My Mac Mini outputs to my two new monitors - Dell U2311Hs.
The LED on the bezel displays blue when receiving a signal, or yellow otherwise. Both screens are displaying blue.
It also seems my Mini can see both of them...
However, one of them is black. It just displays black, but appears to be receiving a signal (when I turn the Mac off, it then displays No Signal).
To make things weirder, on startup the boot screen (white with the Apple logo) appears on the right monitor (the one that now displays black).
Occasionally, it flickers up on the black screen for 1 second.
I have tried Detect Displays. It appears to do nothing.
I'm also running a dual monitor KVM. Video connections are DVI-D.
How can I fix this situation?
Thanks.
Update
This is the weirdest thing: I used the DVI-D cable that came with the KVM, and it seems to have fixed it. I hadn't bothered with it before because it looks identical to any other DVI cable (in form and pin-out).
So, I will accept an answer if someone can tell me what the difference between these cables might be.
I have a thinkpad t61 with a UPEK fingerprint reader. I'm running ubuntu 9.10, with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session.
I receive an error below the gnome login that says
"Could not locate any suitable fingerprints matched to available hardware."
What is causing this?
Here are the contents of my /etc/pam.d/common-auth file:
#
# /etc/pam.d/common-auth - authentication settings common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
# traditional Unix authentication mechanisms.
#
# As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
# To take advantage of this, it is recommended that you configure any
# local modules either before or after the default block, and use
# pam-auth-update to manage selection of other modules. See
# pam-auth-update(8) for details.
# here are the per-package modules (the "Primary" block)
auth sufficient pam_fprint.so
auth [success=1 default=ignore] pam_unix.so nullok_secure
# here's the fallback if no module succeeds
auth requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth required pam_permit.so
# and here are more per-package modules (the "Additional" block)
auth optional pam_ecryptfs.so unwrap
# end of pam-auth-update config
#auth sufficient pam_fprint.so
#auth required pam_unix.so nullok_secure
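A couple of untested checks I'm planning to run next, in case they point at the cause (paths are the stock Ubuntu ones):

grep common-auth /etc/pam.d/gdm   # is this common-auth file actually pulled in at the GDM login stage?
ls ~/.fprint/prints/              # were my fingerprints enrolled at all?
                                  # (assuming the old libfprint ~/.fprint storage path)

If the config above looks fine, is there something GDM-specific that I'm missing?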
My company already has a "local" backup strategy, but is willing to also back up data to our remote dedicated server as an additional "plus".
Some info:
Both machines are Windows Server (client is 2003, server is 2008)
Administrator rights on both machines
Valid SSL Certificate available
FTP/IIS Server available and in use
Encryption required during transfer & storage
Free space is not a problem
Which software (both client- and server-side) would you advise us to use?
So, we're wiping clean all the PCs at our office and migrating them to a new server cluster and a new domain. Last night I tested one PC, and it mostly worked, except that it refuses to join the domain.
Now, our domain is named something like EXAMPLE.COM. When I just type EXAMPLE, the PC can't find the domain controller, even though I can ping it fine. If I type EXAMPLE.COM, it seems to work. How can I get it to work with just EXAMPLE? That's how I joined all the new servers in the cluster (about 20 of them), and I haven't had any issues...
The only difference between the Windows 7 PC(s) and the servers is that the clients will be on a 10.0.3.X network, whereas the servers are on a 10.0.1.X network.
Oh, the domain controller and all the other servers are Windows Server 2008 R2.
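My working theory (which I'd love confirmed or shot down) is that joining with the short name EXAMPLE relies on NetBIOS name resolution, which works by broadcast or WINS and so doesn't cross from the 10.0.3.X client network to the 10.0.1.X server network, whereas EXAMPLE.COM is resolved through DNS. A couple of checks I plan to run from one of the affected Windows 7 clients (sketch only; substitute our real domain name):

nslookup -type=SRV _ldap._tcp.dc._msdcs.example.com   # can DNS locate the DCs from the client subnet?
nltest /dsgetdc:EXAMPLE                               # can the short/NetBIOS name locate a DC?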
Suggestions will be highly appreciated!
I thought it would be cool to use Mozilla's Prism to create a webapp for min.us, but drag and drop is disallowed because the site doesn't recognize the program as Firefox, Chrome, or Safari, which are apparently the only browsers allowed to do drag and drop, for fear that something will be horribly broken.
I'm pretty sure Prism runs on the same engine as Firefox, though I wouldn't be surprised if it's an older version, since Prism is kind of a forgotten beta.
Anyway, like the title says, I want to make Prism webapps look like Firefox to websites, to unlock these features.
Also, if it can only be done with Fluid, then answers regarding that will be fine too. I'm not sure what engine Fluid runs, though.
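If it helps, my current (untested) plan is to rely on Prism being XULRunner-based and set the same pref Firefox uses for user-agent spoofing, in the webapp profile's prefs.js or user.js. The UA string below is just an example Firefox UA, not anything I know min.us specifically checks for:

// Pretend to be a desktop Firefox build:
user_pref("general.useragent.override",
          "Mozilla/5.0 (Windows NT 6.1; rv:5.0) Gecko/20100101 Firefox/5.0");

Would that be enough, or does the site do deeper feature detection?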
What do you need to know to at least get your foot in the door? Assume someone who doesn't have a college degree (yet) but will eventually get one.
My guess is HTML, CSS, JavaScript, and PHP, plus Photoshop, Dreamweaver, and SQL.
And being familiar with using a web host to get sites live, like knowing how to use cPanel. It's probably a very inaccurate and narrow guess, but that's what I think right now. I don't know exactly.
I have a Linux-based web infrastructure consisting of 15 virtual machines and over 50 different services. It is fully controlled by Chef. Most of the services are developed internally.
Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get update once per day and we definitely do not want to run it unconditionally on every chef-client wake-up. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature).
The current process has a number of drawbacks we want to address. First, it is asynchronous: the deployment script does not check the chef-client logs after the restart, so we don't even know whether the deployment was successful, and it doesn't even wait for the Chef clients to complete their runs. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure that using chef-client for deployment is legitimate at all; perhaps we have been doing it wrong from the start.
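One direction we're considering instead of restarting the daemons (a sketch only; the search query and SSH user below are assumptions, not our real settings): have the build system trigger synchronous runs on just the affected nodes with knife ssh, so it sees each chef-client run's output and failures directly rather than restarting daemons and hoping.

# Run chef-client once, only on the nodes matching the search query:
knife ssh 'role:web' 'sudo chef-client --once' -x deployer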
Please share your thoughts/experience.
Facebook photo privacy is more complex than most people think - including the bloggers who fill the Internet trying to explain it in simple terms.
Not only is there the basic album-level privacy setting to consider, but also what happens with Tagging (and the related privacy settings), as well as the Share button when clicked by a Friend.
Has anybody seen a good, engineering-type (e.g. UML) diagram? I envision it covering the various privacy "states" a photo can be in, what causes state transitions, and the characteristics of each state.
Thanks
I'm looking for a very good example of a very poorly designed web site. For example: use of <blink> mixed with many 'cute' animated GIFs (a common home page in the mid-'90s).
It needs to display relatively correctly in the popular web browsers of today.
Thank you!
When I open Firefox, it goes back to whatever I had open when I closed it.
I want it to open my home page and nothing else.
Not the stuff I closed it with.
How do I do that? Thanks.
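(If it matters, I think the underlying setting is the browser.startup.page preference, where 1 means "show my home page" and 3 means "restore the previous session", e.g. set in user.js as below, but I'd rather use a menu option if there is one.)

// Show the home page on startup instead of restoring the last session:
user_pref("browser.startup.page", 1);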