Search Results

Search found 21908 results on 877 pages for 'content catalog'.


  • Postfix character encoding?

    - by Camran
    I use Postfix as a mail server on Ubuntu, and I use PHP to send emails. The problem is that none of my emails are encoded properly by the mail software my VPS provider uses. According to them, the problem lies with me. It is only the name field that isn't encoded properly: for example, "Björn" becomes "BjÃ¶rn" in my emails. However, when I echo the $name, it outputs "Björn", which is correct. Gmail and Hotmail also show it correctly. The strange part is that the text (the message itself) is encoded properly. I use the following for sending mail:

        $headers  = "MIME-Version: 1.0"."\n";
        $headers .= "Content-type: text/plain; charset=UTF-8"."\n";
        $headers .= "From: $name <$email>"."\n";
        $name = iconv(mb_detect_encoding($name), "UTF-8//IGNORE//TRANSLIT", $name);
        // I HAVE TRIED WITH AND WITHOUT THE LINE ABOVE, NO DIFFERENCE
        mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]');

    I have tried with and without the iconv line, no luck. The last thing I can think of is Postfix itself: could there be a setting for character encoding there? Anybody know?
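    A minimal sketch of one likely fix, assuming PHP's mbstring extension is available: the subject is already RFC 2047-encoded, but the display name in the From header is raw UTF-8, so any relay that assumes ASCII headers can mangle it. Encoding the name the same way would look like this (variable names as in the question):

        <?php
        mb_internal_encoding('UTF-8');
        // RFC 2047-encode the display name, just like the subject already is:
        $encodedName = mb_encode_mimeheader($name, 'UTF-8', 'B');
        $headers  = "MIME-Version: 1.0\r\n";
        $headers .= "Content-Type: text/plain; charset=UTF-8\r\n";
        $headers .= "From: $encodedName <$email>\r\n";
        mail($to, mb_encode_mimeheader($subject, 'UTF-8', 'B'), $text, $headers);

    If that fixes it, Postfix was never involved; it normally passes header bytes through unchanged.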

    Read the article

  • One server with multiple desktop "heads" via VNC

    - by Alexis K
    I am managing a system of kiosks. Each kiosk currently runs a web browser with the kiosk's application running in the browser, and each kiosk needs to display separate content. At times, the application running in the web browser freezes, and I have to go out to the site to refresh the page. I want to see if there is a way to have one central server with multiple browser "heads". Each kiosk would then run a program like VNC to display one of the heads. That way, when the program freezes, I just log in to the central server and refresh the page. Getting VNC or other remote desktop software installed on the clients is no problem. What I am looking for is a way to have VNC remote into a specific head. Does such a thing exist? Or do I have to run a VM for each kiosk to remote into? Any advice, pointers, or solutions would be helpful.
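    A hedged sketch of one approach, assuming a Linux server with Xvfb and x11vnc available: give each kiosk its own virtual X display, each with its own browser instance and its own VNC port. Display numbers, ports, and the URL are illustrative.

        # Kiosk 1: virtual framebuffer on display :1, exported on VNC port 5901
        Xvfb :1 -screen 0 1280x1024x24 &
        DISPLAY=:1 firefox http://example.com/kiosk1 &
        x11vnc -display :1 -rfbport 5901 -forever -shared &
        # Kiosk 2 gets display :2 / port 5902, and so on. Each kiosk's VNC
        # viewer connects to its own port, and a frozen page can be refreshed
        # by attaching to that same display from any desk.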

    Read the article

  • Suddenly can't send E-Mails with Apple Mail to Gmail SMTP

    - by slhck
    Hi all, I have a weird problem that started just today. I am using Apple Mail on a Leopard machine, connecting to Gmail. Fetching e-mail works just fine, and my SMTP settings are correct. Still, I can't send mail: a popup says that "transferring the content to the mail server" failed (translated from German; the wording may differ in English OS X versions). I have verified the following:

    - My SMTP settings are definitely correct. I have not changed them, and the issue appeared today. I also went through Apple's online configuration for Gmail accounts and did not have to adjust any setting.
    - Network diagnosis connects to both the POP and SMTP servers without a problem (all green lights).
    - Telnet shows me the HELO message from the Gmail servers, so there's no authentication failure.
    - Console.app shows no messages related to "mail" when I try to send, so there's no specific error message.
    - The mail I'm trying to send has no attachment; it is plain text only.
    - I can log in to gmail.com and send mails without a problem.
    - The recipient address exists and contains no syntax errors.
    - I also cannot send mails to myself.
    - When using another IP and ISP (through a VPN), it still doesn't work.

    As for my settings: I connect to smtp.gmail.com, and under advanced settings I choose password-based authentication with user [email protected] and my password. I let Apple Mail try the default ports (for SSL and TLS, respectively). Again: I have not changed a thing between yesterday and today. What is causing this strange behavior? Any help would be much appreciated.
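    A hedged way to test the sending path outside Apple Mail, assuming Gmail's standard submission ports: if a banner appears and a manual EHLO gets a reply, the network path is fine and the fault is in the client (rebuilding the account, or removing and re-adding its SMTP server entry, would be the next step).

        # TLS on the dedicated SSL port:
        openssl s_client -connect smtp.gmail.com:465 -quiet
        # ...or STARTTLS on the submission port:
        openssl s_client -connect smtp.gmail.com:587 -starttls smtp -quiet
        # Once connected, type:  EHLO test.local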

    Read the article

  • Find and Replace String in filenames

    - by shekhar
    I have thousands of files with no specific extensions. What I need to do is search for a string in the filename and replace it with another string, then search for a second string and replace it with another, and so on; I have multiple strings to replace with other multiple strings. For example:

    - "abc" in a filename is replaced with "def" (the string "abc" may be in many files)
    - "jkl" in a filename is replaced with "srt" (the string "jkl" may be in many files)
    - "pqr" in a filename is replaced with "xyz" (the string "pqr" may be in many files)

    I currently use an Excel macro to get the file names into Excel, preserving the original names in one column and building the desired names in another, and then create a batch file of lines like:

        rename Path\OriginalName1 NewName1
        rename Path\OriginalName2 NewName2

    The problem with this procedure is that it takes a lot of time, as there are many files, and since I am using Excel 2003 there is a limit on the number of rows as well. I need a script in batch like:

        replacestr abc with def
        replacestr pqr with xyz

    in a single directory. Would it be better to do this in a Unix script?
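    A minimal sketch of the Unix-shell approach, assuming bash; the substring pairs are the ones from the question, and the loop works on a single directory:

        #!/bin/bash
        cd /path/to/directory || exit 1
        for pair in "abc:def" "jkl:srt" "pqr:xyz"; do
            old=${pair%%:*}
            new=${pair##*:}
            for f in *"$old"*; do
                [ -e "$f" ] || continue            # no file matched this pattern
                mv -- "$f" "${f//$old/$new}"       # replace every occurrence in the name
            done
        done

    Each pass renames every file whose name contains the first member of the pair; ${f//old/new} is bash's replace-all parameter expansion, so "abcabc.txt" would become "defdef.txt".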

    Read the article

  • SendMail not working in CentOS 6.4

    - by Kane
    I am trying to send e-mails from my CentOS 6.4 box, but it does not work. My knowledge about servers is quite limited, so I hope someone can help me. Here is what I did.

    First I tried to send an email using the "mail" command, but it was not on the OS, so I installed it:

        # yum install mailx

    After that, I tried sending an email using the "mail" command, but it did not send anything. I checked on the internet and realized I needed an e-mail server like sendmail, so I installed it:

        # yum install sendmail sendmail-cf sendmail-doc sendmail-devel

    After that, I configured it following some tutorials. First, the sendmail.mc file:

        # vi /etc/mail/sendmail.mc

    I commented out the DAEMON_OPTIONS line:

        BEFORE:  DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
        AFTER:   dnl DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl

    I checked that the following lines are correct:

        FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
        ...
        FEATURE(use_cw_file)dnl
        ...
        FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl

    Then I updated sendmail.cf:

        # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

    I opened port 25 by adding the proper line to the iptables file:

        # vi /etc/sysconfig/iptables
        -A INPUT -m state --state NEW -m tcp --dport 25 -j ACCEPT

    Then I restarted iptables and sendmail:

        # service iptables restart
        # service sendmail restart

    So I thought that would be OK, but then I tried:

        # mail '[email protected]'
        Subject: test subject
        test content
        .

    I checked the mail log:

        # vi /var/log/maillog

    And this is what I found:

        Aug 14 17:36:24 dev-admin-test sendmail[20682]: r7D8RItS019578: to=<[email protected]>,
        ctladdr=<[email protected]> (0/0), delay=1+00:09:06, xdelay=00:00:00, mailer=esmtp,
        pri=2460500, relay=alt4.gmail-smtp-in.l.google.com., dsn=4.0.0,
        stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com.

    I do not understand why there is a connection timeout. Am I missing something? Can anyone help me, please? Thank you.
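    A hedged check worth doing first: the iptables rule above opens inbound port 25, but the "Connection timed out" in the log is for an outbound connection to Google's MX host, and many hosting providers block outbound port 25 entirely. From the server:

        # Can we reach Gmail's MX on port 25 at all?
        telnet alt4.gmail-smtp-in.l.google.com 25

    If that hangs and times out, the block is upstream and no sendmail setting will fix it; the usual workaround is relaying through the provider's own mail server via a SMART_HOST definition in sendmail.mc (the hostname here is hypothetical):

        define(`SMART_HOST', `relay.your-provider.example')dnl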

    Read the article

  • I want to use OpenVPN to access the web and email from China. How?

    - by gaoshan88
    My question: How do I use my existing OpenVPN setup to enable secure, remote web surfing and email checking from open wireless hotspots?

    Some long-winded details: I am running Ubuntu and have OpenVPN up and working fine as a server, and my client machine connects fine as well. However, that just gets me a secure connection to my home network. What I want is to be able to connect to my VPN server and surf the web or check email securely from anywhere with an open wireless connection. I am frequently in China, and having secure, unblocked access would be a boon (especially since I like to work from tea houses and coffee shops, and I've already had a password sniffed and hacked once). I already know how to tunnel over SSH via a SOCKS proxy using something like:

        ssh -ND 8887 -p 22 [email protected]

    but since I have OpenVPN, I figure why not try it? So... what are the steps involved in making it so I can connect to my VPN and then surf and check mail to my heart's content (slowly, to be sure, but at least it would be secure)? Thx!
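    A hedged sketch of the usual OpenVPN recipe, assuming a Linux server whose internet interface is eth0 and a VPN subnet of 10.8.0.0/24 (adjust both to the existing setup): push a default route and a DNS server to clients, then enable forwarding and NAT on the server.

        # /etc/openvpn/server.conf additions (illustrative):
        push "redirect-gateway def1 bypass-dhcp"    # send all client traffic through the tunnel
        push "dhcp-option DNS 8.8.8.8"              # use a resolver reachable through the tunnel

        # On the server shell:
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    With redirect-gateway pushed, the client routes everything through the VPN, so browsing and mail from a hotspot traverse the encrypted tunnel to home and exit from there.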

    Read the article

  • How to avoid index.php in Zend Framework route using Nginx rewrite

    - by Adam Benayoun
    I am trying to get rid of index.php in the default Zend Framework route. I think it should be corrected at the server level and not in the application (correct me if I am wrong, but I think doing it on the server side is more efficient). I run Nginx 0.7.1 and php-fpm 5.3.3. This is my nginx configuration:

        server {
            listen *:80;
            server_name domain;
            root /path/to/http;
            index index.php;
            client_max_body_size 30m;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }
            location /min {
                try_files $uri $uri/ /min/index.php?q=;
            }
            location /blog {
                try_files $uri $uri/ /blog/index.php;
            }
            location /apc {
                try_files $uri $uri/ /apc.php$args;
            }
            location ~ \.php {
                include /usr/local/etc/nginx/conf.d/params/fastcgi_params_local;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SERVER_NAME $http_host;
                fastcgi_pass 127.0.0.1:9000;
            }
            location ~* ^.+\.(ht|svn)$ {
                deny all;
            }
            # Static files location
            location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                expires max;
            }
        }

    Basically, www.domain.com/index.php/path/to/url and www.domain.com/path/to/url serve the same content. I'd like to fix this using an nginx rewrite. Any help will be appreciated.
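    A hedged sketch of one fix (untested against 0.7.1 specifically): permanently redirect any request that arrives with the /index.php prefix to the clean URL, so only one form stays canonical. This would go inside the server block above, before the location blocks:

        # /index.php/path/to/url  ->  301  /path/to/url
        rewrite ^/index\.php/(.*)$ /$1 permanent;

    The application still receives the rewritten request through the existing try_files fallback, so Zend routing is unaffected; visitors and search engines just stop seeing the index.php form.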

    Read the article

  • Why isn't my phone charging with some micro USB cables?

    - by Jacxel
    I ordered 3 micro-USB cables on eBay. My phone was at about 50% battery and I wanted to use it as a hotspot for some browsing on my laptop, so I plugged it in; a charging icon appeared on the phone, my laptop showed it as a connected USB device, and so I went about my business. About 30 minutes later I checked the phone and to my dismay saw 45% battery. But ah, I thought, I have been putting the poor little thing under too much pressure: acting as a WiFi hotspot must drain the battery quicker than it can charge via USB, and perhaps my laptop's USB port doesn't output enough power. Undeterred, I continued on, and when I was going to bed I plugged the USB cable into a mains adapter, switched everything battery-consuming off and, content, went to sleep. The next morning I was awoken by my phone's alarm, which got cut off unexpectedly. I attempted to unlock my phone, which showed no more signs of life. Why isn't my phone charging with these new USB cables? For clarity:

    - They transfer data with no problems.
    - The phone appears to be charging, showing all the signs and lights it normally would.
    - The cable that came with the phone works as you would expect, so it's not a fault with the phone.
    - I think they slow the discharging of the phone, but I could be wrong.

    Are these just bad quality cables? Is there a way to fix this issue?

    Read the article

  • Cannot Connect To VMWare Guest OS Using Either RDP or VNC

    - by Humanier
    I have a PC (Windows XP SP3) with VMware Workstation 7 installed. VMware hosts Windows Server 2003 Enterprise Edition R2. RealVNC (4.1.3) is installed on both OSes, and both of them use Hamachi2. The host OS (WinXP) also runs ZoneAlarm Firewall, with the Hamachi network set as trusted. My goal is to allow RDP and VNC connections to the guest OS (Windows Server 2003). Both options work absolutely fine if I connect from the host OS. However, I have problems when other computers from our Hamachi network try to connect to the guest OS (Win2K3):

    - RDP connections: the RDP window opens, shows black content, and after 15-20 seconds fails with an error.
    - RealVNC connections: users are able to connect, but all they see is a black screen inside the VNC window. At the same time, their input (keystrokes or mouse moves/clicks) is visible when looking at the console window of the Win2K3 machine.

    I'd really appreciate any ideas on how to resolve these problems.

    Read the article

  • How can I store logs and meet compliance requirements for free?

    - by Martin
    I am trying to keep long-term logs of an app in such a way that it could plausibly be demonstrated to third parties or a court that the application processed certain data at a given time. The data can be represented in XML or text format. A simple gzipped log is not plausible, as I may have added or modified data afterwards, whereas an external logging service would be overkill. Cost is an issue; we are not dealing with financial data or anything like that, but rather simple user-generated content, where some malicious users have tried to blame the operator in the past when things escalated and went to court. My question: Is there some kind of signing software for Linux that signs each element of a log in such a way that it can easily be shown that no element was added or modified afterwards? Plug-ins for some free Splunk alternatives would be fine too. Ideally the software I am looking for should be under the GPL or a similar license. I could probably achieve something like this by using PGP/GPG signing functions and including the previous elements' signatures within the following element, but I would prefer to use a program where you do not have to argue about the validity of your own code.

    Note to mods: I am not asking this question on Stack Overflow because I am not looking to write my own code, for the reasons described above. I think this question fits Server Fault rather than Super User, as server-side logging software is discussed here rather than on Super User.
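    A minimal sketch of the hash-chaining idea from the question, in plain shell with sha256sum and GnuPG; the file name and record format are made up for illustration. Each record's hash covers the previous record's hash, so editing or inserting an earlier line breaks every hash after it, and a periodic detached signature anchors the whole chain to a point in time:

        #!/bin/sh
        # append_record.sh - append one tamper-evident record to audit.log
        entry="$(date -u +%Y-%m-%dT%H:%M:%SZ) $1"
        prev=$(tail -n 1 audit.log 2>/dev/null | cut -d' ' -f1)   # empty for the first record
        hash=$(printf '%s %s' "$prev" "$entry" | sha256sum | cut -d' ' -f1)
        printf '%s %s\n' "$hash" "$entry" >> audit.log

        # Periodically (e.g. from cron), sign the chain so far:
        #   gpg --armor --detach-sign --output "audit.log.$(date +%F).sig" audit.log

    For the "don't argue about your own code" requirement, a trusted timestamping service (RFC 3161) over the chain head is the usual off-the-shelf answer; OpenSSL's ts subcommand can produce and verify such tokens.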

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted in to receiving email messages from our application. When preparing such a message, we set the following SMTP headers:

        FROM: [email protected]
        TO: [email protected]
        SENDER: [email protected]

    We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient: when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the Sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc.). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like:

        571 incorrect IP - psmtp (in reply to RCPT TO command)

    I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the web app keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header and prepend the author's name/address to the subject, but this seems a little clumsy.
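    A hedged note on the mechanism: that rejection pattern is consistent with the receiver validating the connecting IP against the From domain's published sender policy (SPF or a similar authorization list) and, as suspected, ignoring the Sender header. For illustration only, a hypothetical SPF TXT record that a domain owner could publish to authorize an outside application server looks like:

        ; DNS zone of authordomain.com (values are illustrative)
        authordomain.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.25 ~all"

    Since a web app cannot publish records for its authors' domains, the fallback of putting the company address in FROM (often with a Reply-To pointing at the author) is the pattern most bulk senders have converged on.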

    Read the article

  • RDP Connection to Windows 7 stays really slow

    - by Pavlo
    I have an issue with connecting to Windows 7 via RDP. I can open an RDP session, but regardless of any settings, the response times are really long. This is particularly the case when opening a web page in a browser; I've tried IE, Firefox and Google Chrome. I also use an RDP connection to a Windows 2008 Server from the same client machine, and the speed is perfectly normal with all features turned on. We have Gigabit Ethernet here, so I don't think it can be the client's fault. As for the Windows 7 machine, I've tried shutting all the graphics features off and turning the color depth down to 256 colors. Result: the same. If I work locally on the machine, I cannot see any lags. What else have I tried:

    - Using the old RDP 5 client from Microsoft
    - Setting the network autotuninglevel as seen here

    Do you have any ideas? Thanks in advance!

    Update: the problem seems to be with rendering window contents. All the window borders and panes are rendered pretty quickly, but the content shows up very slowly. Also, mouse movements are recognized by the Win 7 box only after a delay. Are there some hidden settings in RDP where one could turn some advanced features off or turn some caching on? I use bitmap caching, but this apparently doesn't help.

    Read the article

  • Servers/RAM for a social network: how many?

    - by Marty
    I am launching my social network soon and am looking into hosting. The question I am lost on: do I need separate servers for web vs. database vs. image handling, since there is photo sharing? Or does one server handle it all? Also, is more RAM better? If I get 50 GB of RAM, is that better than having 8 GB?

    EDIT: It is PHP (CodeIgniter) and MySQL for now (I may switch to a NoSQL DB later if demand calls for it), and I will also be using memcache. Concept-wise it is similar to Yelp: geographically based, with lots of user content and image sharing, plus live feeds and privacy levels. The user plan is an open question; without testing the demand I can't give a number, but the concept is unique - no one out there has the set of features I am releasing - so it could grow. Ideally I want to plan for handling about 1-2 million views/month from launch. If it goes beyond that, then I will upgrade.

    Read the article

  • How to include worksheet 3 and 4 in a cell formula provided?

    - by user21255
    I have kindly been given this formula with an explanation of how it works. Insert this formula into cell B4 of the sheet "Cases":

        =IF(NOT(ISBLANK('1st'!B25)),'1st'!B25,
          IF(NOT(ISBLANK(INDIRECT("'2nd'!R" & (ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))-4+25) & "C" & COLUMN(B4),FALSE))),
            INDIRECT("'2nd'!R" & (ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))-4+25) & "C" & COLUMN(B4),FALSE),
            ""))

    Copy the formula to the other cells in the worksheet; the relative addresses will adjust automatically. The formula works like this:

    1. Check if there is content in 1st. If yes, copy it.
    2. If not, find out how many entries there are in 1st in total. (This is done by using the COUNTA function on the whole B column in 1st and subtracting the number of non-empty cells above the actual case data.)
    3. Use this information together with the current cell's number to find out the location of the cell that has to be copied from 2nd.
    4. Create the address of the cell and use the ISBLANK function on the INDIRECT function with that address to check if the cell is empty.
    5. If it is not, use the INDIRECT function again to display it. If it is empty, just display an empty string.

    Now, this works fine when I have only two sheets. But say I want to include a third and fourth sheet (named 3rd and 4th respectively): what should I add to the formula above, and where? There are actually 31 sheets, but if I know how to add the 3rd and 4th sheet to the formula, I can figure out how to do the rest. Thanks
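    A hedged, schematic sketch of the extension (not a drop-in formula): the "" fallback of the innermost IF is replaced by another ISBLANK/INDIRECT pair aimed at the next sheet, and each sheet's row offset must subtract the entry counts of all earlier sheets, not just '1st'. The structure for three sheets would be:

        =IF(NOT(ISBLANK('1st'!B25)), '1st'!B25,
          IF(NOT(ISBLANK(INDIRECT(addr_2nd,FALSE))), INDIRECT(addr_2nd,FALSE),
            IF(NOT(ISBLANK(INDIRECT(addr_3rd,FALSE))), INDIRECT(addr_3rd,FALSE),
              "")))

    Here addr_2nd stands for the "'2nd'!R...C..." string from the original formula, and addr_3rd is the same string with '3rd' and a row offset that also subtracts the count of entries in '2nd'; with hypothetical count_1st and count_2nd placeholders, something like:

        "'3rd'!R" & (ROW($B4) - count_1st - count_2nd - 4 + 25) & "C" & COLUMN(B4)

    With 31 sheets this nesting becomes unmanageable (and Excel 2003 caps nesting at 7 levels), so computing each sheet's entry count in a helper row and driving a single INDIRECT from a lookup over those counts is probably the more workable route.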

    Read the article

  • Client unable to access OWA website after temporarily changing SSL certificate on the server

    - by Lorenz Meyer
    I have the following issue: one client computer (Windows XP) cannot access the OWA website, while all other client computers can (except another one in the same remote office).

    How this happened: I temporarily changed the SSL certificate on the Exchange server yesterday. After a few minutes I reverted back, and now the same certificate that had been installed for years is back again. During those few minutes, they were in OWA on this computer and got a certificate error.

    What exactly happens: Internet Explorer displays the error "Internet Explorer cannot display the webpage", Firefox displays "The connection was reset", and Chrome shows "This webpage is not available. The connection to ... was interrupted."

    What I have already tried:

    - Restarted the client computer
    - Restarted the Exchange server
    - Deleted Internet Explorer's browsing history
    - In IE, Internet Options, Content tab, under Certificates, cleared the SSL state
    - Restored Internet Explorer to its default settings
    - Looked into certmgr.msc, but did not find a certificate related to the issue

    What else could I do to narrow down the origin of this problem (or better: resolve it)? Can you give any advice?

    Read the article

  • Index a low-cost NAS on Windows 7

    - by JcMaco
    Has anyone found a way to index the files stored on network-attached storage in Windows 7, so that the files are available in Windows Search and Libraries? I am referring to cheap, widely available NAS devices like the Western Digital My Book series that run an embedded Linux server. Similar question: http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html

    EDIT: Windows Help proposes making the files stored on the NAS available offline. This is obviously not a good solution if the NAS holds more data than the client can store:

        If the folder is on a network device that is not part of your homegroup, it can be included as long as the content of the folder is indexed. If the folder is already indexed on the device where it is stored, you should be able to include it directly in the library. If the network folder is not indexed, an easy way to index it is to make the folder available offline. This will create offline versions of the files in the folder, and add these files to the index on your computer. Once you make a folder available offline, you can include it in a library. When you make a network folder available offline, copies of all the files in that folder will be stored on your computer's hard disk. Take this into consideration if the network folder contains a large number of files.

    Read the article

  • Windows 7 ICS client web failure

    - by n8wrl
    I have several Windows 7 PCs connected on a LAN via a hub. One has a Verizon 3G connection and works great. I have Internet Connection Sharing enabled on it, which automagically set the LAN connection to 192.168.137.1 and enabled DHCP. I am trying to get the client PCs working one at a time; the others are off. The client is able to:

    - Get an IP via DHCP with correct settings.
    - Ping any web address I can throw at it, so DNS and routing are working.
    - Run Windows Update.

    But web sites hang in IE - all but google.com! I type www.msn.com, microsoft.com, amazon.com, etc.; they all ping from a cmd window, but IE just hangs. It says the web site was found, but the green progress bar just slowly creeps and no content displays. www.google.com comes up even after clearing the browser and DNS caches. I am pulling my hair out - what am I missing?

    EDIT: After some more gyrations with a router, I'm back to ICS. Same symptoms, only now I have an answer to Andrew's question: YES, I can do Google searches, but clicking on any of the result links hangs! I let one sit for half an hour with no timeout or error.
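    One hedged thing to try: "ping and DNS work, Google loads, almost everything else hangs" over a 3G uplink shared via ICS is a classic path-MTU symptom, since the cellular link often has a smaller MTU than the LAN and some networks drop the ICMP messages needed for MTU discovery. Lowering the client's MTU is a quick test (interface name and value are illustrative):

        rem In an elevated cmd prompt on the client:
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent
        rem Verify:
        netsh interface ipv4 show subinterfaces

    If pages start loading, step the value back up until it breaks again to find the real path MTU.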

    Read the article

  • What is wrong with those crontabs?

    - by Guillaume Boudreau
    I want my projectors to power on before the mall opens and power off when the mall closes, so I created crontab entries (placed in /etc/cron.d/mall). But today (Thu Nov 22 18:58:29 EST 2012 is the current date on that server), the power-off.sh script was executed at 17:20 (see the syslog excerpt below). Being Thu, Nov 22, I would have expected power-off.sh to be executed at 21:20, per the 4th crontab line below. Why did power-off.sh execute at 17:20? What is wrong with my crontab entries?

    Content of /etc/cron.d/mall:

        40 9  22-30 Nov Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 22-30 Nov Sun     myuser /usr/local/projectors/power-on.sh
        20 18 22-30 Nov Mon-Wed myuser /usr/local/projectors/power-off.sh
        20 21 22-30 Nov Thu-Fri myuser /usr/local/projectors/power-off.sh
        20 17 22-30 Nov Sat-Sun myuser /usr/local/projectors/power-off.sh
        40 9  1-22  Dec Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 1-22  Dec Sun     myuser /usr/local/projectors/power-on.sh
        20 21 1-22  Dec Mon-Fri myuser /usr/local/projectors/power-off.sh
        20 17 1-22  Dec Sat-Sun myuser /usr/local/projectors/power-off.sh

    syslog excerpt:

        $ grep power-off.sh /var/log/syslog
        Nov 22 17:20:01 lanner-ubu-c2d CRON[23007]: (myuser) CMD (/usr/local/projectors/power-off.sh)
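    A hedged note on cron semantics that matches the symptom: when both the day-of-month and the day-of-week field are restricted (neither is *), POSIX cron runs the job when either field matches, not both. Nov 22 satisfies the 22-30 day-of-month range, so the 17:20 Sat-Sun line fires on a Thursday as well. One common workaround keeps day-of-week unrestricted and tests the weekday in the command instead (% must be escaped inside a crontab):

        # Fires at 17:20 every day from Nov 22-30, but only runs the script on Sat/Sun:
        20 17 22-30 Nov * myuser [ "$(date +\%u)" -ge 6 ] && /usr/local/projectors/power-off.sh

    date +%u prints 1-7 with Monday as 1, so the test passes only on Saturday (6) and Sunday (7).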

    Read the article

  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
    We have a number of 10.9-10.9.3 (Mavericks) machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 file servers with deduplication enabled. I have found that files newly written or not yet optimized can be accessed without issue: read, written, modified, etc. Once a file gets optimized/deduplicated and Windows adds the P and L attributes (sparse and symlink), the Macs running Mavericks begin to have access issues.

    Once the files get deduplicated, users begin receiving read-access errors when copying files (see Error 1 below). This happens when copying to folders within the current folder tree or copying to the local system. If you stop the copy operation and retry a few more times, it may eventually work for the specific instance, but it fails again later. I am, however, able to copy these files without issue via the Terminal. Other systems running 10.7 do not experience the same issues and can access file server resources without issue. Many of the systems having issues are newer and thus cannot be downgraded to 10.8 or 10.7. I have tried Finder replacements such as Path Finder, but the results are the same.

    I know this is at least similar to issues many Mac users are already experiencing and posting about, but I haven't seen it directly linked to deduplication and the attributes written by Windows Server. Has anyone seen this issue? Have any solutions been found?

    Error 1, when copying files after the P and L attributes have been set by deduplication:

        One or more items can't be copied to "Folder" because you don't have permissions to read them.

    Via system.log, I am also seeing the following error when accessing these deduplicated file shares (the reparse point tag below is IO_REPARSE_TAG_DEDUP):

        smbfs_nget: filename.ext - unknown reparse point tag 0x80000013

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here, as this is my first foray into EC2. Here's how I have it configured currently:

    - OS: Windows 2003
    - SQL Server Express 2005
    - Web content stored on an EBS volume (E drive)
    - Database data stored on an EBS volume (E drive)
    - Database backups to the C drive, then copied off to S3
    - Elastic IP address attached to the production instance

    Now, when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should:

    1. Start up a new temporary instance.
    2. Detach the EBS volume from the production instance.
    3. Detach the IP address from the production instance.
    4. Attach the IP address to the temporary instance.
    5. Attach the EBS volume to the temporary instance.
    6. Create an AMI from the production instance.
    7. After the production instance restarts, reverse the attach/detach steps to put it back in production.

    Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach the snapshot to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve reliability and operations?

    Read the article

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public, and to give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /home/user/apps/testapp/public

            RewriteEngine On
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
            RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
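    A hedged observation about the config above: consecutive RewriteCond lines are ANDed by default, and no path can be both a regular file (-f) and a directory (-d) at once, so the rule as written can never match; that is also why the negated !-f/!-d variant (where both conditions can hold) did fire. A sketch of the same block with an explicit [OR] flag, otherwise unchanged and untested:

        RewriteEngine On
        # Serve from public_html when the path exists there as a file OR a directory;
        # anything else falls through to Passenger and the Rails app.
        RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f [OR]
        RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
        RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]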

    Read the article

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for guidance on good-idea/bad-idea strategies; I'm sure I'm not in the "best practices" budget range.

    Currently I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server serving about 6 TB. One is the primary; the other two are rsync targets for backup. The 6 TB is stored on each node's local storage disks as an LVM of 3 x 2 TB virtual disks, and the file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows, and the like. Several of those VMs' disks reside on a QNAP NAS, to play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs).

    This all works fine and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again.

    Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, keep the VM images on a SAN or NAS, and have two or more NAS devices act as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like "live migration" would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat - we've already skinned it a few ways. What makes sense?

    Read the article

  • Firefox: Unload a tab manually

    - by unor
    Firefox has a setting "Don't load tabs until selected" (see "How do I make Firefox 13 Load All My Tabs on Startup or when Resuming Reload"). I like that behaviour. I am searching for a way to unload/deactivate a tab manually for a session (until I reload it). It should stop all running JavaScript functions and plugins (like Flash). The whole webpage content may disappear until I reload/re-activate the tab, but that is not a requirement; the title has to be displayed as the tab label (as is the case with the startup setting, too). The workaround would be to restart Firefox and not switch to the tab I want deactivated, which is pretty annoying, of course.

    EDIT: Here is what I found so far (thanks, @bytebuster!):

    - BarTab: no longer being maintained (see why)
    - BarTab Lite: seems to miss this functionality from BarTab
    - Dormancy: experimental; comes with a warning that it "may eat your session"
    - Tab Mix Plus: feature request "Unload Tab feature"
    - Tab Utilities: seems to offer this functionality only for automatic unloading; feature request "Add 'unload tab' to tab context menu"
    - UnloadTab: removed from addons.mozilla.org (who knows why)

    Read the article

  • Removing an extended partition without deleting a logical partition in it

    - by HisDudeness
    I'm running a Linux laptop, and in order to multi-boot several distros on it, I created an extended partition with GParted that contains a bunch of logical partitions. Now, after quite a long time with this setup, I've changed my mind, because it leaves too little storage space for my data partition. I want to keep one distro alone, as is normal, and eventually store other operating systems on external drives to plug in and use when I want.

    The partition I want to keep (and to enlarge a little, too) is just a logical partition inside the extended one I want to remove. Count-wise I'm fine: I currently have the big distro-dedicated extended partition, plus the swap and data partitions, so there's room for another primary before I delete the extended. But I don't know how to delete the extended partition without touching the logical partition inside it. I don't want to reinstall the system and lose all my changes and settings, and I don't want to keep an extended partition for a single logical one.

    How can I do this? Do I have to create a new primary partition, copy the logical partition's content into it, and then delete everything else? Will the system still boot and keep exactly all the features it has now? Is there a way to convert an extended partition into a primary once it contains just one logical? Or can I directly move a logical partition out of an extended one, turning it into a primary?

    Read the article

  • SharePoint crawl not indexing main site

    - by user22215
    Guys, I'm having some strange search issues with my main portal application. First, a little background on the problem web app. Our SharePoint environment was originally set up by a consultant who did not follow best practices: she used one web application to house our company's intranet site, the SSP, and My Sites. Since then I have provisioned a new SSP, segmented correctly, and moved all of our other sites over to it without any problems. However, I could not assign the main portal app to the new SSP, since the portal app housed the SSP site collection. So I deleted the SSP site collection, then deleted the old SSP, and assigned the portal app to my new SSP.

    Now this is where the problem starts. When I attempt to crawl this application, the crawl starts, then stops 5 seconds later with a status of "success", and reports that 1 item was successfully crawled. The funny thing is that the main portal app has nearly 30,000 items. I have tracked the problem down to the web app: if I create a test web app and restore the content into it, I have no problem crawling all 30,000 items. All of my other web apps that use the same SSP also complete crawls without problems. I don't see anything in the ULS logs or Server 2003's Event Viewer. Also, I'm using a separate dedicated index server that's configured to crawl itself via a hosts-file entry.

    I would like to fix this problem without recreating our main portal site, because we have several custom code modifications where DLLs were registered to the IIS bin folder, and I don't even want to get into the Silverlight mods that were done. Any help with this problem is much appreciated.

    Same problem as mine: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/MS-SharePoint/Q_23885820.html

    Read the article
