Search Results

Search found 14702 results on 589 pages for 'dolby pro logic'.


  • php application deployment to redhat [closed]

    - by Subhash Dike
    I come from a .NET background, with most of my work in the Windows environment, IIS, etc. I have been given the task of deploying a PHP application on a RedHat Linux box. All I have is the credentials and the server IP of that box. Can someone help me get started with creating a new website on Apache 2.0 on the RedHat box and then deploying the PHP code to it? I do not have physical access to that server either. I know this is a very basic question, but I am in the dark on these things. Even a pointer to some documentation would be fine, or at least an analogy with IIS would help to a certain extent.
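    A minimal sketch of what such a deployment can look like, assuming SSH access, a stock Apache/PHP install from yum, and hypothetical paths and domain (/var/www/myapp, myapp.example.com):

        # from your workstation: copy the code to the server
        scp -r ./myapp user@SERVER_IP:/var/www/myapp

        # on the server (over SSH): install Apache ("httpd") and PHP if missing
        sudo yum install httpd php

        # /etc/httpd/conf.d/myapp.conf - roughly the equivalent of an IIS site binding
        <VirtualHost *:80>
            ServerName myapp.example.com
            DocumentRoot /var/www/myapp
        </VirtualHost>

        # reload Apache to pick up the new site
        sudo service httpd restart

    In IIS terms, DocumentRoot is the site's home directory and the conf.d file plays the role of the site definition in IIS Manager.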

    Read the article

  • How can I optimize Apache to use 1GB of RAM on my website? [closed]

    - by Markon
    My VPS plan gives me 1GB of RAM, burstable to 2GB. Of course I cannot use 2GB, or even 1GB, all day every day, so I'm planning to optimize the performance of my webserver. The average is about 8,000-10,000 hits per hour, which means about 2 connections per second. The maximum reached so far is about 60,000 hits per hour, or about 16 connections per second. Unfortunately my current Apache configuration uses too much memory (when there are no connected clients - usually during the night - it uses about 1GB), so I've tried to customize the Apache installation to fit my needs. I'm using Ubuntu, kernel 2.6.18, with apache2-mpm-worker, since I've read it requires less memory, and fcgid (+ PHP). This is my /etc/apache2/apache2.conf:

        Timeout 45
        KeepAlive on
        MaxKeepAliveRequests 100
        KeepAliveTimeout 10
        <IfModule mpm_worker_module>
            StartServer 2
            MinSpareThreads 25
            MaxSpareThreads 75
            MaxClients 100
            MaxRequestsPerChild 0
        </IfModule>

    This is the output of ps aux:

        www-data  9547 0.0 0.3 423828 7268 ? Sl 20:09 0:00 /usr/sbin/apache2 -k start
        root     17714 0.0 0.1  76496 3712 ? Ss Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 17716 0.0 0.0  75560 2048 ? S  Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 17746 0.0 0.1  76228 2384 ? S  Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 20126 0.0 0.3 424852 7588 ? Sl 19:24 0:02 /usr/sbin/apache2 -k start
        www-data 24260 0.0 0.3 424852 7580 ? Sl 19:42 0:01 /usr/sbin/apache2 -k start

    while this is ps aux for php5:

        www-data  7461 2.9 2.2 142172 47048 ? S 19:39 1:39 /usr/lib/cgi-bin/php5
        www-data 23845 1.3 1.7 135744 35948 ? S 20:17 0:15 /usr/lib/cgi-bin/php5
        www-data 23900 2.0 1.7 136692 36760 ? S 20:17 0:22 /usr/lib/cgi-bin/php5
        www-data 27907 2.0 2.0 142272 43432 ? S 20:00 0:43 /usr/lib/cgi-bin/php5
        www-data 27909 2.5 1.9 138092 40036 ? S 20:00 0:53 /usr/lib/cgi-bin/php5
        www-data 27993 2.4 2.2 142336 47192 ? S 20:01 0:50 /usr/lib/cgi-bin/php5
        www-data 27999 1.8 1.4 135932 31100 ? S 20:01 0:38 /usr/lib/cgi-bin/php5
        www-data 28230 2.6 1.9 143436 39956 ? S 20:01 0:54 /usr/lib/cgi-bin/php5
        www-data 30708 3.1 2.2 142508 46528 ? S 19:44 1:38 /usr/lib/cgi-bin/php5

    As you can see, it uses a lot of memory. How can I reduce it to fit into just 1GB of RAM? PS: I'm also thinking about switching to nginx if Apache can't fit my needs...
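    One lower-memory direction, as a sketch only (the numbers are starting points to tune against real traffic, not recommendations): cap the worker MPM well below the defaults and, more importantly, cap the number and lifetime of the php5 (fcgid) processes, since those hold most of the memory here. Note the Fcgid* directive names are from current mod_fcgid; older builds use IdleTimeout, MaxProcessCount, etc.

        <IfModule mpm_worker_module>
            StartServers          1
            ServerLimit           2
            ThreadsPerChild      25
            MaxClients           50
            MinSpareThreads      10
            MaxSpareThreads      25
            MaxRequestsPerChild 500
        </IfModule>

        <IfModule mod_fcgid.c>
            FcgidMaxProcesses          6      # cap the pool of php5 processes
            FcgidMaxRequestsPerProcess 500    # recycle them so leaked memory is returned
            FcgidIdleTimeout           60     # reap idle PHP processes (e.g. overnight)
        </IfModule>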

    Read the article

  • Webmaster tools showing 404 for non existent folder pages

    - by Jody
    Google Webmaster Tools is reporting some/many 404 URLs that don't exist on my site. The links are things such as domain.com/xyz/. That URL doesn't exist, but domain.com/xyz/index.html does. The "linked from" pages all show proper links to "/xyz/index.html". The page without index.html DOES 404, but why is Google even trying these URLs if they are not linked to? My real question: is there a way to have Google stop attempting to load these pages, and ultimately remove them from the crawl errors report? Thanks.
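    If the goal is simply for those folder URLs to stop 404ing, Apache can serve or redirect to the index file; a minimal .htaccess sketch, assuming Apache with mod_dir and mod_rewrite and using /xyz/ as the example path:

        # serve /xyz/index.html whenever /xyz/ is requested
        DirectoryIndex index.html

        # or make the preference explicit with a redirect
        RewriteEngine On
        RewriteRule ^xyz/$ /xyz/index.html [R=301,L]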

    Read the article

  • How to choose, set and use keywords while structuring a website?

    - by mechdeveloper
    I have been working on my personal website for some time. I think I have been doing a good technical job, but unfortunately I did a terrible job structuring the website, because I didn't think about the keywords I was going to use. Although it is my personal website, its main objective is the blog, so I'd like the keywords to relate to the blog's content. At present Google Webmaster Tools is displaying a lot of keywords that have nothing to do with the content of the website, and some SEO reporting sites such as WooRank say that the keyword optimization of the website is awful. So I have 3 questions: How do I choose, set and use keywords while structuring a website? OPTIONAL: what are the methods and sources search engines use to collect the keywords of a website? There are some high-profile websites that aren't optimized for this either; should I be concerned about this anyway, or is there anything more important I should be concerned about? (If you want to see the website, please check my profile.)
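    For the "set" part of the question, the places search engines most reliably take keywords from are the page title, the meta description, the headings and the body copy itself; a minimal sketch with placeholder wording (the "widget repair" topic is purely hypothetical):

        <head>
          <title>Widget Repair Tips | Example Blog</title>
          <meta name="description" content="Short, readable summary of the page that uses the main keyword once.">
        </head>
        <body>
          <h1>Widget Repair Tips</h1>
          <!-- body text that naturally uses the same vocabulary -->
        </body>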

    Read the article

  • Redirect public traffic to a different subfolder, while local traffic remains unchanged

    - by ecnepsnai
    I would like local (intranet) HTTP traffic to go to the /var/www/html folder while any public traffic goes to the subfolder /var/www/html/public. I've tried this configuration, with some variation, in httpd.conf:

        <VirtualHost PRIVATE-IP>
            DocumentRoot /var/www/html
            ServerName ecn
            ErrorLog /var/www/logs/error/private
            CustomLog /var/www/logs/access/private common
        </VirtualHost>

        <VirtualHost PUBLIC-IP>
            DocumentRoot /var/www/html/public
            ServerName PUBLIC-DOMAIN-NAME
            ErrorLog /var/www/logs/error/public
            CustomLog /var/www/logs/access/public common
        </VirtualHost>

    PUBLIC-IP, PRIVATE-IP, and PUBLIC-DOMAIN-NAME are all replaced with the correct values in the actual document. The problem is that local traffic can browse fine, but remote traffic is directed to the root folder and gets 403d (because I have that folder blocked off through my .htaccess file). If I append /public to the URL it works fine.
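    If the 403 is coming from a blanket deny in /var/www/html/.htaccess, one option (a sketch only, assuming Apache 2.2-style access control and a hypothetical 192.168.0.0/16 intranet range) is to move that restriction into the private vhost, so it never applies to the public vhost's DocumentRoot:

        <VirtualHost PRIVATE-IP>
            DocumentRoot /var/www/html
            ServerName ecn
            <Directory /var/www/html>
                Order deny,allow
                Deny from all
                Allow from 192.168.0.0/16    # intranet range - an assumption
            </Directory>
        </VirtualHost>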

    Read the article

  • Object-based content management system

    - by Adam Maras
    I remember hearing, within the last year or two, about a content management system - either released or in development - that was centered on product/item information. I'm aware that several CMSes have this capability, but this particular one was built specifically for that task. I also remember it winning some sort of award or recognition for upcoming software products. However, I can't for the life of me remember what this CMS was called or who was developing it. Does anyone know what package I'm talking about?

    Read the article

  • setting up freedns with an existing domain

    - by romeovs
    I've been running a webserver off of a PC at a static IP successfully for the past 5 months. Recently, however, I've moved into another apartment and my ISP only provides a dynamic IP (my IP changes from time to time). I'm not an internet genius, but I was thinking of fixing this by using a dynamic DNS provider. So I got on the web and found FreeDNS. I'm a bit confused about how to set everything up, though. I've managed to successfully install the IP updater daemon on my web server. Then, in my registrar's control panel, I set the NS records to point at ns1 through ns4.afraid.org (removing the old NS records). I'm not certain what I should do with the A records, though (for now they are still pointing to the old static IP address). I have A records for www, blog, irc, etc., but I cannot point them at my new IP address, because it isn't static. Could someone explain this in the clearest possible sense (perhaps elaborating on what happens at each step of the DNS process)? I never really knew what the A records are for anyway. (Note that I haven't really found any documentation at the FreeDNS website, or on Google.)
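    For orientation: an A record maps a name directly to an IP address, a CNAME maps a name to another name, and an NS record says which servers answer for the zone. Since the NS records now point at afraid.org, the records that matter are the ones defined at FreeDNS, not the leftovers at the registrar. A common dynamic-IP layout (a sketch only; dyn.yourdomain.example and the documentation IP are placeholders) is one A record kept current by the updater daemon, with the other names aliased to it:

        dyn    A      203.0.113.7              ; rewritten by the IP updater daemon
        www    CNAME  dyn.yourdomain.example.
        blog   CNAME  dyn.yourdomain.example.
        irc    CNAME  dyn.yourdomain.example.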

    Read the article

  • How do I make my hosting detect _escaped_fragment_ and fetch the corresponding HTML? [on hold]

    - by Eric
    I have an AJAX site and I'm using hashbangs (#!) in my URLs, with the intention of providing the correct HTML versions when Google's bots replace the #! with _escaped_fragment_. How do I go about routing/proxying/redirecting the URL with _escaped_fragment_ to the corresponding HTML pages? I can't find documentation on this part of the process specifically. My first thought was that I should be using a 301 or 302 redirect, but I was told that wasn't the case, albeit without being given any more information.
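    On Apache this is usually handled with an internal rewrite rather than a redirect, so the bot gets a 200 response containing the snapshot HTML; a minimal sketch for a document-root .htaccess, assuming mod_rewrite and a hypothetical /snapshots/ directory of pre-rendered pages:

        RewriteEngine On
        # example.com/?_escaped_fragment_=products  ->  serves /snapshots/products.html internally
        RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.+)$
        RewriteRule ^$ /snapshots/%1.html? [L]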

    Read the article

  • SEO for landing page of three different language sites

    - by Zahid
    I have three sites running under the main domain: example.com/en/, example.com/ar/ and example.com/ur/. There is also a main HTML landing page, example.com, which has some introduction in the three languages and links to the three sites. Now I want this landing page to have good SEO, and I want it to appear in search results in all three languages: if someone searches in Arabic it should appear with an Arabic title and description, and if someone searches in English it should respond in English. Is this possible? Or can you suggest another way to make a landing page for these sites?
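    One relevant mechanism is the hreflang annotation, which tells search engines which URL serves which language (the landing page itself can still only carry one title and description per URL); a minimal sketch for the head of the pages:

        <link rel="alternate" hreflang="en" href="http://example.com/en/" />
        <link rel="alternate" hreflang="ar" href="http://example.com/ar/" />
        <link rel="alternate" hreflang="ur" href="http://example.com/ur/" />
        <link rel="alternate" hreflang="x-default" href="http://example.com/" />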

    Read the article

  • Tab navigation and double content

    - by Guisasso
    I have a website in which I use tabs to navigate between pages. For example, page A displays A as the active tab with B and C as background tabs. If the visitor gets to the website via page B, I would also like to display page D, but not A and C. Question: I know I can just create an index2 for B, for example, so that when the visitor gets to B from A I display tabs A, B and C, and an index1 when the visitor gets to B from D. Is that bad practice? I know duplicate content isn't good, but in what other way can or should I approach this problem? The tab navigation I designed uses <li> elements and an id on the <body> tag to mark the active tab.
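    If the two variants do end up as separate URLs (say b.html and b-index2.html - hypothetical names), a rel=canonical link is the standard way to tell search engines which one counts; a minimal sketch for the head of the duplicate page:

        <link rel="canonical" href="http://example.com/b.html" />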

    Read the article

  • Switching hosting from Google Apps to GoDaddy

    - by Sahir M.
    I am in a pickle here. I just bought a hosting service from GoDaddy and I already have a domain registered with Google Apps. So I went into the domain control center from Google Apps and I changed the nameservers to point to GoDaddy's and then I deleted all the A records and also removed the www and main CNAME. Then I created an A record with host "@" which points to the IP address I got from GoDaddy (this is the server address I see under the category Server in the GoDaddy Hosting Control Center). Also, I see that in this domain center, the domain forwarding is set to forward to my address "www.domainsite.com". So right now when I go to my website I see the error "This website is temporarily unavailable, please try again later." Does anyone know how to set this up properly or know what problem I could be having? Thanks!

    Read the article

  • Googlebot can't access my site when crawling from rootdomain

    - by PéCé
    I can't explain why I get this message for my root domain result in Google:

        trocmalin.com/
        A description for this result is not available because of this site's robots.txt – learn more.

    Here are my site's specifics: vide-greniers.trocmalin.com is the site address; www.trocmalin.com redirects (301) to vide-greniers.trocmalin.com; trocmalin.com redirects (301) to vide-greniers.trocmalin.com too. My robots.txt:

        User-agent: *
        Disallow: /orga/

        User-agent: *
        Disallow: /sitemap-update

    Google results for vide-greniers.trocmalin.com are rendered well, as are the sub-pages allowed for bots. But the result for my root domain (trocmalin.com) gives this message. Can you help me?
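    As a side note, the two groups can also be written as a single group, which every robots.txt parser treats the same way:

        User-agent: *
        Disallow: /orga/
        Disallow: /sitemap-update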

    Read the article

  • Can web applications running on IIS7 Windows Server 2008 R2 be forced to immediately detect changes to hosts file?

    - by Brenda Bell
    We have several web applications running on several load-balanced servers. We want our web applications to communicate with each other without first traversing the load balancer. For example: http://appA.example.com is running on 192.0.2.1 and 192.0.2.2; http://appB.example.com is also running on 192.0.2.1 and 192.0.2.2; the load balancer's public IP address is 198.51.100.3. By default, when appA on 192.0.2.1 makes a call to a WCF service hosted in appB, the HTTP request is routed to 198.51.100.3; this establishes a new session and the load balancer will direct the call to either of the two servers. We want the call to be routed to the instance of appB running on the same server, so we add "192.0.2.1 appB.example.com" to the hosts file on 192.0.2.1. This eventually works, but we either have to wait for the app pool to recycle naturally or do a manual reset before appA sees the new address. Is there any way to have the change detected automatically, without having to recycle the app pool?
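    As a stopgap (this only scripts the manual step, it doesn't remove the caching itself), IIS 7's appcmd can recycle just the one application pool immediately after the hosts file is edited; the pool name here is a placeholder:

        %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"appA-pool"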

    Read the article

  • Is it "acceptable" for a sub domain to be hyphenated?

    - by Homunculus Reticulli
    I am putting together a site for a portal. Some of the subdomains have rather long names and I am thinking that maybe I should use hyphens to make the subdomain names more readable. For instance: alternative-medicine.mysite.com instead of alternativemedicine.mysite.com. However, I can't recall ever seeing a hyphenated subdomain - is this because it is generally frowned upon, or are there technical (SEO) reasons why this appears to be the case? In short, will hyphenating my subdomains have a negative impact on SEO?

    Read the article

  • What does Enable/Disable mean in Bing's URL Normalization feature?

    - by DisgruntledGoat
    I'm in Bing Webmaster Tools, under Index URL Normalization. Many parameters are listed in the table with 3 other columns: Status, Source, Date. The "Source" column says "Webmaster" where I have added parameters, and "Bing" where I assume the parameter has been auto-detected. "Date" is probably the last date it detected the parameter. I've tried searching the help files but I can't find what the Status column means. The top of the page says: This feature allows you to specify query parameters for Bing’s crawler to ignore. But it's not clear whether "Enable" or "Disable" is related to this, and if so what happens in each case. Does anyone know?

    Read the article

  • URL-rewriting on Plesk using ISAPI_rewrite3 Lite

    - by Anusha
    I am using a Plesk Windows-based web server, with Windows Server 2008 and IIS 6, for my e-commerce website. I want to rewrite the URLs for all dynamic pages, so I installed ISAPI_Rewrite 3 Lite on the web server and uploaded a .htaccess file with basic rules such as:

        RewriteEngine on
        RewriteRule ^contact\.html$ contactus.php? [NC,R]

    I have never worked with ISAPI or with URL rewriting before. My doubt is how I should proceed after installation: should I upload a .htaccess or an httpd.conf file? The software also has an ISAPI_Rewrite Manager which lets me edit httpd.conf - should I write the rules there? I have tried all these steps, but unfortunately I couldn't find a remedy. Any immediate solution would be appreciated.
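    For reference, ISAPI_Rewrite 3 uses Apache mod_rewrite-compatible syntax, so a rule for a dynamic page typically looks like the sketch below (the product page is hypothetical; without the R flag it is an internal rewrite, so visitors keep seeing the pretty URL). It is also worth checking Helicon's documentation on whether the Lite edition reads per-site .htaccess files at all or only the global httpd.conf - that distinction decides where the rules have to go.

        RewriteEngine on
        # /product-123.html is served by product.php?id=123 (names are placeholders)
        RewriteRule ^product-(\d+)\.html$ product.php?id=$1 [NC,L]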

    Read the article

  • wget not respecting my robots.txt. Is there an interceptor?

    - by Jane Wilkie
    I have a website where I post CSV files as a free service. Recently I have noticed that wget and libwww have been scraping pretty hard, and I was wondering how to circumvent that, even if only a little. I have implemented a robots.txt policy; I posted it below:

        User-agent: wget
        Disallow: /

        User-agent: libwww
        Disallow: /

        User-agent: *
        Disallow: /

    Issuing a wget from my totally independent Ubuntu box shows that blocking wget against my server just doesn't seem to work, like so:

        http://myserver.com/file.csv

    Anyway, I don't mind people just grabbing the info; I just want to implement some sort of flood control, like a wrapper or an interceptor. Does anyone have a thought about this, or could you point me in the direction of a resource? I realize that it might not even be possible. Just after some ideas. Janie
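    It is worth remembering that robots.txt is purely advisory: the client has to choose to honour it, and a plain single-file wget download does not consult it at all (recursive wget does, unless run with -e robots=off). For actual flood control something server-side is needed; a minimal Apache sketch, assuming mod_rewrite and matching on the self-reported User-Agent (which a determined scraper can of course fake):

        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} (wget|libwww) [NC]
        RewriteRule .* - [F,L]    # answer 403 Forbidden to these user agents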

    Read the article

  • No Obvious Answer - Query-Strings and Javascript

    - by nchaud
    Say I have a main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search-filter text box that uses JavaScript to filter the products shown on that page (the URL doesn't change as they filter). Now, from many other parts of the site I want to navigate to this products page and see specific products. E.g. <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'"> will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product and hide the DOM nodes for the other products. The problem is that if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript works fine and shows the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time they change the search criteria. It would be great to move the ?filter=... criteria into POST data instead - but how to do that with a link, I don't know...
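    One option that avoids both a reload and POST is the HTML5 History API, which can rewrite the query string in place as the user types; a sketch, where filterBox and applyFilter() are hypothetical names for the existing input element and filtering routine:

        // keep ?filter=... in sync with the text box, without reloading the page
        filterBox.addEventListener('input', function () {
            var qs = '?filter=' + encodeURIComponent(filterBox.value);
            if (window.history && history.replaceState) {
                history.replaceState(null, '', qs);   // rewrites the address bar only
            }
            applyFilter(filterBox.value);             // existing client-side filtering
        });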

    Read the article

  • Is there any good reason I would want my website to be framed?

    - by minitech
    I'm building a website that's not security-critical in any way at all, so having somebody put a page in an <iframe> is not particularly dangerous to its users. However, as my website doesn't have script plugins that will be used anywhere else, is there any reason why I shouldn't just apply: X-Frame-Options: Deny to every page on my website? Is there any valid reason for any other website to embed mine? I've seen plenty of content-stealing ones and attempts to hijack user accounts, but never an actual good usage of frames that's not an explicit feature of the website.
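    For reference, applying the header site-wide on Apache is a one-liner (a sketch; it assumes mod_headers is enabled, and other servers and frameworks have equivalents):

        # vhost or .htaccess; requires mod_headers
        Header always set X-Frame-Options "DENY"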

    Read the article

  • How can I edit (in MS Expression Web ) FrontPage Site Parameters (Substitutions)?

    - by Clay Nichols
    Or, asked another way: where are the values for MS FrontPage Substitutions (Site Parameters) stored, so that I can edit them in Expression Web? Background: I'm ashamed to admit that I've been maintaining our company's website in MS FrontPage for over 9 years. I'm moving it to Expression Web, which will display the Substitutions (stored as Site Parameters), but I can't figure out where to edit them. I tried searching the website's source folders (on my development PC) for the name of the parameter (s-Variable=hoursOfOperation) but did not find it (other than in the files it was actually used in).

    Read the article

  • Is there a limit of emails/pictures per Gravatar account?

    - by Steve Taylor
    I'm building a site to connect patients to doctors. Each doctor will have a profile picture. I'm quite happy to manually maintain the profile pictures as there won't be that many doctors nor will they have a need to change their picture very often, if at all. I thought of using Gravatar to host all these profile pictures. The idea is to create a single Gravatar account then keep adding email addresses to it in the form [email protected] and associating each one with a new image. Does anyone know, however, if I will run into any per-account limit? If so, it wouldn't be feasible because I would end up with a bunch of Gravatar accounts instead of just the one.
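    For what it's worth, serving the avatar doesn't touch the account at request time: the image URL is derived from an MD5 hash of the e-mail address, which is Gravatar's documented scheme. A minimal PHP sketch with a placeholder address:

        <?php
        // build a Gravatar image URL for one doctor's e-mail (placeholder address)
        $email = 'doctor.name@example.com';
        $hash  = md5(strtolower(trim($email)));   // Gravatar's documented normalisation
        $url   = 'https://www.gravatar.com/avatar/' . $hash . '?s=128&d=identicon';
        echo '<img src="' . htmlspecialchars($url) . '" alt="profile picture">';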

    Read the article

  • Doing affiliate program with shops who don't have a program already set up

    - by Jacobo Polavieja
    I am developing an online shop, and I have managed to agree with other shops on a commission per sale. The problem is that these other shops don't have any kind of affiliate system. So my question is: is there any way we could arrange an easy way to do this? They don't plan to develop anything, as they are small shops, so my only guess right now is to track on my site how many times the links to them have been clicked, to get an estimate of potential clients - but I don't know how they can tell that a user came through my site and purchased something. Thank you very much for your help!
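    The click-counting half can live entirely on your own site; a minimal PHP sketch of a logging redirect (go.php, the partner list and the log file are all hypothetical, and this measures clicks rather than completed sales):

        <?php
        // go.php?shop=acme  ->  log the click, then send the visitor on
        $shops = array('acme' => 'http://www.acme-shop-example.com/');
        $key   = isset($_GET['shop']) ? $_GET['shop'] : '';

        if (isset($shops[$key])) {
            // one line per click; a database table would work just as well
            file_put_contents('clicks.log', date('c') . " $key\n", FILE_APPEND);
            header('Location: ' . $shops[$key] . '?ref=mysite');   // the ref only helps if the shop logs it
            exit;
        }
        header('HTTP/1.0 404 Not Found');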

    Read the article

  • Parse text file on click - and then display

    - by John R
    I am thinking of a methodology for rapid retrieval of code snippets. I imagine an HTML table set up something like this:

                 one        two       ...   ten
        one                 oneTwo()        oneTen()
        two      twoOne()                   twoTen()
        ...
        ten      tenOne()   tenTwo()

    When a user clicks a function in this HTML table, a snippet of code is shown in another div tag or perhaps a popup window (I'm open to different solutions). I want to maintain only one PHP file, named utilities.php, that contains a class called 'util'. This file & class will hold all the functions referenced in the above table (it is also used on various projects and is functional code). A key idea is that I do not want to update the HTML documentation every time I write/update a function in utilities.php. I should be able to click a function in the table and have PHP open the utilities file, parse out the appropriate function and display it in an HTML window. Questions: 1) I will be coding this in PHP and JavaScript, but am wondering if similar scripts are available (for all or part) so I don't reinvent the wheel. 2) Quick & easy Ajax suggestions are appreciated too (I'll probably use jQuery, but am rusty). 3) What methodology should I use for parsing the functions out of the utilities.php file (I'm not too good with regex)?
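    On question 3, PHP's Reflection API can pull a method's source straight out of the file, no regex needed; a minimal sketch (snippet.php, the fn parameter and the layout of the util class are assumptions taken from the question), whose plain-text output jQuery's $.get() could then drop into the display div:

        <?php
        // snippet.php?fn=oneTwo  ->  returns the source of util::oneTwo() as plain text
        require_once 'utilities.php';

        $fn = isset($_GET['fn']) ? $_GET['fn'] : '';
        if (!method_exists('util', $fn)) {
            header('HTTP/1.0 404 Not Found');
            exit('unknown function');
        }

        $m     = new ReflectionMethod('util', $fn);
        $lines = file($m->getFileName());
        $start = $m->getStartLine() - 1;    // getStartLine() is 1-based
        $code  = implode('', array_slice($lines, $start, $m->getEndLine() - $start));

        header('Content-Type: text/plain');
        echo $code;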

    Read the article

  • Should I give preferential treatment to proxy users on my ecommerce site?

    - by Question Overflow
    I am setting up an ecommerce site that caters to a worldwide audience. I would imagine that visitors will come from everywhere and, for whatever reason, some will be connecting through proxy servers. My site uses a server that is configured to rate-limit connections from the same IP address to protect itself from a DoS attack. So, if a proxy server is heavily used by my visitors, it would appear to be a DoS. This is problematic in the sense that it is hard to tell whether the users are genuinely browsing my site or a DoS is taking place. So my question is: should I give preferential treatment to proxy users on my ecommerce site? If yes, how should this be done? If not, why not?

    Read the article

  • Validation Meta tag for Bing [closed]

    - by Yannis Dran
    Author's note: I did my research before posting; the old "duplicate" was generated over a year ago and things have changed since then. In addition, that was a generic question, while mine targets only ONE search engine: Bing. The FAQ is not clear about how we should deal with these "duplicate" cases. The "duplicate" can be found here: Validation Meta tags for search engines. Should I remove the validation meta tag for the Bing search engine after I validate the website?

    Read the article
