Search Results

Search found 3750 results on 150 pages for 'joomla sef urls'.


  • Test a site with a static subdomain locally

    - by bcmcfc
    How can I locally test a site that uses one or more static subdomains for serving images? E.g. domain.tld with images served from static.domain.tld. I have a local working copy of the site on WAMP, checked out from SVN, but its URLs point at static.domain.tld rather than static.domain.local.
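
    A common way to test this locally (not from the question itself) is to point the static hostname at the local machine in the Windows hosts file, so requests for static.domain.tld hit the WAMP server; a matching VirtualHost or alias then serves the checked-out images:

        # C:\Windows\System32\drivers\etc\hosts (a sketch; both names resolve locally)
        127.0.0.1    domain.tld
        127.0.0.1    static.domain.tld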

    Read the article

  • Do URL shorteners affect Google page rank?

    - by DLux
    With the number of people passing around shortened URLs (through goo.gl, bit.ly, etc), I was wondering how these shortening services affect page rank in Google. Do they count as inbound links to your content or are they completely ignored by Google and other search engines?

    Read the article

  • Unable to clone repository with Git

    - by kim3er
    I am trying to clone a repository on my machine that I have just created on GitHub. I am new to Git, but have been using SVN for a while. I've set up an RSA key as per the instructions, but am unable to clone with either the SSH or HTTP URLs. When I use HTTP, I get the following error: Password: fatal: Out of memory, realloc failed. I'm using Windows 7 with MSysGit (using Bash & PuTTY).

    Read the article

  • Bolding/underlining/strikethrough-ing text in mutt

    - by Steve K
    Is there any way to bold, strike, or underline text in mutt? For instance, I currently have a couple of lines in my muttrc to make URLs and email addresses blue text on a white background: color body blue white regex. But I'd rather have that be blue underlined text on a white background. Likewise, I'd like to be able to bold unread mails in the index. (Not sure if it's relevant, but I'm using Ubuntu's mutt-patched, which is compiled with ncurses.)
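
    A sketch of what the muttrc could look like (untested; "regex" is a placeholder for the URL/email pattern): mutt's mono command attaches attributes such as underline, the bright* color names render as bold on most terminals, and the ~N pattern matches new (unread) messages in the index:

        color body blue white "regex"       # blue on white, as before
        mono  body underline  "regex"       # add the underline attribute
        color index brightblue white ~N     # "bright" = bold unread index lines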

    Read the article

  • Need Varnish configuration advice

    - by Patrick
    Hello fellows, I need some advice here for default.vcl. Here are the rules: only cache pages whose URLs contain '/c/' (the rest should pass); set the cache expiry to 3 hours; and only cache and serve from cache if cookie 'abc' and cookie 'xyz' are empty. Thank you!
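
    A minimal default.vcl sketch along those lines, assuming Varnish 3.x syntax and reading "cookie is empty" as "cookie is absent" (both assumptions worth checking):

        sub vcl_recv {
            # Only look up the cache for URLs containing '/c/', and only when
            # neither the 'abc' nor the 'xyz' cookie is present; all else passes.
            if (req.url ~ "/c/" &&
                req.http.Cookie !~ "abc=" &&
                req.http.Cookie !~ "xyz=") {
                return (lookup);
            }
            return (pass);
        }

        sub vcl_fetch {
            if (req.url ~ "/c/") {
                set beresp.ttl = 3h;   # the 3-hour cache expiry
            }
            return (deliver);
        }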

    Read the article

  • htaccess rewrite different folder URL, two index files

    - by Andrew
    I've been searching for a while now and haven't found anything that comes close to what I'm trying to accomplish. Right now my URLs look like this: www.website.com/something, which are handled by the root /index.php. Now I have created plugins within folders: /plugins/PLUGINNAME/index.php. I want to be able to have URLs like www.website.com/plugins/PLUGINNAME/anything/iwant/here, all handled by /plugins/PLUGINNAME/index.php and not the root directory's index.php. Currently www.website.com/plugins/PLUGINNAME/ works, but anything after /PLUGINNAME/xxx falls back to the root /index.php.
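
    A hypothetical .htaccess sketch (assuming mod_rewrite, and that no earlier rule swallows /plugins/ requests first): route everything under a plugin folder to that plugin's own front controller:

        RewriteEngine On
        # Leave requests for real files (images, css, js) alone
        RewriteCond %{REQUEST_FILENAME} !-f
        # /plugins/PLUGINNAME/anything/iwant/here -> /plugins/PLUGINNAME/index.php
        RewriteRule ^plugins/([^/]+)/(.*)$ plugins/$1/index.php [L,QSA]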

    Read the article

  • Copy and paste twice

    - by Shane
    Is there a way to copy and paste twice? For example, is there a way for me to copy one URL, store it, then copy another URL, and have the two URLs pasted back respectively? I read somewhere that this is possible, but have not been able to figure it out.

    Read the article

  • Servers behind load balancer

    - by Tom
    We have a Cisco hardware load balancer with two web servers behind it. We'd like to force some URLs to be served by only one of the machines. Firstly, is this the job of the load balancer? Or would a better approach be to create a subdomain such as http://assets.example.com which would automatically be routed to one of the servers?

    Read the article

  • Rewriting URLs for Tomcat through an Apache AJP connector

    - by StudentKen
    I've made several attempts to resolve this, but all have come up naught. Currently I have Apache set up to forward all URLs at and below /portal/ to Tomcat. Unfortunately, Tomcat receives these requests under /portal/appName, a subdirectory of webapps, rather than the webapps root directory where my WARs are deployed. Is there a simple solution to this that I'm not seeing? I've been trying to use mod_rewrite to rewrite ^/portal/ to /, but that doesn't yield the expected results (perhaps I'm doing this wrong?).
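
    One sketch that avoids mod_rewrite entirely (assuming mod_proxy_ajp and Tomcat's AJP connector on its default port 8009): ProxyPass strips the matched /portal/ prefix when mapping onto the backend, so /portal/foo reaches Tomcat as /foo, i.e. the ROOT webapp:

        ProxyPass /portal/ ajp://localhost:8009/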

    Read the article

  • configuring mod_proxy_html properly?

    - by tobinjim
    I have an Apache2 web server that handles reverse proxying for a Rails 3 app running on another machine. The setup works, except that URLs generated within the webapp aren't getting rewritten by my configuration for mod_proxy_html. The "Reverse Proxy Scenario" from the mod_proxy_html documentation is exactly what I'm trying to do, so I've followed the tutorial as completely as I know how, and I've applied answers supplied here on Stack Overflow, to no effect. According to the "Reverse Proxy Scenario" you want a number of modules loaded. All those instructions are in my httpd.conf file, and when I examine the output from apachectl -t -D DUMP_MODULES, all the expected modules show up in the listing. My external web server doing the reverse proxy is at www.ourdomain.org and the Rails app is internally available at apphost.local (the server is Mac OS X Server 10.6, the Rails app server is Mac OS X 10.6). What's working right now is access to the webapp via the reverse proxy as http://www.ourdomain.org/apphost/railsappname/controllername/action. But none of the JavaScript files, CSS files or other assets get loaded, and links internal to the web app come out missing the apphost portion of the URL, as if my rewrite rule is configured incorrectly (so of course I've focused on that, and can't seem to get anything added or deleted as the HTML passes in from the apphost and out through the Apache server). For instance, hovering over an action link in the HTML returned by the web app you'll get http://www.ourdomain.org/railsappname/controllername/action. Here's what my Apache directives look like:

        LoadModule proxy_html_module /usr/libexec/apache2/mod_proxy_html.so
        LoadModule xml2enc_module /usr/libexec/apache2/mod_xml2enc.so

        ProxyHTMLLogVerbose On
        LogLevel Debug

        ProxyPass /apphost/ http://apphost.local/
        <Location /apphost/>
            SetOutputFilter INFLATE;proxy-html;DEFLATE
            ProxyPassReverse /
            ProxyHTMLExtended On
            ProxyHTMLURLMap railsappname/ apphost/railsappname/
            RequestHeader unset Accept-Encoding
        </Location>

    After every change I make to httpd.conf I religiously check apachectl -t just to stay sane. I'm definitely not an Apache expert, but all the directives that follow mine seem not to overrule what I'm doing here. Yet nothing I try seems to alter the URLs I see in my browser after hitting the Apache server with a request for my web app. Even if you can't tell what I've done incorrectly, I'd welcome ideas on how to get Apache to show what it's working on and doing to the HTML coming from my web app. That's what I understood ProxyHTMLLogVerbose On and LogLevel Debug to be setting up, but I'm not seeing anything in the log files.

    Read the article

  • Using mod_rewrite to hide tomcat port

    - by user123181
    I have apps on Tomcat that use URLs like this: http://xxx:8080/myapp I don't want the users to see the port in the URL. I can do a rewrite rule like this:

        RewriteRule ^/myapp(.*) http://xxx:8080/myapp$1 [P,L]

    This way, if a user goes to http://xxx/myapp he can enter the app fine, but the port will still show up in the browser. I want the URL the user sees to always be http://xxx/myapp. How can I do this using mod_rewrite?
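
    A sketch of the plain mod_proxy equivalent of that [P] rule; the extra ProxyPassReverse line rewrites the Location header on redirects coming back from Tomcat, which is the usual way :8080 ends up visible in the browser:

        ProxyPass        /myapp http://xxx:8080/myapp
        ProxyPassReverse /myapp http://xxx:8080/myapp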

    Read the article

  • Improving HTML scraper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help from two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $hrefArray = array("http://slashdot.org", "http://slashdot.org",
                           "http://slashdot.org", "http://slashdot.org");

        function doDomStuff($singleHref, $childPid) {
            $html = new DOMDocument();
            $html->loadHtmlFile($singleHref);
            $xPath = new DOMXPath($html);
            $domQuery = '//div[@id="slogan"]/h2';
            $domReturn = $xPath->query($domQuery);
            foreach ($domReturn as $return) {
                $slogan = $return->nodeValue;
                echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
            }
        }

        $pids = array();
        foreach ($hrefArray as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref, $childPid);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    Which raises the following questions:

    1) Given my hrefArray contains 4 URLs: if the array were to contain, say, 1,000 product URLs, this code would spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and, again taking 1,000 URLs as an example, split the child workload to 100 products per child (10 x 100)?

    2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed those off to child processes to do the processing, spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $maxChildWorkers = 10;

        $html = new DOMDocument();
        $html->loadHtmlFile('http://xxxx');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@id=productDetail]/a';
        $domReturn = $xPath->query($domQuery);
        $hrefsArray[] = $domReturn->getAttribute('href');

        function doDomStuff($singleHref) {
            // Do stuff here with each product
        }

        // To figure out: split href array into $maxChildWorkers # of
        // workArray1, workArray2 ... workArray10.
        $pids = array();
        foreach ($workArray(1,2,3 ... 10) as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it to the child processes. Currently everything I've tried causes loops in the child processes; i.e. my hrefsArray gets built in the master and in each subsequent child process. I am sure I am going about this all totally wrong, so I would greatly appreciate a general nudge in the right direction.
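
    A minimal sketch of one way to do the splitting (my example, not the poster's code, and untested against their site): build $hrefsArray once in the parent, cut it into at most 10 chunks with array_chunk(), and fork one child per chunk; each child must exit() so it never falls back into the parent's loop and rebuilds the array:

        <?php
        $maxChildWorkers = 10;

        // Built once, in the parent only (e.g. via the DOMXPath query above).
        $hrefsArray = array(/* ... product URLs ... */);

        // Hypothetical stub standing in for the poster's scraping function.
        function doDomStuff($singleHref) { /* scrape one product page */ }

        // Split the workload: with 1,000 URLs and 10 workers, 100 URLs per chunk.
        $chunks = array_chunk($hrefsArray,
                              max(1, (int) ceil(count($hrefsArray) / $maxChildWorkers)));

        $pids = array();
        foreach ($chunks as $chunk) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                $pids[] = $pid;                 // parent: record the child, keep forking
            } else {
                foreach ($chunk as $singleHref) {
                    doDomStuff($singleHref);    // child: process only its own chunk
                }
                exit(0);                        // child: never re-enter the parent loop
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);       // parent: reap all children
        }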

    Read the article

  • Firefox plugin to block scripts of only specified websites

    - by user23392
    I'm looking for a Firefox plugin that blocks JavaScript from specified URLs. Example: if I add "google-analytics.com", it then blocks all scripts coming from Google Analytics. Essentially, a blacklist of sites that I don't want to allow JavaScript from. Note: I know of NoScript, which blocks all scripts from all websites, but I don't want that.

    Read the article

  • Using Array Variables with file_get_contents

    - by Whoshooter
    I have a script all done now; everything has been debugged and it works, except for the last hurdle. The script grabs pertinent information from bank web sites, takes that data, uses it to populate a template, and then posts it all to WordPress. But I get an error, because the file_get_contents function fails when taking each URL from the array. I've var_dumped the array and all the URLs are there in the [0] key, so this is what I tried ($master_data is the scraped URL source the script uses; $urlscrape_array is the collection of URLs):

        $master_data = file_get_contents($urlscrape_array[0]);

    When I run the script using a URL like the one below, it works beautifully every time:

        $master_data = file_get_contents("http://www.somesite/somepage.html");

    This is the error I get when I try to use the first example:

        Warning: file_get_contents() expects parameter 1 to be string, array given in /home3/path/public_html/mysite.com/boise_project/scriptmainpage.php on line 13

    As requested, here is a sample of the var_dump on $urlscrape_array[0]:

        array(504) {
          [0]=> string(56) "http://www.somepage.com/somepage-3178.html"
          [1]=> string(54) "http://www.somepage.com/somepage-16.html"
          [2]=> string(56) "http://www.somepage.com/somepage-3202.html"
          [3]=> string(56) "http://www.somepage.com/somepage-4324.html"
          [4]=> string(56) "http://www.somepage.com/somepage-4777.html"
          [5]=> string(56) "http://www.somepage.com/somepage-5140.html"
          [6]=> string(56) "http://www.somepage.com/somepage-5220.html"
          [7]=> string(56) "http://www.somepage.com/somepage-9205.html"
          [8]=> string(56) "http://www.somepage.com/somepage-3251.html"
          [9]=> string(56) "http://www.somepage.com/somepage-3323.html"
          [10]=> string(56) "http://www.somepage.com/some-page-3797.html"
          [11]=> string(56) "http://www.somepage.com/some-page-4145.html"
          [12]=> string(56) "http://www.somepage.com/some-page-3191.html"
          [13]=> string(55) "http://www.somepage.com/some-page-329.html"
          [14]=> string(56) etc....

    And here is the error as per the foreach statement provided by Uptown:

        Warning: Invalid argument supplied for foreach() in /home3/bettyt45/public_html/bdbud.com/boise_project/boise-wordpress.php on line 12
        NULL

    print_r results below:

        Array (
          [0] => Array (
            [0] => http://www.somesite.com/some-page-3178.html
            [1] => http://www.somesite.com/some-page-16.html
            [2] => http://www.somesite.com/some-page-3202.html
            [3] => http://www.somesite.com/some-page-4324.html
            [4] => http://www.somesite.com/some-page-4777.html
            [5] => http://www.somesite.com/some-page-5140.html
            [6] => http://www.somesite.com/some-page-5220.html
            [7] => http://www.somesite.com/some-page-9205.html
            [8] => http://www.somesite.com/some-page-3251.html
            [9] => http://www.somesite.com/some-page-3323.html
            [10] => http://www.somesite.com/some-page-3797.html
            [11] => http://www.somesite.com/some-page-4145.html
            [12] => http://www.somesite.com/some-page-3191.html
            [13] => http://www.somesite.com/some-page-329.html
            [14] => http://www.somesite.com/some-page-3341.html
            [15] => http://www.somesite.com/some-page-3758.html
            [16] => http://www.somesite.com/some-page-4180.html
            [17] => http://www.somesite.com/some-page-9014.html
            [18] => http://www.somesite.com/some-page-5987.html
            [19] => http://www.somesite.com/some-page-1542.html
            [20] => http://www.somesite.com/some-page-3004.html
            [21] => http://www.somesite.com/some-page-9034.html
            [22] => http://www.somesite.com/some-page-3385.html
            [23] => http://www.somesite.com/some-page-3435.html
            [24] => http://www.somesite.com/some-page-6389.html
            [25] => http://www.somesite.com/some-page-6992.html
            [26] => http://www.somesite.com/some-page-7051.html

    Here is the code I used to create the array above:

        $urlscrape_data = file_get_contents('http://www.bdbud.com/boise_project/boise-urls.htm');
        preg_match_all('~http\:\/\/www.creditunionsonline.com\/credit\-union\-\d{1,4}?\.html~', $urlscrape_data, $urlscrape_matches);
        $urlscrape_array = $urlscrape_matches;
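
    A minimal sketch of the likely fix (my reading, not from the question): preg_match_all() returns an array of match groups, so $urlscrape_matches[0] is the array of full-pattern matches, and $urlscrape_array[0] is therefore an array, not a string. Index into (or loop over) that inner array instead:

        <?php
        $urlscrape_data = file_get_contents('http://www.bdbud.com/boise_project/boise-urls.htm');
        preg_match_all('~http\:\/\/www.creditunionsonline.com\/credit\-union\-\d{1,4}?\.html~',
                       $urlscrape_data, $urlscrape_matches);

        $urls = $urlscrape_matches[0];               // flat array of URL strings
        $master_data = file_get_contents($urls[0]);  // a single string now, no warning

        foreach ($urls as $url) {                    // or process every URL in turn
            $master_data = file_get_contents($url);
            // ... populate the template and post to WordPress ...
        }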

    Read the article

  • Recommended Practices for Managing httpd.conf for different environments

    - by James Kingsbery
    We have several different environments (developer desktop, integration, QA, prod) which should have slightly different variants of the same httpd.conf file. As an example, the httpd.conf file configures httpd to act as a reverse proxy and proxy certain URLs to Jetty, but the hostname of the Jetty instance is different in each environment. Is there a recommended practice for managing these kinds of differences? I looked around the Apache documentation for the httpd.conf file and didn't see anything that does what I need.
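
    One common pattern (a sketch with hypothetical names, assuming Apache 2.4+, where Define'd variables can be referenced as ${VAR}): keep a single shared httpd.conf and isolate the per-environment differences in one tiny included file that only sets variables:

        # conf/env.conf is the only file that differs per environment; it
        # contains a single line such as:  Define JETTY_HOST jetty-qa.internal
        Include conf/env.conf

        # Shared reverse-proxy configuration, identical everywhere:
        ProxyPass        /app/ http://${JETTY_HOST}:8080/app/
        ProxyPassReverse /app/ http://${JETTY_HOST}:8080/app/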

    Read the article

  • Running localhost webapp projects under domain name using fiddler2

    - by user01
    I have a Tomcat server running on my local dev machine (running Windows 8), and I use Fiddler2 to assign an alias to localhost as my domain name (www.mydomainName.com), so my application's web pages open in the browser as http://www.mydomainName.com/myAppName/welcome.html instead of http://localhost:8080/myAppName/welcome.html. But I want my webapp's page URLs to omit 'myAppName' and look like http://www.mydomainName.com/welcome.html. How can I configure this?
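
    The usual Tomcat-side approach (a sketch, independent of Fiddler2) is to deploy the app as the ROOT context so no context path appears in URLs: either rename webapps/myAppName to webapps/ROOT, or declare an empty context path inside the <Host> element of conf/server.xml (the docBase path below is hypothetical):

        <!-- docBase should live outside webapps/ to avoid double deployment -->
        <Context path="" docBase="C:/dev/myAppName" />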

    Read the article

  • NGINX: dynamic locations stored in DB

    - by chimpanzee
    Is it possible to store nginx locations in a DB instead of the config, so they can be served dynamically? The task is to create dynamic URLs for video files based on the user's IP and a video ID. The idea is that when a user visits my website, such a dynamic URL is created and added to the DB as a new nginx location that exists just for this user and not for others. Or does nginx not fit my task, and do I need another tool? Thanks.

    Read the article

  • Google Chrome: disable URL suggestions from history

    - by Tural Aliyev
    I was searching for a solution to disable URL auto-suggestions (from history) while I type a URL in the address bar, but I haven't found anything. I tried unchecking "Use a prediction service to help complete searches and URLs typed in the address bar" in the privacy settings, but it doesn't help. Is there any way to disable history, or to disable URL suggestions from history?

    Read the article

  • How to ensure precedence of files over directories with Apache?

    - by janeden
    My httpd.conf uses the MultiViews option to serve HTML files for URLs like http://server/blog. This works fine unless there is a directory with the same name; Apache will then try to serve the directory. Is there any way to ensure precedence of blog.html over blog/, or rather: can I make Apache perform content negotiation according to MultiViews even though a matching entity (the directory) is present? In nginx, I can do this explicitly: try_files $uri $uri.html $uri/ =404;
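
    One mod_rewrite-based sketch (it sidesteps MultiViews rather than re-ordering it, so treat it as an assumption-laden workaround for .htaccess context): serve $uri.html whenever that file exists, before Apache considers the directory:

        RewriteEngine On
        # If a .html file matching the request exists, serve it, even when a
        # same-named directory is also present.
        RewriteCond %{REQUEST_FILENAME}.html -f
        RewriteRule ^(.*)$ $1.html [L]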

    Read the article
