Search Results

Search found 3750 results on 150 pages for 'joomla sef urls'.

Page 37/150 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • How to write ½ in php

    - by skarama
    Hi everyone, quick question: how can I make this condition work: if($this->datos->bathrooms == "1½"){$select1 = JText::_( 'selected="selected"' );} The ½ doesn't seem to be recognized. I tried writing it as &#189;, but then it looks for &#189; literally and not for the ½ sign. Any ideas?
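
    A likely culprit is an encoding mismatch: the ½ typed into the PHP source is not byte-for-byte the same as the value coming from the database (UTF-8 vs. ISO-8859-1, for instance). A minimal hedged sketch of two workarounds, assuming the stored value is UTF-8 (the property name is taken from the question):

        // Option 1: build the comparison string from the HTML entity at runtime,
        // so it is produced in a known encoding (UTF-8 here).
        $half = html_entity_decode('1&#189;', ENT_QUOTES, 'UTF-8');

        // Option 2: spell out the UTF-8 bytes of the ½ sign (0xC2 0xBD) directly.
        // $half = "1\xC2\xBD";

        if ($this->datos->bathrooms === $half) {
            $select1 = JText::_('selected="selected"');
        }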

    Read the article

  • Tools and tips for switching CMS

    - by Jimmy
    I work for a university, and in the past year we finally broke away from our static HTML site of several thousand pages and moved to a Drupal site. This obviously entails massive amounts of data entry. What if you're already using a CMS and are switching to another one that better suits your needs? How do you minimize the mountain of data entry during such a huge change? Are there tools built for this, or some best practices one should follow?

    Read the article

  • Django - problem with {% url facebook_xd_receiver %}

    - by Gaurav
    I'm using {% url facebook_xd_receiver %} in one of my HTML files. This works just fine when I run my project with python manage.py runserver, but the same project stops working and raises a "TemplateSyntaxError" at the line {% url facebook_xd_receiver %}. Can anyone tell me what the difference could be between the dev server started from the command line and the Apache server? Is there anything I'm missing while configuring Apache, or is it a Django problem?

    Read the article

  • Filter Phrase Query

    - by alsuelo
    I'm trying to filter a phrase for a search on my website with this query. The code works with one word, but when I type more than one it doesn't work, because the printed value has no spaces. $phrase = $this->getState($this->context.".filter_phrase"); printf("Original string: %s\n", $phrase); if(!empty($phrase)) { $escaped = $db->escape($phrase, true); printf("Escaped string: %s\n", $escaped); $quoted = $db->quote("%" . $escaped . "%" , false); $query->where ('a.title LIKE ' .$quoted); } For example, if I type king the output is king, but when I type the king the output is theking. I want to know if there is any way to preserve the blank spaces.
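
    One frequent cause of the vanishing spaces in a Joomla model is the input filter used when the state is populated: a 'word' or 'cmd' filter strips whitespace before getState() ever sees the value. A hedged sketch of the idea, assuming a standard Joomla model; the request variable name filter_phrase is a guess based on the state key in the question:

        // populateState(): pull the phrase with the 'string' filter so spaces survive.
        $app    = JFactory::getApplication();
        $phrase = $app->getUserStateFromRequest(
            $this->context . '.filter_phrase', 'filter_phrase', '', 'string'
        );
        $this->setState($this->context . '.filter_phrase', $phrase);

        // getListQuery(): the original LIKE clause then keeps its spaces, or each
        // word can be required separately:
        foreach (preg_split('/\s+/', trim($phrase)) as $word) {
            $quoted = $db->quote('%' . $db->escape($word, true) . '%', false);
            $query->where('a.title LIKE ' . $quoted);
        }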

    Read the article

  • how to get category value from xml params?

    - by C-link Nepal
    I have the following field in XML: <field name="cat1" type="category" extension="COM_CONTENT" label="MOD_ITEM_CAT1" description="MOD_ITEM_CAT_DESC" /> Now I want to get the selected value: echo $params->get('cat1'); This shows me 8 (the value of the selected option), not its title. The category's title is people, and it should show people instead of 8. So, how do I extract the title of the category?
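
    The field stores only the category id, so the title has to be looked up separately. A minimal sketch, assuming this runs inside Joomla with com_content categories (as the field's extension="COM_CONTENT" suggests):

        // Resolve the stored id to its category node and read the title.
        $catId    = (int) $params->get('cat1');
        $category = JCategories::getInstance('Content')->get($catId);

        if ($category) {
            echo $category->title; // e.g. "people" instead of 8
        }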

    Read the article

  • How to handle redirections with codeigniter?

    - by SinneR
    Hi, I'm having problems starting a CodeIgniter project. The problem is that when I do something in a controller, I then want a page to display the result. An example: I have a form to add an item to the database; I get all the data in the controller and save it to the database, and then (if all went well) I want to redirect to the main page with a success message. I was doing this with $this->load->view('admin', $data); the problem is that the URL keeps saying admin/addItem, so every time the page gets refreshed it adds another item. Now I found redirect('admin','refresh'); but this only helps when I don't need to display any message, because this function doesn't allow sending a $data variable. Any ideas? This is probably really easy to fix, but I can't find a way to handle the flow of the application the way I want. Any help is appreciated. Thanks ;)
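
    One common pattern for this is CodeIgniter's flashdata: stash the message in the session for exactly one request, then redirect. A rough sketch, assuming the session library is available; the method and message names are illustrative, not the poster's actual code:

        // In the controller that handles the form (admin/addItem in the question):
        public function addItem()
        {
            $this->load->library('session'); // flashdata lives in the session library
            $this->load->helper('url');      // redirect() comes from the URL helper

            // ... validate and insert the item into the database here ...

            $this->session->set_flashdata('msg', 'Item added successfully.');
            redirect('admin'); // the browser now shows /admin, so refreshing is harmless
        }

        // In the admin page (controller or view), read the one-shot message:
        $msg = $this->session->flashdata('msg');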

    Read the article

  • 500 internal server error while trying to use the registration form

    - by Prachi Mazumdar
    I am using the registration form that is already available in the user manager option in my website. But once I click on the register button I get this message: Server error The website encountered an error while retrieving http://prachi.matrimony4me.in/index.php/sample-sites-2?task=registration.register. It may be down for maintenance or configured incorrectly. Here are some suggestions: Reload this webpage later. HTTP Error 500 (Internal Server Error): An unexpected condition was encountered while the server was attempting to fulfill the request.

    Read the article

  • Tab Sweep: HTML5 Attributes, MDB, JasperReports, Delphi, Security, JDBCRealm, Joomla, ...

    - by arungupta
    Recent Tips and News on Java, Java EE 6, GlassFish & more : • JMS and MDB in Glassfish for 20 minutes (nik_code) • Installing Java EE 6 SDK with Glassfish on a headless system (jvmhost) • JSF + JPA + JasperReports (iReport) Part 2 (Rama krishnnan E P) • Serving Static Content on WebLogic and GlassFish (cdivilly) • Whats the problem with JSF? A rant on wrong marketing arguments (Über Thomas Asel) • JPA 2.1 will support CDI Injection in EntityListener - in Java EE 7 (Craig Ringer) • Java Delphi integration with Glassfish JMS OpenMQ (J4SOFT) • Java EE Security using JDBCRealm Part1 (acoustic091409) • Adding HTML5 attributes to standard JSF components (Bauke Scholtz) • Configuring SAS 9.1 to Use Java 5 or above on Windows (Java EE Tips) • Inject Java Properties in Java EE Using CDI (Piotr Nowicki) • NoClassDefFoundError in Java EE Applications - Part 2 (Java Code Geeks) • NoClassDefFoundError in Java EE Applications - Part 1 (Java Code Geeks) • EJB 3 application in Glassfish 3x (Anirban Chowdhury) • How To Install Mobile Server 11G With GlassFish Server 3.1 (Oracle Support) • Joomla on GlassFish (Survivant)

    Read the article

  • Joomla! 3.0.0 released: responsive design, simplified installation and a new statistics module

    So it's done: six months after the release of Joomla! 2.5, here comes version 3. First, a word about the release cycle: version 2.5 is an LTS (long term support) release, not 3.0. The latter is allowed to break quite a few things and will therefore not be supported long term: you will have to wait for 3.5 for that (at the pace of one minor version every six months). Version 2.5 will be supported at least until 3.5 is released: it is still safe to use and will remain supported, but it will see no further technical evolution. With the release of this version 3.0, on the other hand, support for the aging 1.5 comes to an end. Migrating from 2.5 to 3.0 should go smoothly, except where extensions are used heavily (a few modifications...

    Read the article

  • SEO and search result changes when switching to SSL on Joomla site?

    - by jeffery_the_wind
    I am thinking about purchasing an SSL certificate for a website. The most noticeable difference for the user would be that http becomes https and the lock icon appears in most browsers. Will there be any adverse effects on the website's current SEO or recognition by search engines when I make the switch? Also, this is a Joomla site, which has an option in the settings to use SSL. That is supposed to make the switch easier, but I'm not sure if it takes care of everything. Thanks!

    Read the article

  • htaccess rewrite doesn't work

    - by Raimond van Mouche
    I'm trying to redirect URLs in my /joomla/ folder that contain "rsform" to the same URL, but with /joomla/ replaced by /formulieren/. However, my attempted .htaccess rewrite doesn't work. I tried: RewriteEngine on RewriteCond %{REQUEST_URI} rsform RewriteRule ^(.+)$ http://watervriendengeleen.nl/joomla/ [L,R=301] And other URL-related rewrites like Redirect /joomla/index.php?option=com_rsform&formId=12&Itemid=99999 http://sitename.com/formulieren/index.php?option=com_rsform&formId=12&Itemid=99999 which didn't work either. Any thoughts?
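
    One possible reason neither attempt matches is that "rsform" only appears in the query string (option=com_rsform), which neither RewriteRule patterns nor mod_alias Redirect ever see; the query string has to be tested explicitly. A hedged sketch, assuming the rules sit in the document root .htaccess and the form URLs carry option=com_rsform as in the question:

        RewriteEngine On

        # "rsform" lives in the query string, so test QUERY_STRING, not the path.
        RewriteCond %{QUERY_STRING} (^|&)option=com_rsform(&|$) [NC]
        # Swap only the folder; the rest of the path and the query string are kept.
        RewriteRule ^joomla/(.*)$ /formulieren/$1 [R=301,L,QSA]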

    Read the article

  • How to get httpd to forward to multiple tomcats for different urls, including / ?

    - by Nick Foote
    OK, so I've got multiple Tomcat instances set up on several AJP ports, and I also have Apache httpd listening on port 8090 (because another app is already using 8080 at the moment). I've successfully mapped URLs such as mydomain.com:8090/demo and mydomain.com:8090/preprod to their respective Tomcat instances using JkMount and the following vhost config: <VirtualHost *:8090> JkMount /preprod* preprod JkMount /demo* demo </VirtualHost> But I also want the "root" address to map to another Tomcat instance, which will become live/production, i.e. I want mydomain.com:8090/ to map to a third Tomcat instance. At the moment nothing happens or changes if I just add the line JkMount /* rootwar to the above config; if I browse to mydomain.com:8090 I still get the same boring Apache httpd landing page letting me know it's running (i.e. index.html in httpd/htdocs). Is it possible to use JkMount to direct the "root" address to a Tomcat instance? I can see that a rule like /* will also match URLs like mydomain.com/preprod, but I was hoping the rules are applied in order, so that if /* appears at the end it would effectively mean "if it's not one of the other environments, direct to root/production". Just to be clear, I'm trying to set up the following: mydomain.com:8090/preprod --> myApp running in tomcat1; mydomain.com:8090/demo --> myApp running in tomcat2; mydomain.com:8090 --> myApp running in tomcat3

    Read the article

  • Is SEO affected negatively by having densely encoded identifiers of content in URLs?

    - by casperOne
    This isn't about where to put the id of a piece of unique content in URLs, but more about densely packing the URL (or, does it just not matter). Take for example, a hypothetical post in a blog: http://tempuri.org/123456789/seo-friendly-title The ID that uniquely identifies this is 123456789. This corresponds to a look-up and is the direct key in the underlying data store. However, I could encode that in say, hexadecimal, like so: http://tempuri.org/75bcd15/seo-friendly-title And that would be shorter. One could take it even further and have more compact encodings; since URLs are case sensitive, one could imagine an encoding that uses numbers, lowercase and uppercase letters, for a base of 62 (26 upper case + 26 lower case + 10 digits): 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz For a resulting URL of: http://tempuri.org/8M0kX/seo-friendly-title The question is, does densely packing the ID of the content (the requirement is that an ID is mandatory for look-ups) have a negative impact on SEO (and dare I ask, might it have any positive impact), or is it just not worth the time? Note that this is not for a URL shortening service, so saving space in the URL for browser limitation purposes is not an issue.
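
    For reference, a minimal sketch of the base-62 packing described above, using the alphabet listed in the question; it reproduces the example URLs (123456789 encodes to 8M0kX and back):

        // Alphabet from the question: digits, then upper case, then lower case.
        function base62_encode(int $n): string {
            $alphabet = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
            $out = '';
            do {
                $out = $alphabet[$n % 62] . $out;
                $n = intdiv($n, 62);
            } while ($n > 0);
            return $out;
        }

        function base62_decode(string $s): int {
            $alphabet = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
            $n = 0;
            foreach (str_split($s) as $c) {
                $n = $n * 62 + strpos($alphabet, $c);
            }
            return $n;
        }

        echo base62_encode(123456789); // 8M0kX, as in the example URL
        echo base62_decode('8M0kX');   // 123456789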

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, with pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check if the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real users, let alone the SEO concerns. I am using the Perishable Press mini black list (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste? EDIT: I've never used the question itself to thank people for their answers, and I hope this is acceptable. Thank you all for your insightful replies, which helped me to find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that watches for suspect URLs on the 404 page and sends me an email with the user agent/IP while returning a standard 404 header; and a script that rewards legitimate users, on the same custom 404 page, in case they end up clicking one of those URLs. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again, I would have accepted all answers if I could.
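
    The 404 logging script mentioned in the edit can be as small as a custom error document. A rough sketch (the probe list and log path are illustrative, not the poster's actual setup), registered with ErrorDocument 404 /404.php:

        <?php
        // 404.php - always answer with a genuine 404 so scanners get no encouragement.
        http_response_code(404);

        // Paths that normally only scanners ask for.
        $probes = array('wp-admin', 'wp-login', 'viewtopic.php', 'cpanel', 'xmlrpc.php');

        $uri = $_SERVER['REQUEST_URI'];
        $ip  = $_SERVER['REMOTE_ADDR'];
        $ua  = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

        foreach ($probes as $probe) {
            if (stripos($uri, $probe) !== false) {
                // Append the suspect hit to a log file (or mail it) for later blocking.
                error_log(date('c') . " suspect 404: $ip $ua $uri\n", 3, '/var/log/suspect404.log');
                break;
            }
        }
        ?>
        <h1>Page not found</h1>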

    Read the article

  • Data extract from website URL

    - by user2522395
    With the script below I am able to extract all the links of a particular website, but I need to know how I can extract data from those links, especially e-mail addresses and phone numbers if they are there. Please help me modify the existing script to get that result, or, if you have a full sample script, please provide it.

        Private Sub btnGo_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnGo.Click
            'url must be in this format: http://www.example.com/
            Dim aList As ArrayList = Spider("http://www.qatarliving.com", 1)
            For Each url As String In aList
                lstUrls.Items.Add(url)
            Next
        End Sub

        Private Function Spider(ByVal url As String, ByVal depth As Integer) As ArrayList
            'aReturn is used to hold the list of urls
            Dim aReturn As New ArrayList
            'aStart is used to hold the new urls to be checked
            Dim aStart As ArrayList = GrabUrls(url)
            'temp array to hold data being passed to new arrays
            Dim aTemp As ArrayList
            'aNew is used to hold new urls before being passed to aStart
            Dim aNew As New ArrayList
            'add the first batch of urls
            aReturn.AddRange(aStart)
            'if depth is 0 then only return 1 page
            If depth < 1 Then Return aReturn
            'loops through the levels of urls
            For i = 1 To depth
                'grabs the urls from each url in aStart
                For Each tUrl As String In aStart
                    'grabs the urls and returns non-duplicates
                    aTemp = GrabUrls(tUrl, aReturn, aNew)
                    'add the urls to be check to aNew
                    aNew.AddRange(aTemp)
                Next
                'swap urls to aStart to be checked
                aStart = aNew
                'add the urls to the main list
                aReturn.AddRange(aNew)
                'clear the temp array
                aNew = New ArrayList
            Next
            Return aReturn
        End Function

        Private Overloads Function GrabUrls(ByVal url As String) As ArrayList
            'will hold the urls to be returned
            Dim aReturn As New ArrayList
            Try
                'regex string used: thanks google
                Dim strRegex As String = "<a.*?href=""(.*?)"".*?>(.*?)</a>"
                'i used a webclient to get the source
                'web requests might be faster
                Dim wc As New WebClient
                'put the source into a string
                Dim strSource As String = wc.DownloadString(url)
                Dim HrefRegex As New Regex(strRegex, RegexOptions.IgnoreCase Or RegexOptions.Compiled)
                'parse the urls from the source
                Dim HrefMatch As Match = HrefRegex.Match(strSource)
                'used later to get the base domain without subdirectories or pages
                Dim BaseUrl As New Uri(url)
                'while there are urls
                While HrefMatch.Success = True
                    'loop through the matches
                    Dim sUrl As String = HrefMatch.Groups(1).Value
                    'if it's a page or sub directory with no base url (domain)
                    If Not sUrl.Contains("http://") AndAlso Not sUrl.Contains("www") Then
                        'add the domain plus the page
                        Dim tURi As New Uri(BaseUrl, sUrl)
                        sUrl = tURi.ToString
                    End If
                    'if it's not already in the list then add it
                    If Not aReturn.Contains(sUrl) Then aReturn.Add(sUrl)
                    'go to the next url
                    HrefMatch = HrefMatch.NextMatch
                End While
            Catch ex As Exception
                'catch ex here. I left it blank while debugging
            End Try
            Return aReturn
        End Function

        Private Overloads Function GrabUrls(ByVal url As String, ByRef aReturn As ArrayList, ByRef aNew As ArrayList) As ArrayList
            'overloads function to check duplicates in aNew and aReturn
            'temp url arraylist
            Dim tUrls As ArrayList = GrabUrls(url)
            'used to return the list
            Dim tReturn As New ArrayList
            'check each item to see if it exists, so not to grab the urls again
            For Each item As String In tUrls
                If Not aReturn.Contains(item) AndAlso Not aNew.Contains(item) Then
                    tReturn.Add(item)
                End If
            Next
            Return tReturn
        End Function

    Read the article

  • How can I open urls on my host machine with VMWARE?

    - by Yanamon
    I am running a Windows 7 VM with VMware Player on Fedora. I have VMware Tools installed and have successfully used some of its features, like Unity mode, so it seems to be installed correctly. That being said, I still cannot get URLs to open in my host machine's browser; these are the steps I have taken: Within the VM I set "Default Host Application" as the application to open URLs. On my host machine I set Chrome as my preferred application for opening URLs. I enabled Shared Folders in the VM (not sure if that really helped anything, but I saw it suggested in a forum post). After doing that, when I click on a link I get the following error message: Default host Application: Make sure the virtual machine's configuration allows the guest to open host applications. I cannot find any option like that in my VM's configuration, so I am not sure what the error message is referring to.

    Read the article

  • Preserving URLs after CMS migration with redirect database and apache http?

    - by stwissel
    We will migrate an entire intranet from one CMS to another. All URLs will change in a non-predictable pattern, but I can capture a file with original,new URL pairs that I can feed into anything. I have hundreds of thousands of URLs, not just a few hundred. What I would like to do: every URL that is not found (404) should be checked against the database, and if a new URL is found, a 301/308 should be issued instead. Some trickery to suggest similar pages in the 404 message if the lookup was unsuccessful would be an added bonus. Is that the right approach, or should I check for a redirect first on every request? How would I do that in Apache2? Is that a custom 404 module?
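
    Apache's RewriteMap is built for exactly this kind of bulk lookup, so consulting the map first (rather than waiting for a 404) is usually simpler. A hedged sketch, assuming the old/new URL pairs are exported to a plain text file (paths and names are placeholders); RewriteMap must live in the server or virtual-host config, not in .htaccess, and a dbm: map is worth switching to at hundreds of thousands of entries:

        # vhost config - one "old-path new-url" pair per line in the map file;
        # keys must look exactly like REQUEST_URI (leading slash, no query string).
        RewriteEngine On
        RewriteMap cmsmoves txt:/etc/apache2/cms-redirects.txt

        # Only consult the map when the requested file does not exist (the old 404 case).
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond ${cmsmoves:%{REQUEST_URI}} !=""
        RewriteRule ^ ${cmsmoves:%{REQUEST_URI}} [R=301,L]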

    Read the article

  • What's wrong with my .htaccess? Trying to simplify actual code

    - by AlexV
    This is my actual .htaccess:

        #If the requested URI does not end with an extension
        RewriteCond %{REQUEST_URI} !\.(.*)
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

        #If the requested URI ends with .php*
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Since all the conditions are compatible except the first ones (the no-extension and *.php* matches), all I should have to do is add the [OR] flag to those two lines, but when I add it, it doesn't work (my no-extension rule no longer works). This is my new (not working) code:

        #If the requested URI does not end with an extension OR if the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Hopefully someone will be able to clarify this issue... I guess I don't fully understand the use of [OR]. Thanks!
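
    The likely reason the combined version breaks is that the list is evaluated as (no-extension OR .php) AND mapper-check AND exclusion-check, and the mapper check (?<!seo-urls-mapper)\.php.*$ still demands that ".php" appear in the filename, so extensionless requests can never satisfy it. A hedged sketch of the intended rule, with the mapper check turned into a negated match, which also holds for extensionless URIs:

        # If the URI has no extension OR ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php [NC]
        # ...and the requested file is not the mapper itself (negated, so it also
        # passes for extensionless requests)
        RewriteCond %{REQUEST_FILENAME} !seo-urls-mapper\.php [NC]
        # ...and the URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        # Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]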

    Read the article

  • What URLs must be in IE's Trusted Sites list to allow Windows Update?

    - by ewall
    Particularly for servers in which IE Enhanced Security Configuration is enabled, you need to have all the Windows Update/Microsoft Update URLs in your "Trusted Sites" list in order to use the site. (Furthermore, for domain member servers where Group Policy enforces Internet Explorer's list of "Trusted Sites", you don't have the option to edit the Trusted Sites yourself... so all the necessary URLs should be listed in the GPO.) So, what is the full list of URLs I'll need in IE's Trusted Sites? So far I have the following: http(s)://*.update.microsoft.com http://download.windowsupdate.com http://windowsupdate.microsoft.com I seem to remember there being several more...

    Read the article

  • problem with Webmaster Google Sitemap

    - by Alex
    I have a WP MU 3.6.1 with domain mapping (0.5.4.3), W3TC (0.9.3) and Google XML Sitemaps (4.0 BETA). I have 4 different sitemaps: sub-1.com/sitemap.xml sub-2.com/sitemap.xml sub-3.com/sitemap.xml sub-4.com/sitemap.xml In Google Webmaster Tools I got 59 errors & 14 warnings. Sitemap errors - Errors: We encountered an error while trying to access your Sitemap. Please ensure your Sitemap follows our guidelines and can be accessed at the location you provided and then resubmit. General HTTP error: 404 not found. Sitemap: sub-2.com/sitemap-pt-post-2011-02.xml etc. (but when I click on my sitemap links they work fine). Sitemap errors - Warnings: URLs not accessible. When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted. Sitemap: sub-2.com/sitemap-misc.xml HTTP Error: 404 URL: /sitemap.html (but when I click on my sitemap links they work fine). Sitemap errors - Index errors: URLs not accessible. When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted. HTTP Error: 404 URL: /sitemap-pt-post-2010-09.xml (but when I click on my sitemap links they work fine). Web pages: 3,276 submitted, 3,247 indexed. What do I have to put in Network Admin > Performance (W3TC) > Page Cache > Cache Preload > Sitemap URL? I have added "/sitemap.xml". My robots.txt: http://pastebin.com/3K2U0mQa My .htaccess: http://pastebin.com/efJJ6zwy How can I make it work right?

    Read the article

  • Will ranking be affected with a mobile XML sitemap for a mobile site with canonical URLs?

    - by Emil Rasmussen
    We have a site with both a desktop version and a mobile version. Most of the content are the same and both versions have the same URL, but the HTML generated is device specific. Looking at Google's recommendations for smartphone-optimized sites, one could get the impression that the mobile xml sitemap is only for sites with different URLs. Will ranking be affected - negatively or positively - if we add a mobile xml sitemap that effectively will be a duplicate of the desktop sitemap?

    Read the article

  • How to Remove Extensions From, and Force the Trailing Slash at the End of URLs?

    - by Kronbernkzion
    Example of current file structure: example.com/foo.php example.com/bar.html example.com/directory/ example.com/directory/foo.php example.com/directory/bar.html example.com/cgi-bin/directory/foo.cgi I would like to remove HTML, PHP and CGI extensions from, and then force the trailing slash at the end of URLs. So, it could look like this: example.com/foo/ example.com/bar/ example.com/directory/ example.com/directory/foo/ example.com/directory/bar/ example.com/cgi-bin/directory/foo/ I am very frustrated because I've searched for 17 hours straight for solution and visited more than a few hundred pages on various blogs and forums. I'm not joking. So I think I've done my research. Here is the code that sits in my .htaccess file right now: RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}\.html -f RewriteRule ^(([^/]+/)*[^./]+)/$ $1.html RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]|/)$ RewriteRule (.*)$ /$1/ [R=301,L] As you can see, this code only removes .html (and I'm not very happy with it because I think it could be done a lot simpler). I can remove the extension from PHP files when I rename them to .html through .htaccess, but that's not what I want. I want to remove it straight. This is the first thing I don't know how to do. The second thing is actually very annoying. My .htaccess file with code above, adds .html/ to every string entered after example.com/directory/foo/. So if I enter example.com/directory/foo/bar (obviously /bar doesn't exist since foo is a file), instead of just displaying message that page is not found, it converts it to example.com/directory/foo/bar.html/, then searches for a file for a few seconds and then displays the not found message. This, of course, is bad behavior. So, once again, I need the code in .htaccess to do the following things: Remove .html extension Remove .php extension Remove .cgi extension Force the trailing slash at the end of URLs Requests should behave correctly (no adding trailing slashes or extensions to strings if file or directory doesn't exist on server) Code should be as simple as possible I would very much appreciate any help. And to first person that gives me the solution, I'll send two $50 iTunes Store gift cards for US store. If this offends anyone, I am truly sorry and I apologize. Thanks in advance.
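
    A rough sketch of one way to meet the requirements, assuming the .htaccess sits in the document root (a ScriptAliased cgi-bin may need its own handling): first redirect extensionless requests to their slash-terminated form, then internally map each /foo/ back to whichever foo.php, foo.html or foo.cgi actually exists, and let everything else fall through to a normal 404:

        RewriteEngine On

        # 1) External 301: add the trailing slash to extensionless requests
        #    that are not existing files.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !\.[a-zA-Z0-9]+$
        RewriteCond %{REQUEST_URI} !/$
        RewriteRule ^(.*)$ /$1/ [R=301,L]

        # 2) Internal rewrite: map /foo/ onto the file that really exists on disk.
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+)/$ $1.php [L]

        RewriteCond %{DOCUMENT_ROOT}/$1.html -f
        RewriteRule ^(.+)/$ $1.html [L]

        RewriteCond %{DOCUMENT_ROOT}/$1.cgi -f
        RewriteRule ^(.+)/$ $1.cgi [L]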

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.) Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"? A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking or B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >