Search Results

  • Experienced programmer, beginner at web design, tools for effective maintainable web design? [closed]

    - by Clinton
    I do quite a bit of programming in my work, which I'm comfortable with, but recently I've been trying to do some web design for non-work-related reasons. I've got a Drupal site up and running and added some content, but the pages all look fairly basic: a header with some content, nothing particularly polished.

    As an example, what I wanted to do was make some "bubbles", each with some text in them. From a programmer's point of view, say bubble(question_text, answer_text) might expand to a box with a border, containing "Question: " + question_text and then "Answer: " + answer_text. Of course I'd have lots of these bubbles, but I'd like to be able to change their look and feel in one place, so plain hand-written HTML would be a maintenance nightmare. I also want to lay them out on the screen in some fashion. I was thinking of a mixture of JavaScript and CSS, or possibly PHP, which Drupal uses. On the other hand, I fear I might be taking a 1990s approach to this, and that there are actually tools available now that make this process a lot easier.

    I'm just wondering what the best approach to this sort of task is. Should I be using offline web design software and copying the code to Drupal, and if so, any recommendations? I'm sorry if my question is a bit vague, because I'm not really sure what question I should be asking. I'd appreciate it if you answer and comment, and I'll try my best to be more specific as I understand more.
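
    A minimal sketch of the kind of reusable "bubble" helper described above, assuming a plain PHP template function and a single CSS class (the names bubble, bubble-question and bubble-answer are hypothetical):

        <?php
        // bubble.php -- render one Q&A "bubble" from a single place, so the markup
        // and styling can change in one spot without touching every page.
        function bubble($question_text, $answer_text) {
            return '<div class="bubble">'
                 . '<p class="bubble-question">Question: ' . htmlspecialchars($question_text) . '</p>'
                 . '<p class="bubble-answer">Answer: ' . htmlspecialchars($answer_text) . '</p>'
                 . '</div>';
        }

        echo bubble('What is Drupal?', 'A PHP-based content management system.');
        ?>

        /* style.css -- one rule controls the look and feel of every bubble */
        .bubble { border: 1px solid #999; border-radius: 8px; padding: 1em; margin: 1em 0; }

    In Drupal itself the same idea is usually expressed as a theme function or a template override rather than a standalone helper, but the principle is the same: one definition, many bubbles.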

  • Redirect public traffic to a different subfolder, while local traffic remains unchanged

    - by ecnepsnai
    I would like to have local (intranet) HTTP traffic go to the /var/www/html folder while any public traffic goes to the subfolder /var/www/html/public. I've tried this configuration, with some variation, in httpd.conf:

        <VirtualHost PRIVATE-IP>
            DocumentRoot /var/www/html
            ServerName ecn
            ErrorLog /var/www/logs/error/private
            CustomLog /var/www/logs/access/private common
        </VirtualHost>

        <VirtualHost PUBLIC-IP>
            DocumentRoot /var/www/html/public
            ServerName PUBLIC-DOMAIN-NAME
            ErrorLog /var/www/logs/error/public
            CustomLog /var/www/logs/access/public common
        </VirtualHost>

    PUBLIC-IP, PRIVATE-IP, and PUBLIC-DOMAIN-NAME are all replaced with the correct values in the actual file. The problem is that local traffic can browse fine, but remote traffic is directed to the root folder and gets a 403 (because I have that folder blocked off through my .htaccess file). If I append /public to the URL it works fine.

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. Now what's interesting is that these domains are crossing each other in sitelinks for searches such as calvary christian college townsville. The green data here is for one school (the Townsville school, as per the search term), and the red data is for the other school. I put a demotion in for this six months ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we will need to tell them what to change, and at this point I'm out of ideas.

  • Free online PHP hosting [closed]

    - by Anthony Newman
    Possible Duplicate: How to find web hosting that meets my requirements? I have a PHP script that can take $_GET parameters from a URL (i.e. http://www.example.com/test.php?name=george). I'd like to be able to host this script online so that others can pass parameters to it and obtain the returned data. Does anyone know of a free PHP hosting site that would allow for this functionality? (PS: I can't host it myself.) Thanks!
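
    For context, a minimal sketch of the kind of script being described, assuming a hypothetical test.php that simply echoes back a value taken from the query string:

        <?php
        // test.php -- read a "name" parameter from the URL, e.g. test.php?name=george,
        // and return a plain-text response built from it.
        $name = isset($_GET['name']) ? $_GET['name'] : 'unknown';
        header('Content-Type: text/plain');
        echo 'Hello, ' . $name;
        ?>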

  • Redirect/Rewrite Subdomain to Subfolder

    - by Laurent Ho
    I'm trying to redirect a subdomain to a subfolder, e.g. forums.domain.com to www.domain.com/forums. Note that I started the forums in the subfolder format, but I'm worried that members might mistakenly try to access the forums using the subdomain format.

        RewriteCond %{HTTP_HOST} ^(www\.)?forums\.domain\.com
        RewriteRule .* /forums [L]

    From what I've read the rules above should work through .htaccess, but do I still need to create a DNS A record to point to the IP address of the server?

  • How to make Google recognize language for a multilingual website?

    - by Julien Fouilhé
    A few weeks ago I implemented translation functionality for my company's website. The website is now available in French and English, and I looked around online for the best way to do this without losing any ranking and while keeping our pages on Google. Here is what I did:

        I set a response header: Content-Language: en and Content-Language: fr.
        My URLs are formatted as http://www.website.com/en/... and http://www.website.com/fr/...
        My html tag is set with a lang attribute: <html lang="en"> and <html lang="fr">.
        There is a <link rel="alternate" hreflang="en" href="EnglishPageUrl"> on French pages and a <link rel="alternate" hreflang="en" href="frenchPageUrl"> on English pages.

    But Google keeps returning some English pages when I search on the French engine, even though the website was at first only available in English. Is that normal? Do I still just have to wait? It has been almost one month now; I thought it would be okay by then. Thank you.
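
    For reference, a minimal sketch of reciprocal hreflang annotations (the URLs are hypothetical); note that the hreflang value describes the language of the page being linked to, so the link pointing at the French URL carries hreflang="fr", and each page can also list itself:

        <!-- On http://www.website.com/en/page (the English version) -->
        <link rel="alternate" hreflang="en" href="http://www.website.com/en/page">
        <link rel="alternate" hreflang="fr" href="http://www.website.com/fr/page">

        <!-- On http://www.website.com/fr/page (the French version) -->
        <link rel="alternate" hreflang="en" href="http://www.website.com/en/page">
        <link rel="alternate" hreflang="fr" href="http://www.website.com/fr/page">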

  • wget not respecting my robots.txt. Is there an interceptor?

    - by Jane Wilkie
    I have a website where I post CSV files as a free service. Recently I have noticed that wget and libwww have been scraping pretty hard, and I was wondering how to curb that, even if only a little. I have implemented a robots.txt policy; I posted it below:

        User-agent: wget
        Disallow: /

        User-agent: libwww
        Disallow: /

        User-agent: *
        Disallow: /

    Issuing a wget for http://myserver.com/file.csv from my totally independent Ubuntu box shows that the robots.txt just doesn't seem to have any effect on it. Anyway, I don't mind people grabbing the info; I just want to implement some sort of flood control, like a wrapper or an interceptor. Does anyone have a thought about this, or could you point me in the direction of a resource? I realize it might not even be possible. Just after some ideas. Janie
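
    Since robots.txt is purely advisory, one common server-side approach is to refuse requests whose User-Agent identifies itself as wget or libwww. A sketch, assuming Apache with mod_rewrite enabled (the user-agent substrings are assumptions and are trivially spoofable):

        # .htaccess -- return 403 Forbidden to clients identifying as wget or libwww
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} (wget|libwww) [NC]
        RewriteRule .* - [F,L]

    This blocks rather than throttles; genuine flood control is usually handled with something like mod_evasive or firewall-level rate limiting.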

  • Trouble with .htaccess redirection

    - by mike23
    I use this redirect rule to redirect users from www.domain.com/admin to www.domain.com/wp-admin on a WordPress site:

        RedirectMatch 301 \@admin http://www.domain.com/wp-admin

    The problem is that instead of redirecting to wp-admin/, it redirects to an article called "Administrators are awesome people" (slug: administrators-are-awesome-people). I can guess what is going on: WP sees that there is an article slug starting with "admin" and redirects to it, overruling my own rule. Is there a way to be more specific, like saying "redirect URLs that end with exactly admin"?
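
    A sketch of a more specific rule, assuming the goal is to match only a path that is exactly /admin (optionally with a trailing slash); RedirectMatch takes a regular expression, so the pattern can be anchored:

        RedirectMatch 301 ^/admin/?$ http://www.domain.com/wp-admin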

  • How can I redirect everything but the index as 410?

    - by Mikko Saari
    Our site shut down and we need to give a 410 to its users. We have a small one-page replacement site set up on the same domain and a custom 410 error page. We'd like all page views to be answered with a 410 and sent to the error page, except for the front page, which should point to the new index.html. Here's what's in the .htaccess:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule !^index\.html$ index.html [L,R=410]

    This works, except for one thing: if I type just the domain name, I get the 410 page. With www.example.com/index.html I see the index page as I should, but plain www.example.com gets a 410. How could I fix this?
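
    A sketch of one possible adjustment, assuming per-directory .htaccess context, where a request for the bare domain shows up as an empty path once the directory prefix is stripped: make the empty path an exception too, so both / and /index.html escape the 410 rule.

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        # Exclude both the empty path (the bare domain) and index.html itself
        RewriteRule !^(index\.html)?$ index.html [L,R=410]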

  • SMTP server to deliver mail to Rails app, how?

    - by Gunchars
    All, this is my first question and I hope I chose the right place to post it. Here's what I need help with: I've been looking all day and I'm having a hard time finding an SMTP mail server that fits the following criteria:

        it is lightweight, does one thing and does it well
        it is able to route and deliver local mail to a Rails application

    The second point could be accomplished in any number of ways. I'm running a VPS, so I have full freedom in how to implement this. It could, for example, put messages straight into the db, pipe them to a helper program that would then process them accordingly, or save messages in an mbox file and run a script after every received message. I'm building a small site, so traffic is not going to be a problem. If there are alternative ways to deliver messages to a Rails app, I'd gladly hear about them. Thank you.

    EDIT: After long searching, I think I've found what I was looking for. Exim is a mail server that can deliver local mail to pipes. Also, Rails 3 and ActionMailer can make it really easy to process the incoming mail. More info here:
    http://www.exim.org/exim-html-current/doc/html/spec_html/ch29.html
    http://guides.rubyonrails.org/action_mailer_basics.html#receiving-emails
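
    A minimal sketch of the receiving side mentioned in the EDIT, following the pattern in the linked Rails guide (the mailer class name and the processing done inside receive are hypothetical); the MTA, e.g. an Exim pipe transport, would pipe each incoming message to the runner command:

        # app/mailers/inbound_mailer.rb
        class InboundMailer < ActionMailer::Base
          # ActionMailer parses the raw message and hands a Mail object to this method.
          def receive(email)
            # Hypothetical processing: store the message in the database.
            IncomingMessage.create!(:subject => email.subject, :body => email.body.decoded)
          end
        end

        # Command for the MTA to run on local delivery (from the application root):
        #   bundle exec rails runner 'InboundMailer.receive(STDIN.read)'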

  • Web stalker has purchased a domain name that uses my personal name, web page is defamatory [closed]

    - by Deborah Morse-Kahn
    We have been unsuccessful in persuading a stalker's website host to release the domain name he purchased, which is my own personal name, e.g. PERSONALNAME.com. You will find my name below in the signature area; look for yourself. The one page that this domain name leads to contains dreadful and defamatory material. No attorney has felt it worth their time to chase this issue down, and we cannot afford to go to a national or international dispute resolution group to bring this issue to WHOIS. Worse, the stalker is amoral and a psychopath: he would just love the attention. We've even considered trying to find someone to illegally hack into the webpage to at least redirect the domain pointers to my own professional website. This issue has continued for two years now and is affecting my professional reputation as potential clients look for me online. Is there any remedy? Your help and advice would be greatly welcomed.

  • Is there any advantage/disadvantage to using robots.txt to disallow access to legal pages such as terms, privacy policy, etc.?

    - by CaptainCodeman
    As I understand it, having repetitive content is a detriment to search engine placement. Given that many websites use similar or even identical "Terms and Conditions" and "Privacy Policy" pages, due to similar legal wording or to copy-and-pasting from the same source, would it be a good idea to disallow access to these pages via robots.txt, in order to avoid being penalized for "non-original content"? Or, on the contrary, could the search engines identify this as circumvention and penalize the site for trying to hide content? Or does it not matter?

  • Joomla Backend running slow on localhost

    - by boothe
    I made a local backup of a Joomla site a few months ago to test changes before updating the live site. Everything worked fine. Today I checked the local version after a while, but when I open the backend (/administrator) it takes a long time until the site is loaded. I tried various things and accidentally disconnected my network connection; after that, everything loaded as fast as before. But when I reconnect the network, the problem reappears. I am running Joomla 1.5.14 on XAMPP 1.7.0.

  • How can I turn on compression for my IIS 7 web sites?

    - by Richard A
    I am using IIS 7 and trying to optimize as much as possible. I had one suggestion about compression, but I am not sure how to turn it on. I am familiar with making changes to Web.config but not with making IIS 7 changes. What makes it more difficult is that I am using Windows Azure, where new images are created every time I publish. Can someone explain whether there's more than one way to turn on compression and how I can do it?
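
    One way, sketched below under the assumption that the setting has to survive Azure re-imaging and therefore belongs in Web.config rather than in IIS Manager: the system.webServer/urlCompression element turns static and dynamic compression on (dynamic compression additionally requires the Dynamic Content Compression module to be installed on the server):

        <!-- Web.config sketch -->
        <configuration>
          <system.webServer>
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
          </system.webServer>
        </configuration>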

  • Google Webmaster Tools Index dropped to Zero [closed]

    - by Brian Anderson
    Earlier this year I rebuilt my website using ZenCart. Immediately I saw a drop in index status from 59 to 0. I then signed up for Google Webmaster Tools and noticed the Index status took a dramatic drop and has never recovered. I have worked to add content and I know I am not done, but have not seen any recovery of this index since. What confuses me is when I look at the sitemap status under Optimization it shows me there are 1239 submitted and 1127 pages indexed. Most of my pages have fallen off page one for relevant search terms and some are as far back as page 7 or 8 where they used to be on the first page. I have made some changes in the past week to robots.txt and sitemap.xml, but have not seen any improvements. Can anyone tell me what might be going on here? My website is andersonpens.net. Thanks! Brian

  • GA tracking utm query params after hashbang

    - by hybrid9
    We currently use a hashbang for the portion of our site that generates dynamic content, which can also be deep linked. Our analytics team wants to use utm parameters to track referral traffic from social networks. We are using Universal Analytics (analytics.js) as well as GTM. Will GA pick up the query parameters after the hashbang, or do they always have to go before it? For example:

        1. example.com/#!/some/content?utm_source=foo&utm_campaign=bar
        2. example.com?utm_source=foo&utm_campaign=bar/#!/some/content

    In #1, I'm concerned that the utm params won't be recorded, and in #2 the page will break or the URL could be written incorrectly. How does GA pull in those parameters - location.search? A regex? Can I get away with using either?
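
    As a point of reference (a sketch, not an official recipe): by default analytics.js builds the page path from location.pathname and location.search and ignores the fragment, so anything after #! is invisible unless it is sent explicitly. The page field can be overridden like this (the property ID is a placeholder):

        // Assumes the standard analytics.js snippet has already defined ga()
        ga('create', 'UA-XXXXX-Y', 'auto');
        // Include the hashbang portion in the tracked page path; note that campaign
        // (utm) attribution is generally parsed from the URL's query string, not from this field.
        ga('set', 'page', window.location.pathname + window.location.search + window.location.hash);
        ga('send', 'pageview');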

  • No description for any page on the website is available in Google despite robots.txt allowing crawling

    - by Abhijit
    I seem to have the weirdest issue with search engine optimization. I asked the IT folks at my university, I asked people on the Joomla forums, and I have been trying to sort this out with Google Webmaster Tools for more than two months, to little avail. I want to know whether I have some blatantly wrong configuration somewhere that is causing search engines to be unable to index this site. I noticed a similar issue with another website I searched for online (ECEGSA - The University of British Columbia at gsa.ece.ubc.ca), which makes me believe this might be something other people are looking for an answer to as well.

    Here are the details. The website in question is http://gsa.ece.umd.edu/. It runs Joomla 2.5.x (latest). The site has been up since around mid-December 2013, and I noticed right from the start that it was not being indexed correctly on Google. Specifically, I see the following message when I search for the website on Google: "A description for this result is not available because of this site's robots.txt – learn more." The thing is, from December till around March I used the default Joomla robots.txt file, which is:

        User-agent: *
        Disallow: /administrator/
        Disallow: /cache/
        Disallow: /cli/
        Disallow: /components/
        Disallow: /images/
        Disallow: /includes/
        Disallow: /installation/
        Disallow: /language/
        Disallow: /libraries/
        Disallow: /logs/
        Disallow: /media/
        Disallow: /modules/
        Disallow: /plugins/
        Disallow: /templates/
        Disallow: /tmp/

    Nothing there should stop Google from indexing my website. Even more confusingly, when I go to Google Webmaster Tools, under the "Blocked URLs" tab, when I try many of the links on the site they all show up as "Allowed". I then tried adding a sitemap and putting it in the robots.txt file. That did not help: same exact search result, same behavior in the "Blocked URLs" tab. Additionally, the "Sitemaps" tab shows an error for several links saying "URL is robotted out". I tried those exact links in "Blocked URLs" and they are allowed! I then tried deleting the robots.txt file. No use, same exact problem. Here is an example screenshot from Google's Webmaster Tools: At this point I cannot give a rational explanation for why this is happening, and neither can anyone in the IT department here. No one on the Joomla forums seems to understand what is going on. Based on what I explained, does it seem that I have somehow set something in robots.txt or .htaccess or somewhere else incorrectly?

  • Question about Headings For Professionals <H1>... <H9> in SEO & Browser Compatibility Differences

    - by Sam
    We all know the importance and significance of headings for professional webmasters. Professional developers know these as <h1>Heading 1</h1> down through <h6>Heading 6</h6>. As a daring web developer I recently needed more short headings for a complex structured document, so I thought what the hell, went ahead, and used this in my CSS:

        h1, h2, h3, h4, h5, h6 { }
        h7 { }
        h8 { }
        h9 { }

    My experiment turned out to pay off, but only in Firefox, Safari, Chrome etc., not in Internet Explorer 8.

    Q1. Who decided (and when) that headings should go up to h6, and not h4 or h7?
    Q2. Why do h7 - h9 work perfectly in all major browsers except IE8?
    Q3. What is the significance for Bing, Yahoo and Google in terms of recognizing headings h1 ~ h9? Obviously h1 is more important than h2, but do they differentiate between h5 and h6? Or not any more after h3?
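
    On Q2, a note worth verifying rather than something stated in the question: h7 - h9 are not defined in HTML, so browsers treat them as unknown elements, and IE8 and older do not apply CSS to unknown elements unless they are first registered with document.createElement (the same trick the HTML5 shivs use for tags IE8 does not know). A sketch, to be placed in the <head> before the content:

        <!--[if lt IE 9]>
        <script>
          // Register the non-standard heading tags so IE8 will parse and style them
          document.createElement('h7');
          document.createElement('h8');
          document.createElement('h9');
        </script>
        <![endif]-->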

  • Google Analytics Export API - nextPagePath data

    - by Btibert3
    I am probably missing something obvious, but I do not understand the following. When I query:

        start.date = DATE_START, end.date = DATE_END,
        dimensions = c("ga:pagePath", "ga:previousPagePath"),
        metrics = c("ga:pageviews"),
        filters = mypageofinterest,
        table.id = "ga:mytable",
        max.results = RESULTS

    my data return as expected: all of the previous pages, including (entrance). However, when I modify the code to use nextPagePath:

        start.date = DATE_START, end.date = DATE_END,
        dimensions = c("ga:pagePath", "ga:nextPagePath"),
        metrics = c("ga:pageviews"),
        filters = mypageofinterest,
        table.id = "ga:mytable",
        max.results = RESULTS

    only one line of data is returned; the pagePath and nextPagePath values are identical. I replicated this result using the Query Explorer. What am I missing or doing wrong? I was expecting to see a large number of "next" pages, including (exit). Thanks in advance.

  • Where to store users consent (EU cookie law)

    - by Mantorok
    In a few months we will be legally obliged to obtain consent from users before storing any cookies on their PCs. My query is: what would be the most effective way of storing this consent, so that users don't get repeat requests to give it in the future? For authenticated users I can obviously store it against their profile, but what about non-authenticated users? My initial thought, ironically, was to store the given consent in a cookie..?

  • Source code not matching uploaded HTML file

    - by benhowdle89
    I'm not sure if this is the right place to ask, but I'm having a hugely frustrating problem with Coda and my website (I'm not sure which one is causing the issue). I'm using Coda to make changes to my website; Coda uses built-in FTP to save changes to your web page, so when you hit Save, it uploads the new file. I've been using Coda for months and never had a problem until now. I am making changes in the HTML of my index.php and hitting Save. It successfully uploads the file, but no changes are reflected in the source code in ANY browser. I even logged into cPanel on my website, i.e. www.example.com:2082, and looked at the file - the changes have been made successfully. But the actual webpage's source code in the browser shows no changes. I have tried adding which made no difference. Interestingly, when I make changes to style.css the changes are instant. I have emptied the cache on all of my browsers but I'm still having the issue. Does this sound like a Coda problem, or has anyone heard of such a thing?

  • getting the user back where they came from with mod_form_auth

    - by bmargulies
    Using the mod_form_auth module in Apache HTTPD 2.4.3, I am looking for a way to have the user redirected to their original desired target after completing a login. That is, if I have a

        <Location /protected>
            ... form auth config here ...
        </Location>

    the user might browse to /protected/a or to /protected/b. In either case, they will be presented with the login form. However, as far as I can see, I must specify a single 'success' URL. I'm wondering if I'm missing some Apache feature that would allow me to, for example, make the redirect to the login form go to something like:

        https://login.html?origTarget=/protected/a

    via some syntax on the AuthForLoginRequiredLocation statement?

  • Why do different browsers return different search results at Google and how can I prevent it?

    - by Sei
    I am running some websites and am constantly checking keyword rankings on google.com, and it is really important for us to see the organic search results without logging in or setting a specific location. This morning my colleague and I checked the same ranking in both IE and Firefox, and the results are surprisingly very different (it almost feels like IE is logged in, because the ranking is much higher, while in reality it is not). I have changed computers and the same problem occurred. It did not happen before. Can anyone tell me why this is?

  • Tips for managing internal and external links using WordPress [closed]

    - by keruilin
    So I'm looking for ways to optimize my site for user and search engine purposes. I've read several articles and looked at several different plugins. To say the least, I'm thoroughly confused as to what the best practices are for managing internal and external links. Here is a list of some of my questions:

        Which internal links should be set to "nofollow"?
        Which external links should be set to "nofollow"?
        To what degree does actively managing links contribute to your PR?
        Should you use "nofollow" blindly on all links in comments?
        If a link to an external site is broken (404 or whatever), should you "nofollow" that link? What about "noindex"?

    As you can see, lots of questions. I'm hoping that you experienced webmasters can give a newb some best-practice advice.

  • In Joomla how to change the module mod_news_show_gk3?

    - by Emerson
    How can I change the mod_news_show_gk3 module to do the following?

        Change the size of the post image. It uses the first image of the post, but it seems not to be resizing the image.
        Add an extra field: it is important to show on the main page what the source of the post is. I would like to add an extra field at post-editing time, namely "source", and then show the source below the title and before the text.
        Change the title size. The title is oversized and I couldn't find any way to decrease it.
        Add a border around the section.

    Here is the address: http://central.antinovaordemmundial.com/ Thanks!
