Search Results

Search found 37688 results on 1508 pages for 'site search'.

  • Best practice for extending a legacy site with another server-side programming platform

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have to put dynamic data from one intranet company database onto this site within one week: 2-3 pages. I contacted the company's site administrator and she showed me the administrative part - the CMS only allows inserting HTML blocks & managing the site map (the site is deployed on a machine that is inside the company & fully accessible & upgradeable). I'm not a PHP guy & I don't want to dive into a legacy CMS engine hardly anyone has ever heard of. I also don't want to contact the developer team, 'cos I'm not sure they are still around and capable of extending this old site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages & refer to the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is that OK, and what are the pitfalls of the iframe approach?
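
    A minimal sketch of what that embed could look like, pasted into the old CMS as one of its HTML blocks (the helper host name and page below are assumptions):

        <!-- HTML block inserted via the legacy CMS; host and page names are hypothetical -->
        <iframe src="http://helper.intranet.local/DynamicData.aspx"
                width="100%" height="600" frameborder="0"></iframe>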

  • Ruby on Rails: implement search with autocomplete

    - by user429400
    I've implemented a search box that searches the "Illnesses" table and the "Symptoms" table in my DB. Now I want to add autocomplete to the search box. I've created a new controller called "auto_complete_controller" which returns the autocomplete data. I'm just not sure how to combine the search functionality and the autocomplete functionality: I want the "index" action in my search controller to return the search results, and the "index" action in my auto_complete controller to return the autocomplete data. Please guide me on how to fix my HTML syntax and what to write in the js.coffee file. I'm using Rails 3.x with jQuery UI for the autocomplete, and I prefer a server-side solution. This is my current code:

    main_page/index.html.erb:

        <p>
          <b>Symptoms / Illnesses</b>
          <%= form_tag search_path, :method => 'get' do %>
            <p>
              <%= text_field_tag :search, params[:search] %> <br/>
              <%= submit_tag "Search", :name => nil %>
            </p>
          <% end %>
        </p>

    auto_complete_controller.rb:

        class AutoCompleteController < ApplicationController
          def index
            @results = Illness.order(:name).where("name like ?", "%#{params[:term]}%") +
                       Symptom.order(:name).where("name like ?", "%#{params[:term]}%")
            render json: @results.map(&:name)
          end
        end

    search_controller.rb:

        class SearchController < ApplicationController
          def index
            @results = Illness.search(params[:search]) + Symptom.search(params[:search])
            respond_to do |format|
              format.html # index.html.erb
              format.json { render json: @results }
            end
          end
        end

    Thanks, Li
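
    A sketch of the missing wiring, under these assumptions: the /auto_complete route name and the search_box element id are mine, and the jquery-ui widget is loaded. Give the text field an id and point the widget at the route; jQuery UI then sends the term parameter the controller above already reads.

        # config/routes.rb (hypothetical route name)
        get 'auto_complete' => 'auto_complete#index'

        <%# in the form: give the field an id the widget can find %>
        <%= text_field_tag :search, params[:search], :id => 'search_box' %>

        # app/assets/javascripts/main_page.js.coffee
        jQuery ->
          $('#search_box').autocomplete
            source: '/auto_complete'   # jQuery UI appends ?term=<typed text>
            minLength: 2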

  • Does sitewide html refactoring affect Google traffic?

    - by Name
    Good morning, I recently made a big structural change on my site, and the very next day the number of Google impressions went from 75,000 to 3,000, with a proportional drop in search traffic. No URLs were changed, and neither were the page titles or descriptions. Everything is exactly the same, just different looking - except that the site barely appears on Google anymore. Does anybody have a clue why?

  • Abnormal Alexa ranking score

    - by SteenhouwerD
    I have 2 e-commerce websites in France, and for these websites the Alexa results behave strangely. Here are some statistics:

        Unique visits (January 2012):  Website A: 158,828   Website B: 58,867
        Google search results:         Website A: 5,100     Website B: 56,000
        Links to my site:              Website A: 3,120     Website B: 2,180
        Alexa score:                   Website A: 405,804   Website B: 278,944

    How can website B, with 1/3 of the visitors of website A, have a much better (2x) Alexa score than website A?

  • Only one SharePoint site is not reachable

    - by Kabir Rao
    We have been facing a strange issue since morning: only one SharePoint site is not accessible. On our SharePoint 2003 server there are several sites configured for separate projects. Several IIS resets have been done; every other site is accessible except this one. When I try to map the site as a network drive, I am able to access the contents of the site. Please help ASAP. Thanks upfront.

  • How can I set up a dual-site Storage Daemon in Bacula (mirror the backup)

    - by Andy
    On site A, I have successfully set up a Bacula Director on one host, several File Daemons on the hosts I want to back up, and finally one Storage Daemon where the backup is actually stored. In case disaster struck the building at site A, I want a second Storage Daemon on another site, site B. The FileSets, Director, etc. would be the same, except that the jobs will be stored on the other Storage Daemon as well. Are there any best practices for this?
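
    A sketch of what this often looks like in bacula-dir.conf - a second Storage resource for site B plus a copy job that mirrors completed backups onto it. All names, addresses, and passwords below are assumptions, not Bacula defaults:

        # bacula-dir.conf -- second Storage Daemon at site B (hypothetical names)
        Storage {
          Name = site-b-sd
          Address = sd.site-b.example.com   # FQDN of the site B storage daemon
          SDPort = 9103
          Password = "changeme"
          Device = FileStorage              # must match the Device in site B's bacula-sd.conf
          Media Type = FileB                # a distinct Media Type per SD avoids volume mixups
        }

        # Copy job: duplicates site A jobs that have not yet been copied
        Job {
          Name = "CopyToSiteB"
          Type = Copy
          Selection Type = PoolUncopiedJobs
          Pool = SiteAPool                  # source pool; its "Next Pool" should point at a site B pool
          # Client, FileSet, Schedule, Messages as in your existing jobs
        }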

  • Copy CakePHP site under IIS webroot?

    - by dan giz
    I've got a CakePHP application that uses an MS SQL server, running on Windows Server 2008 R2 Enterprise with IIS 7.5. My application is the only website running and is installed in wwwroot, with the CakePHP core installed to c:\inetpub (one level up from the site). I want to copy this site so I can have a development version to work on without conflicting with the live site. How do I go about doing this? I'm confused, since the previous setups I've seen have a different folder in wwwroot for each site.
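
    A sketch of one way to do it, assuming you can add a second IIS site on another port (the paths, site name, and port below are assumptions). The copied app/webroot/index.php can keep pointing CAKE_CORE_INCLUDE_PATH at the shared core in c:\inetpub, and the dev copy should get its own database in app/config/database.php.

        REM copy the live app to a dev folder (hypothetical paths)
        xcopy /E /I C:\inetpub\wwwroot C:\inetpub\dev-wwwroot

        REM register a second IIS site bound to port 8080
        %windir%\system32\inetsrv\appcmd add site /name:"cake-dev" ^
            /physicalPath:"C:\inetpub\dev-wwwroot" /bindings:http/*:8080: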

  • Why is my site not on Google? [closed]

    - by RD
    I wanted to post a link here, but some people might see that as advertising. So, instead, I'm going to phrase my question like this: what can I do to make sure my site appears on Google? I have already done the following:

    - submitted my sitemap
    - added my site at www.google.com/addurl
    - added Analytics to my site
    - checked in Webmaster Tools whether there are crawl errors

    But still, after about three or four days, the crawler hasn't crawled my site. What am I missing?

  • Why is my index page shown like this in the Google search engine?

    - by Riaz Khan
    Hi, the problem is this: here is my website URL, http://pkbazaar.com. When I search for my site on Google, the search engine shows an "Index of" page as the heading, and dates, Apache version info, etc. as the content, like this:

        Index of http://pkbazaar.com/
        Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8m DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at pkbazaar.com

    The question is: why does Google not show the original content and headings that I used on my website? Can anybody help me, please?
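
    That snippet is an Apache auto-generated directory listing, which suggests Google crawled the site while no index document was being served at the root. If that is the cause, a sketch of the usual Apache fix (in .htaccess or the vhost):

        DirectoryIndex index.php index.html   # serve a real page for the root URL
        Options -Indexes                      # never fall back to a generated listing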

  • SQL Server 2008 full-text search doesn't find word in words?

    - by Martijn
    In the database I have a field with a .mht file. I want to use FTS to search in this document. I got this working, but I'm not satisfied with the result. For example (sorry, it's in Dutch, but I think you get my point) I will use two words: 'zieken' and 'ziekenhuis'. As you can see, the word 'zieken' is contained in the word 'ziekenhuis'. When I search for 'ziekenhuis' I get about 20 results. When I search for 'zieken' I get 7 results. How is this possible? I mean, why doesn't the FTS return at least the results that I get for 'ziekenhuis'? Here's the query I use:

        SELECT DISTINCT d.DocID 'Id', d.Titel,
               (SELECT afbeeldinglokatie FROM tbl_Afbeelding
                WHERE soort = 'beleid') AS Pic,
               'belDoc' AS DocType
        FROM docs d
        JOIN kpl_Document_Lokatie dl ON d.DocID = dl.DocID
        JOIN HandboekLokaties hb ON dl.LokatieID = hb.LokatieID
        WHERE hb.InstellingID = @instellingId
          AND ( FREETEXT(d.Doel, @searchstring)
             OR FREETEXT(d.Toepassingsgebied, @searchstring)
             OR FREETEXT(d.HtmlDocument, @searchstring)
             OR FREETEXT(d.extraTabblad, @searchstring) )
          AND d.StatusID NOT IN (1, 5)
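
    For what it's worth, FREETEXT matches whole words (and their inflectional forms), never substrings, so 'zieken' cannot match inside 'ziekenhuis' - the two searches simply hit different word sets. The closest full-text equivalent is a prefix search with CONTAINS; a sketch against one of the columns above:

        -- prefix term: matches zieken, ziekenhuis, ziekenhuizen, ...
        SELECT d.DocID
        FROM docs d
        WHERE CONTAINS(d.HtmlDocument, '"zieken*"');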

  • How do you search for symbols without knowing what they're called?

    - by froadie
    This is a question that's been bothering me for a while, and it's likely that there's no way around it. But I was curious if anyone has any techniques for dealing with this. How do you search on a symbol using a search engine (such as Google or Bing) without knowing the symbol's name...? Most standard search engines seem to ignore symbols... (e.g. @, ^, etc.) Are there any engines that don't? Is there any other way to do this?

  • How can I get rid of horizontal padding or indent in html5 search inputs in webkit?

    - by Scott Vandehey
    In webkit only, the text in a search input is indented from the left side. Here's a demo. Even after stripping all padding, text-indent, and setting -webkit-appearance to textfield or none, the text is still indented. It looks to be around 10px or so, but the inspector doesn't show any CSS rules (even browser defaults) that seem to apply this style. Any ideas?

        <input type="search" value="Search">

        -webkit-appearance: textfield;
        border: 1px solid #ccc;
        padding: 0;
        margin: 0;
        text-indent: 0;
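
    The indent usually comes from WebKit's built-in search-field decorations (the magnifier / results area), which are pseudo-elements rather than normal CSS rules, so the inspector won't list them. A sketch that removes them, in addition to the resets above:

        input[type="search"]::-webkit-search-decoration,
        input[type="search"]::-webkit-search-results-decoration,
        input[type="search"]::-webkit-search-results-button {
          display: none;   /* drop WebKit's injected search chrome */
        }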

  • Using local DNS and public DNS during site development

    - by ChrisFM
    I'm a web designer and I often develop new sites for existing businesses. Sometimes I find it useful to point my DNS address (for my personal computer) to the development server's local DNS (instead of Google's 8.8.8.8 or the default ISP's address). I like this as it lets me check that the new site's internal links, etc., work before switching the authoritative DNS over from the old site. However, outgoing links (like a Google map) will not resolve with my local DNS. The first thing I thought was: oh, I just need to fill out my DNS directory with Google and every other domain I might need to link to... wait, that sounds insane. I was wondering if anyone could give me insight into a better or more functional way to use local DNS answers when they're available and public DNS answers when they are not - kind of like a DNS 'roll-over'? Or maybe a completely different approach to development altogether. Thanks in advance for your insight. All the best!
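
    A common alternative that behaves like that roll-over: leave the resolver pointed at public DNS and override only the domains under development in the local hosts file, so every other name still resolves publicly. A sketch (the address and domain are placeholders):

        # /etc/hosts (C:\Windows\System32\drivers\etc\hosts on Windows)
        192.0.2.10   newclientsite.com www.newclientsite.com   # hypothetical dev server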

  • Twitter User/Search Feature Header Support in LINQ to Twitter

    - by Joe Mayo
    LINQ to Twitter’s goal is to support the entire Twitter API. So, if you see a new feature pop up, it will be in-queue for inclusion. The same holds for the new X-Feature… response headers for User/Search requests. However, you don’t have to wait for a special property on the TwitterContext to access these headers; you can just use them via the TwitterContext.ResponseHeaders collection. The following code demonstrates how to access the new X-Feature… headers with LINQ to Twitter:

        var user =
            (from usr in twitterCtx.User
             where usr.Type == UserType.Search &&
                   usr.Query == "Joe Mayo"
             select usr)
            .FirstOrDefault();

        Console.WriteLine(
            "X-FeatureRateLimit-Limit: {0}\n" +
            "X-FeatureRateLimit-Remaining: {1}\n" +
            "X-FeatureRateLimit-Reset: {2}\n" +
            "X-FeatureRateLimit-Class: {3}\n",
            twitterCtx.ResponseHeaders["X-FeatureRateLimit-Limit"],
            twitterCtx.ResponseHeaders["X-FeatureRateLimit-Remaining"],
            twitterCtx.ResponseHeaders["X-FeatureRateLimit-Reset"],
            twitterCtx.ResponseHeaders["X-FeatureRateLimit-Class"]);

    The query above is from the User entity, whose type is Search, allowing you to search for the Twitter user whose name is specified by the Query parameter filter. After materializing the query with FirstOrDefault, twitterCtx will hold all of the headers, including X-Feature…, that Twitter returned. Running the code above will display results similar to the following:

        X-FeatureRateLimit-Limit: 60
        X-FeatureRateLimit-Remaining: 59
        X-FeatureRateLimit-Reset: 1271452177
        X-FeatureRateLimit-Class: namesearch

    In addition to the X-Feature… headers, a capability you might have noticed is that the TwitterContext.ResponseHeaders collection will contain any HTTP headers that Twitter sends back from a query. Therefore, you’ll be able to access new Twitter headers anytime in the future with LINQ to Twitter. @JoeMayo

  • Google Webmaster Tools: strange 404 errors referred from the same site

    - by Out of Control
    Starting about a month ago, I noticed a sudden increase in 404 errors in Webmaster Tools for one of my sites (over 1,400 errors so far). All the errors are referred from my own site to non-existent pages. The 404 error URLs are all of the same format:

        http://www.helloneighbour.com/save/1347208508000

    The number on the end appears to be a Unix timestamp followed by 3 zeros (i.e. millisecond precision). The referring page, in this case, is:

        http://www.helloneighbour.com/save/cmw-insurance-insurance-burnaby

    When I look at the source code of that page, or use Webmaster Tools to view the page as Google sees it, I can't find any link that comes close to the one above. I built the site, and I can't find anything that might be generating these false links either. The server logs (access and error) don't show Google or anyone else trying to access these links. I've marked all these pages as fixed and waited a couple of weeks, only to see the errors come back over the last few days. I'm wondering if anyone else has seen anything strange like this, or if someone might have a way for me to debug and replicate this error myself.

  • MS Bing web crawler out of control causing our site to go down

    - by akaDanPaul
    Here is a weird one that I am not sure what to do about. Today our company's e-commerce site went down. I tailed the production log and saw that we were receiving a ton of requests from the range of IPs 157.55.98.0-157.55.100.0. I googled around and found out that it is an MSN web crawler. So essentially the MS web crawler overloaded our site, causing it not to respond, even though our robots.txt file contains the following: Crawl-delay: 10. What I did was ban the IP range in iptables. But what I am not sure about is how to follow up. I can't find anywhere to contact Bing about this issue, I don't want to keep those IPs blocked because I am sure we will eventually get de-indexed from Bing, and it doesn't really seem like this has happened to anyone else before. Any suggestions? Update - my server / web stats: our web server uses Nginx, Rails 3, and 5 Unicorn workers. We have 4 GB of memory and 2 virtual cores. We have been running this setup for over 9 months now and never had an issue; 95% of the time our system is under very little load. On average we receive 800,000 page views a month, and this never comes close to bringing down or slowing our web server. Looking at the logs, we were receiving anywhere from 5 up to 40 requests/second from this IP range. In all my years of web development I have never seen a crawler hit a website so many times. Is this new with Bing?
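
    Two follow-up details, both sketches: a Crawl-delay directive is only honored when it sits inside an explicit user-agent group in robots.txt, and the quoted range can be covered (approximately) by two iptables rules while you wait - Bing Webmaster Tools also has a crawl-control setting worth trying.

        # robots.txt -- the delay must belong to a user-agent group
        User-agent: bingbot
        Crawl-delay: 10

        # iptables -- roughly 157.55.98.0 through 157.55.100.255
        iptables -A INPUT -s 157.55.98.0/23 -j DROP
        iptables -A INPUT -s 157.55.100.0/24 -j DROP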

  • Site in subdomain (MaraDNS + Nginx)

    - by Grzegorz
    Hello, I'm currently doing some experiments on my VPS with Ubuntu. I've installed MaraDNS with Nginx, and at this moment I've got a static site correctly launched and available from the Internet (maindomain.com). As the next step I want to add a new site which will be available on a subdomain, for example dev.maindomain.com. I've tried this db.maindomain.com file (used by MaraDNS):

        maindomain.com.     xxx.xxx.xxx.xxx
        www.maindomain.com. CNAME maindomain.com.
        dev.maindomain.com. xxx.xxx.xxx.xxx

    where xxx.xxx.xxx.xxx is the VPS IP address. In nginx.conf I have:

        server {
            listen 80;
            server_name maindomain.com;
            access_log /var/log/nginx/maindomain.com.log;
            location / {
                root /var/www/maindomain.com;
                index index.html;
            }
        }
        server {
            listen 80;
            server_name dev.maindomain.com;
            access_log /var/log/nginx/dev.maindomain.com.log;
            location / {
                root /var/www/dev.maindomain.com;
                index index.html;
            }
        }

    With this configuration maindomain.com works properly, but dev.maindomain.com isn't available. When I try ping dev.maindomain.com, I get my xxx.xxx.xxx.xxx IP. Do you have any suggestions on how to resolve this problem?

  • Adding Suggestions to the SharePoint 2010 Search Programmatically

    - by Ricardo Peres
    There are numerous pages that show how to do this with PowerShell, but I found none on how to do it with plain old C#, so here goes! To abbreviate, I wanted to have SharePoint suggest the site collection users' names after the first letters are filled in a search site's search box. Here's how I did it:

        //get the Search Service Application (replace with your own name)
        SearchServiceApplication searchApp = farm.Services.GetValue<SearchQueryAndSiteSettingsService>()
            .Applications.GetValue<SearchServiceApplication>("Search Service Application") as SearchServiceApplication;

        Ranking ranking = new Ranking(searchApp);

        //replace EN-US with your language of choice
        LanguageResourcePhraseList suggestions = ranking.LanguageResources["EN-US"].QuerySuggestionsAlwaysSuggestList;

        foreach (SPUser user in rootWeb.Users)
        {
            suggestions.AddPhrase(user.Name, String.Empty);
        }

        //get the job that processes suggestions and run it
        SPJobDefinition job = SPFarm.Local.Services.OfType<SearchService>()
            .SelectMany(x => x.JobDefinitions)
            .Where(x => x.Name == "Prepare query suggestions")
            .Single();
        job.RunNow();

    You may do this, for example, in a feature. Of course, feel free to change users for something else; all suggestions are treated as pure text.

  • AdSense click bot is click bombing my site

    - by Graham
    I have a site that gets roughly 7,000-10,000 page views per day right now. Starting around 1 AM on 7/1/12, I noticed the CTR was rising dramatically. These clicks would be credited and then de-credited soon after, so they were obviously fraudulent clicks. The next day I had about 200 clicks in my account, with about 100 of them being fraudulent. It's about 3-8 per hour, evenly dispersed across each of the three ads, 24 hours a day, which leads me to believe that it's some sort of AdSense click bot. Also, I removed the ads last evening, then put them back up around 3 AM, and the invalid clicks started within 10 minutes. I signed up for statcounter.com to analyze the exit links on the AdSense ads, then conditionally blocked ads for the IP address of the person / bot I suspected of doing this. But I think the bot has several proxies to choose from and can refresh IP addresses. I've notified Google through the invalid click form / email 4 times over the past two days to let them know I'm aware of the situation and am working on a solution. I've also temporarily removed all ads on that site. How can I block a bot like this? Thank you.

  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles on a multilingual site. I want to be able to write an article in two languages and have them available on separate URLs:

        thesite.foo/english-breakfast
        thesite.com/engelsk-frukost

    If the user's web browser is set to English, I'd like to show a small notice at the top of the Swedish version with a link to the English one. The link should have an appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html). There should be a way to list all articles belonging to these sets: Swedish only, English only, Swedish versions + English only, English versions + Swedish only. I'd like to publish these as four RSS feeds, and I would like to have two versions of the main site: one in Swedish (showing Swedish versions + English only) and one in English (showing English versions). I shall be able to write the articles using Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some kind of way for me to protect myself against comment spam. I am leaning towards learning Drupal; I suspect I'll have to code this behavior myself as a module. To be frank, I'd rather work with Java. Is Drupal the way to go? Or is there something more suitable for this project?
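
    For reference, the translation links the post describes look like this in HTML, whatever platform ends up rendering them (the URLs reuse the post's own examples):

        <!-- on the Swedish page, pointing at its English translation -->
        <link rel="alternate" hreflang="en" href="http://thesite.foo/english-breakfast" />
        <!-- the visible notice can carry the same attributes -->
        <a rel="alternate" hreflang="en" href="http://thesite.foo/english-breakfast">Read this in English</a>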

  • Video for an ads-driven web-site

    - by AntonAL
    I have a website which I will fill with a bunch of useful videos. I've implemented an ads-rotation engine for articles and will do the same for videos. The next milestone is to decide how video will be integrated. There are two ways:

    1. Host the videos myself. Pros: complete freedom. Cons: needs tens of gigabytes of storage, plus support for multiple formats to be cross-browser and cross-device.
    2. Use YouTube. Pros: very simple to use; nothing to do.

    What are the pros and cons of each way? Some questions for YouTube: Will I be able to control playback of YouTube-embedded video to make post-rolls? What is the ranking impact on my website when most of the pages refer to YouTube? Will, say, an iPad play video embedded via YouTube's iframe? Does relying entirely on YouTube have a long-term perspective for a website that should bring in money?
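
    On the iPad question specifically: YouTube's iframe embed chooses the player per device (HTML5 video where Flash is unavailable), which is exactly why it works on iOS. A sketch, with VIDEO_ID as a placeholder:

        <iframe width="560" height="315"
                src="http://www.youtube.com/embed/VIDEO_ID"
                frameborder="0" allowfullscreen></iframe>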

  • Can .htaccess slow down a site?

    - by Cody Sharp
    I'm working with a client on an e-commerce website. I implemented clean URLs using .htaccess. I also used .htaccess to solve canonical issues, such as redirecting www to non-www and removing index.php from the URL. The website recently began to slow down dramatically, sometimes not even loading. The site is hosted on GoDaddy, and when the client called GoDaddy they told him it was the .htaccess file slowing down the website. I find this highly unlikely given my past experience, but I'm not 100% sure. My thinking is that the client's website is most likely on a shared server in a busy neighborhood, thus slowing down the site. It's not always slow, but rather sporadic throughout the day: loading fast at some points and slow at others. Can the .htaccess file slow down a website to a crawl? If so, are there better ways to solve these problems with different rewrite rules and such? Here is what the actual .htaccess file looks like:

        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        RewriteCond %{HTTP_HOST} ^www.example.net [NC]
        RewriteRule ^(.*)$ http://example.net/$1 [L,R=301]

        RewriteRule ^products/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php?p=product&product_code=$1 [L]
        RewriteRule ^catalog/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php?p=catalog&catalog_code=$1 [L]
        RewriteRule ^pages/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php?p=page&page_id=$1 [L]
        RewriteRule ^index\.htm([l]?)$ index.php?p=home [L]
        RewriteRule ^site_map\.htm([l]?)$ index.php?p=site_map [L]

        RewriteCond %{QUERY_STRING} ^p=home$
        RewriteRule (.*) ? [R=permanent]

    (The first two rules originally appeared with a space instead of the ? before the query string, which looked like a copy-paste artifact and has been restored here.) I'm a .htaccess and regex novice, so any pointed-out mistakes would also help. Thank you.
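
    A rule set this small is very unlikely to crawl a server; per-request regex matching is cheap. The real .htaccess cost is that Apache re-reads the file on every request, which only matters at a scale shared hosting rarely reaches. If you ever get control of the server config, a sketch of the standard optimization - move the rules into the vhost and switch the directory to AllowOverride None so the per-request lookup disappears (the path below is an assumption):

        # httpd.conf / vhost -- hypothetical document root
        <Directory "/var/www/html">
            AllowOverride None    # Apache stops checking for .htaccess on each request
        </Directory>
        # ...and paste the RewriteRule block into the <VirtualHost> itself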

  • Site migration and SEO impact

    - by John Smith
    I'd greatly appreciate a response to the following question relating to site migration and SEO impact. Here's some background on how my domain name and site are currently configured. My domain name provider has the following settings:

    - host name @ is an A record and points to IP address x.x.x.x
    - host name www is an A record and points to IP address x.x.x.x
    - sub-domain host name new.example.com is an A record and points to IP address x.x.x.x

    My hosting provider has the following settings:

    - host record @ is an A record and points to IP address x.x.x.x, folder home/public_html/old
    - host record www is a CNAME record and points to example.com
    - sub-domain host record new.example.com points to home/public_html/new

    I want to:

    - point the domain (example.com AND www.example.com) to the content hosted under folder home/public_html/new, which is currently the content directory for new.example.com
    - retire the content hosted under folder home/public_html/old
    - retire the sub-domain host record new.example.com

    I believe the easiest method of doing this is removing the sub-domain host record new.example.com and changing the following line in the .htaccess file in home/public_html from

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/old/

    to

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/new/

    But I don't understand how this will impact my SERP - ideally, I'd like it to remain the same. Research on this topic turned up the following Google page, which was no help, and this related StackExchange question, which suggests that this should not affect my SERP (at least, not permanently). But I wanted to make certain with a more specific example, and hopefully contribute to the community at the same time. I'd appreciate any feedback on this. Is there a better/recommended method to migrate sites this way? Is there an SEO impact?
