Search Results

Search found 25614 results on 1025 pages for 'content filter'.


  • 5 Tips to Ensure That Your SEO Web Content Increases Conversions

    A lot of people treat SEO content as a means of driving traffic online, which is true - but traffic is not all you want. An honest assessment would help you understand that the purpose of web content is not only to attract traffic but also to help you convert prospects into customers.

    Read the article

  • Using Keywords to Create SEO Friendly Content

    So you have your site up and running, and now you are about to load it with content. You figure it's time to get writing - but before you do, you should know that not all articles are created equal! If you want to maximize your chances of ranking well in search engines, the first step in creating SEO-friendly content is understanding how to use keywords!

    Read the article

  • SEO Content - A Major Part of Your SEO Strategy

    Search Engine Optimization is a dynamic process, and it involves a lot of factors that can broadly be divided into on-page and off-page factors. Among the on-page factors, the content presented on the web page plays a very significant role in determining the rank of that page. With the right kind of SEO content you can increase the relevance of the page for the search engine, making it rank higher for that particular keyword.

    Read the article

  • Generating Unique Content - How to Do it For Your Website

    Well, the first question is: why do you need unique content on your website? It really all comes down to the search engines; they want original content that yields unique results, making for a much better experience for users and hopefully ensuring that they will return to the site again.

    Read the article

  • Is Content Important For SEO - Yes Or No?

    Those of us who have been around search marketing for any time at all have heard this statement far more times than we can remember. I realize it seems repetitive; however, until more businesses do a far better job of concentrating on their website content, it's worth repeating. Effectively composed content is essential.

    Read the article

  • Forget the Hat - Just Write Compelling Content

    Search engines and searchers are looking for one thing: relevant content that provides an answer to the searcher's query. This is exactly what we should be providing when we produce content for our blogs or websites. Not a day goes by when my mailbox is not full of emails promising top rankings in Google or Bing just by using this or that technique or tool.

    Read the article

  • The Website Content

    What is on your website? The content of your website says a great deal about your traffic and ranking. This article briefly discusses website content and meta tags.

    Read the article

  • Creating Dynamic Web Content

    The most important part of your website is the quality of its content. Without good content you will not get the search engine ranking you need to be successful.

    Read the article

  • How to calculate Content-Length for a file download within Kohana PHP?

    - by moritzd
    I'm trying to send a file from within a Kohana model to the browser, but as soon as I add a Content-Length header, the file doesn't start downloading right away. The problem seems to be that Kohana is already outputting buffers. An ob_clean at the beginning of the script doesn't help, though. Adding ob_get_length() to the Content-Length isn't helping either, since it just returns 0. The getFileSize() function returns the right number: if I run the script outside of Kohana, it works. I read that exit() still calls all destructors, and that something might be output by Kohana afterwards, but I can't find out what exactly. Hope someone can help me out here... This is the piece of code I'm using:

        public function download() {
            header("Expires: ".gmdate("D, d M Y H:i:s", time() + (3600 * 7))." GMT");
            header("Content-Type: ".$this->getFileType());
            header("Content-Transfer-Encoding: binary");
            header("Last-Modified: ".gmdate("D, d M Y H:i:s", $this->getCreateTime())." GMT");
            header("Content-Length: ".($this->getFileSize() + ob_get_length()));
            header('Content-Disposition: attachment; filename="'.basename($this->getFileName()).'"');
            ob_end_flush();
            readfile($this->getFilePath());
            exit();
        }
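
    A common approach to this kind of problem (a hedged sketch, not from the thread) is to discard every output buffer the framework has opened before emitting the headers, so the Content-Length can be the file size alone:

        // Minimal sketch, assuming the same getFileType/getFileSize/getFileName/
        // getFilePath accessors as the question. Drop all nested output buffers
        // so nothing precedes the file bytes.
        public function download() {
            while (ob_get_level() > 0) {
                ob_end_clean(); // discard buffered output instead of flushing it
            }
            header('Content-Type: '.$this->getFileType());
            header('Content-Transfer-Encoding: binary');
            header('Content-Length: '.$this->getFileSize()); // exact file size, no padding
            header('Content-Disposition: attachment; filename="'.basename($this->getFileName()).'"');
            readfile($this->getFilePath());
            exit();
        }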

    Read the article

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve the wrong content. Any given piece of content suddenly shows something else: sometimes a JPEG loads as a stylesheet instead, or a stylesheet loads as a page, or a page as an image. The images move around; sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a stylesheet, Apache shows that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content when using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%:

        ZPublisher.Conflict ConflictError at \path\to\object: database conflict error
        (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah,
        serial currently committed blah) (X conflicts (0 unresolved) since startup blah)

    I pulled that off the web - it's not from our logs, but it's the same message. This may be a red herring; it sounds like those messages are normal. We've updated to 3.3.5, same problems. I'm at a loss. I'm wondering if there is a good way to intercept what is being served? Secondly, is there a way to increase the verbosity of the access log to include the content type? I've even seen the problem manifest in the ZMI. It happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways every time we reload. I believe we've seen this problem for a couple of years, but it was very intermittent - a page would show the content of a GIF, then after a reload it wouldn't happen again for a long time. Now it's a huge problem.

    Read the article

  • Supporting users if they're not on your site

    - by Roger Hart
    Have a look at this Read Write Web article, specifically the paragraph in bold and the comments. Have a wry chuckle, or maybe weep for the future of humanity - your call. Then pause, and worry about information architecture. The short story: Read Write Web bumps up the Google rankings for "Facebook login" at the same time as Facebook makes UI changes, and a few hundred users get confused and leave comments on Read Write Web complaining about not being able to log in to their Facebook accounts.*

    Blindly clicking the first Google result is not a navigation behaviour I'd anticipated for folks visiting big-name sites like Facebook. But then, I use Launchy and don't know where any of my files are, depend on Firefox auto-complete, view Facebook through my IM client, and don't need a map to find my backside with both hands. Not all our users behave in the same way, which means not all of our architecture is within our control, and people can get to your content in all sorts of ways. Even if the Read Write Web episode is a prank of some kind (there are, after all, plenty of folks who enjoy orchestrated trolling), it's still a useful reminder: your users may take paths through and to your content you cannot control, and they are unlikely to deconstruct their assumptions along the way.

    I guess the meaningful question is: can you still support those users? If they get to you from Google instead of your front door, does what they find still make sense? Does your information architecture still work if your guests come in through the bathroom window? Ok, so here they broke into the house next door - you can't be expected to deal with that. But the rest is well worth thinking about.

    Other off-site interaction

    It's rarely going to be as funny as the comments at Read Write Web, but your users are going to do, say, and read things they think of as being about you and your products, in places you don't control. That's good. If you pay attention to it, you get data. Your users get a better experience. There are easy wins, too.

    Blogs, forums, social media &c.

    People may look for and find help with your product on blogs and forums, on Twitter, and what have you. They may learn about your brand in the same way. That's fine; it's an interaction you can be part of. It's time-consuming, certainly, but you have the option. You won't get a blogger to incorporate your site navigation just in case your users end up there, but you can be there when they do. Again, Anne Gentle, Gordon McLean and others have covered this in more depth than I could.

    Direct contact

    Sales people, customer care, support - they all talk to people. Are they sending links to your content? If so, which bits? Do they know about all of it? Do they have the content they need to support them - messaging that funnels sales, FAQs that are realistically frequent, detailed examples of things people want to do, that kind of thing? Are they sending links because users can't find the good stuff? Are they sending précis of your content, or re-writes, or brand new stuff? If so, does that mean your content isn't up to scratch, or that you've got content missing? Direct sales/care/support interactions are enormously valuable, and can help you know what content your users find useful. You can't have a table of contents or a "See also" in a phone call, but your content strategy can support more interactions than browsing.

    * Passing observation about Facebook: for plenty of folks, it is the internet. Its services are simple versions of what a lot of people use the internet for, and they're aggregated into one stop. Flickr, Vimeo, Wordpress, Twitter, LinkedIn, and all sorts of games have Facebook doppelgangers that are not only friendlier to entry-level users, they're right there, behind only one layer of authentication. As such, it could own a lot of interaction convention. Heavy users may well not be tech-savvy, and may be quite change-averse. That doesn't make this episode not dumb, but I'm happy to go easy on 'em.

    Read the article

  • Django filter vs exclude

    - by Enrico
    Is there a difference between filter and exclude in Django? If I have self.get_query_set().filter(modelField=x) and I want to add another criterion, is there a meaningful difference between the following two lines of code?

        self.get_query_set().filter(user__isnull=False, modelField=x)
        self.get_query_set().filter(modelField=x).exclude(user__isnull=True)

    Is one considered better practice, or are they the same in both function and performance?

    Read the article

  • DRUPAL: order exposed filter items, be careful, it is not that simple (I cannot use "Sort")

    - by Patrick
    Hi, Drupal question. I'm using Views with an exposed filter (Taxonomy). I've downloaded the "Better Exposed Filters" module to display it as a checkbox list. Now, how can I order the tags in the filter list? "Views Sort" is not the solution, because it only orders articles, not the filter items! I want to add an option (checkbox) for the customer to order the tags alphabetically or leave them in the default order. Thanks

    Read the article

  • How do I redirect to the current page in Servlet Filter?

    - by JeffJak
    I have a page, say /myapp/test.jsp?queryString=Y. The filter needs to redirect to the current page. It should go to /myapp/test.jsp (without the query string). The code below seems to bring it to the context root, /myapp, instead. I am running in WAS 6.1.

        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest httpReq = (HttpServletRequest) req;
            HttpServletResponse httpResp = (HttpServletResponse) resp;
            boolean blnNeedToRedirect = true;
            if (blnNeedToRedirect) {
                httpResp.sendRedirect(".");
                return;
            }
            chain.doFilter(req, resp);
        }

    Read the article

  • How to code an efficient blacklist filter function in php?

    - by achairapart
    So, I have three arrays like this:

        [items] => Array
        (
            [0] => Array
            (
                [id] => someid
                [title] => sometitle
                [author] => someauthor
                ...
            )
            ...
        )

    and also a string with comma-separated words to blacklist:

        $blacklist = "some,words,to,blacklist";

    Now I need to match these words against id, title, and author, and show results accordingly. I was thinking of a function like this:

        $pattern = '('.strtr($blacklist, ",", "|").')'; // should return (some|words|etc)
        foreach ($items as $item) {
            if ( !preg_match($pattern, $item['id']) ||
                 !preg_match($pattern, $item['title']) ||
                 !preg_match($pattern, $item['author']) ) {
                // show item
            }
        }

    I wonder if this is the most efficient way to filter the arrays, or whether I should use something with strpos() or filter_var() with FILTER_VALIDATE_REGEXP... Note that this function is repeated for 3 arrays. However, each array will not contain more than 50 items.
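
    For what it's worth, a hedged sketch of one alternative (not from the thread): build the pattern once with proper delimiters and escaping, and keep an item only when every field is clean. Note that the original condition uses ||, which shows an item as soon as a single field is clean; requiring all fields to be clean needs &&, or a filter like this:

        // Minimal sketch, assuming $items and $blacklist as in the question.
        $words = array_map(function ($w) {
            return preg_quote(trim($w), '/'); // escape regex metacharacters
        }, explode(',', $blacklist));
        $pattern = '/\b(' . implode('|', $words) . ')\b/i'; // one compiled regex, reused

        $visible = array_filter($items, function ($item) use ($pattern) {
            foreach (array('id', 'title', 'author') as $field) {
                if (preg_match($pattern, $item[$field])) {
                    return false; // any blacklisted word hides the item
                }
            }
            return true;
        });

    At 50 items per array, a single precompiled pattern like this is plenty fast; strpos() only wins if whole-word matching doesn't matter.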

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server, WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server, WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break.

    Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale.

    I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think?

    Thanks for your help,
    Richard

    Read the article

  • How to cache dynamic javascript/jquery/ajax/json content with Akamai

    - by Starfs
    Trying to wrap my head around how things are cached on a CDN, and it is new territory for me. In the document we received about sending in environment requests, it says "Dynamically-generated content will not benefit much from EdgeSuite". I feel like this is a simplified statement, and there has to be a way to cache dynamically generated content if the tools are configured correctly. The site we are working with runs off a WordPress database, and uses JavaScript and Ajax to build the pages, based on the JSON objects that PHP scripts have generated. The process: the user's browser hits the URL, the browser talks to the EdgeSuite tools, which will have cached certain pre-defined elements, and then requests from the host web server anything that is not cached; once EdgeSuite has compiled a combination of the two, it sends that information back to the browser. Can we not simply cache all JSON objects (and of course images, JS, CSS) so that the web browser never has to hit the host server's database, at which point, in essence, we have cached our dynamic content? Does anyone have any pointers on the most efficient configuration for this type of system -- Akamai/CDN -- to serve JavaScript/Ajax/JSON-generated pages that ideally already hit pre-cached JSON data? Any and all feedback is welcome!
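
    One piece of the puzzle (a hedged sketch, not from the thread): edge caching of JSON usually starts with the origin emitting explicit cache headers, which the PHP scripts behind WordPress can do before printing their output. Whether the edge honors these depends on how the Akamai configuration is set up, so treat the values as placeholders:

        // Minimal sketch: cache headers emitted by the (assumed) PHP endpoint
        // that generates the JSON. max-age governs browsers; s-maxage governs
        // shared caches such as a CDN edge.
        header('Content-Type: application/json');
        header('Cache-Control: public, max-age=300, s-maxage=3600');
        echo json_encode($payload); // $payload: whatever the script builds from WordPress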

    Read the article

  • Publish Static Content to WebLogic

    - by James Taylor
    Most people know WebLogic has a built-in web server. Typically this is not an issue, as you deploy Java applications and WebLogic publishes them to the web. But what if you just want to display a simple static HTML page? In WebLogic you can develop a simple web application to display static HTML content. In this example I used WLS 10.3.3, and I want to display 2 files: an HTML file, and an XSD for reference.

    1. Create a directory of your choice; this is what I will call the document root: mkdir /u01/oracle/doc_root
    2. Copy the static files to this directory.
    3. In the document root directory created in step 1, create the directory WEB-INF: mkdir WEB-INF
    4. In the WEB-INF directory, create a file called web.xml with the following content:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
        <web-app>
        </web-app>

    5. Log in to the WebLogic console to deploy the application: click Deployments, then Lock & Edit.
    6. Click Install and set the path to the directory created in step 1.
    7. Leave the default "Install this deployment as an application" and click Next.
    8. Select a Managed Server to deploy to and click Next.
    9. Accept the defaults and click Finish.
    10. Once the deployment completes successfully, click Activate Changes. You should now see the application started in the Deployments list.

    You can now access your static content via the following URL: http://localhost:7001/doc_root/helloworld.html
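
    If the context root needs to be something other than the deployment name (the /doc_root segment in the URL above), a WEB-INF/weblogic.xml can set it explicitly. A minimal sketch, assuming WLS 10.3-era descriptors (the namespace varies by version):

        <?xml version="1.0" encoding="UTF-8"?>
        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
            <!-- serve this application at /doc_root regardless of the deployment name -->
            <context-root>/doc_root</context-root>
        </weblogic-web-app>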

    Read the article
