Search Results

Search found 21478 results on 860 pages for 'content assist'.

  • Is Content Important For SEO - Yes Or No?

    Those of us who have been around search marketing for any time at all have heard this statement more times than we can remember. I realize it seems repetitive, but until more businesses do a far better job of concentrating on their website content, it is worth repeating: effectively composed content is essential.

    Read the article

  • Forget the Hat - Just Write Compelling Content

    Search engines and searchers are looking for one thing: relevant content that answers the searcher's query. This is exactly what we should be providing when we produce content for our blogs or websites. Not a day goes by without my mailbox filling up with emails promising top rankings in Google or Bing through this or that technique or tool.

    Read the article

  • The Website Content

    What is on your website? The content of your website says a great deal about your traffic and ranking. This article briefly discusses website content and meta tags.

    Read the article

  • Creating Dynamic Web Content

    The most important part of your website is the quality of its content. Without good content you will not get the search engine ranking you need to be successful.

    Read the article

  • How to calculate Content-Length for a file download within Kohana PHP?

    - by moritzd
    I'm trying to send a file from within a Kohana model to the browser, but as soon as I add a Content-Length header, the file doesn't start downloading right away. The problem seems to be that Kohana is already outputting buffers. An ob_clean() at the beginning of the script doesn't help, and adding ob_get_length() to the Content-Length doesn't either, since it just returns 0. The getFileSize() function returns the right number: if I run the script outside of Kohana, it works. I read that exit() still calls all destructors, and it might be that something is output by Kohana afterwards, but I can't find out what exactly. Hope someone can help me out here. This is the piece of code I'm using:

        public function download()
        {
            header("Expires: ".gmdate("D, d M Y H:i:s", time() + (3600 * 7))." GMT");
            header("Content-Type: ".$this->getFileType());
            header("Content-Transfer-Encoding: binary");
            header("Last-Modified: ".gmdate("D, d M Y H:i:s", $this->getCreateTime())." GMT");
            header("Content-Length: ".($this->getFileSize() + ob_get_length()));
            header('Content-Disposition: attachment; filename="'.basename($this->getFileName()).'"');
            ob_end_flush();
            readfile($this->getFilePath());
            exit();
        }
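
    One pattern that often fixes this kind of symptom (a sketch, not Kohana-specific; the getFile*() accessors are the ones from the question): discard every open output buffer before sending headers, rather than flushing it, so the response body is exactly the file and the declared length is honest.

        public function download()
        {
            // Throw away anything the framework has already buffered,
            // at every nesting level, instead of flushing it into the response.
            while (ob_get_level() > 0) {
                ob_end_clean();
            }

            header('Content-Type: '.$this->getFileType());
            header('Content-Length: '.$this->getFileSize());
            header('Content-Disposition: attachment; filename="'.basename($this->getFileName()).'"');

            readfile($this->getFilePath());
            exit();
        }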

    Read the article

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve up the wrong content: any given item suddenly shows something else. Sometimes a JPEG loads as a stylesheet, or a stylesheet loads as a page, or a page as an image. The images move around; sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a stylesheet, Apache logs that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content when using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%:

        ZPublisher.Conflict ConflictError at \path\to\object: database conflict error (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah, serial currently committed blah) (X conflicts (0 unresolved) since startup blah)

    I pulled that off the web - it's not from our logs, but it's the same message. This may be a red herring; it sounds like those messages are normal. We've updated to 3.3.5, same problems. I'm at a loss. Is there a good way to intercept what is being served? Secondly, is there a way to increase the verbosity of the access log to include the content type? I've even seen the problem manifest in ZMI, and it happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways on every reload. I believe we've seen this problem for a couple of years, but it was very intermittent - a page would show the contents of a GIF, then after a reload it wouldn't happen again for a long time. Now it's a huge problem.
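
    On the access-log question: if Apache/mod_proxy is still in the path, mod_log_config can record the response Content-Type alongside the standard fields (a sketch; the log path and format nickname are assumptions):

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Content-Type}o\"" withctype
        CustomLog /var/log/apache2/access_ctype.log withctype

    That won't cover hits that go straight to Zope, but it makes mismatches between the requested resource and the served type easy to grep for.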

    Read the article

  • Supporting users if they're not on your site

    - by Roger Hart
    Have a look at this Read Write Web article, specifically the paragraph in bold and the comments. Have a wry chuckle, or maybe weep for the future of humanity - your call. Then pause, and worry about information architecture. The short story: Read Write Web bumps up the Google rankings for "Facebook login" at the same time as Facebook makes UI changes, and a few hundred users get confused and leave comments on Read Write Web complaining about not being able to log in to their Facebook accounts.* Blindly clicking the first Google result is not a navigation behaviour I'd anticipated for folks visiting big-name sites like Facebook. But then, I use Launchy and don't know where any of my files are, depend on Firefox auto-complete, view Facebook through my IM client, and don't need a map to find my backside with both hands. Not all our users behave in the same way, which means not all of our architecture is within our control, and people can get to your content in all sorts of ways. Even if the Read Write Web episode is a prank of some kind (there are, after all, plenty of folks who enjoy orchestrated trolling), it's still a useful reminder: your users may take paths through and to your content that you cannot control, and they are unlikely to deconstruct their assumptions along the way. I guess the meaningful question is: can you still support those users? If they get to you from Google instead of your front door, does what they find still make sense? Does your information architecture still work if your guests come in through the bathroom window? OK, so here they broke into the house next door - you can't be expected to deal with that. But the rest is well worth thinking about.

    Other off-site interaction: it's rarely going to be as funny as the comments at Read Write Web, but your users are going to do, say, and read things they think of as being about you and your products, in places you don't control. That's good. If you pay attention to it, you get data, and your users get a better experience. There are easy wins, too.

    Blogs, forums, social media &c.: people may look for and find help with your product on blogs and forums, on Twitter, and what have you. They may learn about your brand in the same way. That's fine; it's an interaction you can be part of. It's time-consuming, certainly, but you have the option. You won't get a blogger to incorporate your site navigation just in case your users end up there, but you can be there when they do. Again, Anne Gentle, Gordon McLean and others have covered this in more depth than I could.

    Direct contact: sales people, customer care, and support all talk to people. Are they sending links to your content? If so, which bits? Do they know about all of it? Do they have the content they need to support them - messaging that funnels sales, FAQs that are realistically frequent, detailed examples of things people want to do, that kind of thing? Are they sending links because users can't find the good stuff? Are they sending précis of your content, or re-writes, or brand new stuff? If so, does that mean your content isn't up to scratch, or that you've got content missing? Direct sales/care/support interactions are enormously valuable, and can help you know what content your users find useful. You can't have a table of contents or a "See also" in a phone call, but your content strategy can support more interactions than browsing.

    * Passing observation about Facebook: for plenty of folks, it is the internet. Its services are simple versions of what a lot of people use the internet for, aggregated into one stop. Flickr, Vimeo, Wordpress, Twitter, LinkedIn, and all sorts of games have Facebook doppelgangers that are not only friendlier to entry-level users, they're right there, behind only one layer of authentication. As such, it could own a lot of interaction convention. Heavy users may well not be tech-savvy, and may be quite change-averse. That doesn't make this episode not dumb, but I'm happy to go easy on 'em.

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server, WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server, WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers.

    On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale. I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard
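
    For reference, the local-copy route already in use for failover scales to an active/active pair: deployments keep targeting one server, and a scheduled mirror pushes to the other (a sketch; the paths, admin share and retry settings are assumptions):

        robocopy D:\Content \\WEB2\D$\Content /MIR /R:2 /W:5 /LOG:C:\Logs\content-sync.log

    /MIR mirrors the tree, including deletions, so any temporary inconsistency between the two copies lasts only until the next run - the trade-off weighed here against a UNC share.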

    Read the article

  • How to cache dynamic javascript/jquery/ajax/json content with Akamai

    - by Starfs
    Trying to wrap my head around how things are cached on a CDN - it's new territory for me. The document we received about sending in environment requests says "Dynamically-generated content will not benefit much from EdgeSuite". I feel like this is a simplified statement, and there has to be a way to cache dynamically generated content if the tools are configured correctly. The site we are working with runs off a WordPress database and uses JavaScript and Ajax to build the pages from the JSON objects that PHP scripts have generated. The process: the user's browser requests a URL; the browser talks to the EdgeSuite tools, which will have cached certain pre-defined elements, and which request from the host web server anything that is not cached; once EdgeSuite has compiled a combination of the two, it sends that information back to the browser. Can we not simply cache all the JSON objects (and of course images, JS and CSS) so that the web browser never has to hit the host server's database - at which point, in essence, we have cached our dynamic content? Does anyone have any pointers on the most efficient configuration for this type of system - Akamai/CDN - to serve JavaScript/Ajax/JSON-generated pages that ideally hit pre-cached JSON data? Any and all feedback is welcome!
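
    An edge server generally decides cacheability from the response headers, so one way to get the JSON objects cached (a sketch; the TTLs are assumptions, and the Akamai-specific Edge-Control header is only honored if the property is configured to respect it) is for the PHP scripts to mark their output explicitly:

        <?php
        // Hypothetical JSON endpoint; $data comes from the WordPress database.
        $data = get_menu_items();  // assumed helper

        header('Content-Type: application/json');

        // Let shared caches (including the CDN edge) store the response for 5 minutes.
        header('Cache-Control: public, max-age=300');

        // Akamai-specific TTL override, stripped before the response reaches the browser.
        header('Edge-Control: cache-maxage=5m');

        echo json_encode($data);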

    Read the article

  • Publish Static Content to WebLogic

    - by James Taylor
    Most people know WebLogic has a built-in web server. Typically this is not an issue, as you deploy Java applications and WebLogic publishes them to the web. But what if you just want to display a simple static HTML page? In WebLogic you can develop a simple web application to display static HTML content. In this example I used WLS 10.3.3. I want to display 2 files: an HTML file, and an XSD for reference.

    1. Create a directory of your choice; this is what I will call the document root:

           mkdir /u01/oracle/doc_root

    2. Copy the static files to this directory.
    3. In the document root directory created in step 1, create the directory WEB-INF:

           mkdir WEB-INF

    4. In the WEB-INF directory, create a file called web.xml with the following content:

           <?xml version="1.0" encoding="UTF-8"?>
           <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
           <web-app>
           </web-app>

    5. Log in to the WebLogic console to deploy the application.
    6. Click on Deployments.
    7. Click on Lock & Edit.
    8. Click Install and set the path to the directory created in step 1.
    9. Leave the default "Install this deployment as an application" and click Next.
    10. Select a Managed Server to deploy to and click Next.
    11. Accept the defaults and click Finish.
    12. When deployment completes successfully, click Activate Changes.

    You should now see the application started in the deployments, and you can access your static content via the following URL: http://localhost:7001/doc_root/helloworld.html
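
    A side note: with this approach the context root defaults to the deployment name, which is why the URL above contains /doc_root. If a different path is wanted, a weblogic.xml next to web.xml can set it explicitly (a sketch; the /static root is an assumption):

        <?xml version="1.0" encoding="UTF-8"?>
        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
            <!-- serve the same static files from http://localhost:7001/static/ instead -->
            <context-root>/static</context-root>
        </weblogic-web-app>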

    Read the article

  • iFrame content pageviews not matching parent page pageviews

    - by surfbird0713
    I have a page with content hosted in an iFrame, both using the same GA account ID. When I look at the pages report, the parent page has about 9000 unique views, but the iFrame content only has 3700. Anyone have an idea what could cause that kind of discrepancy? My only guess is that it would be caused by people moving on before the iFrame content has a chance to load, but the average time on page for the host page is 56 seconds, so that doesn't seem possible. This is the page in question: http://cookware.lecreuset.com/cookware/content_le-creuset-lid_10151_-1_20002 The flipbook is hosted in the iFrame on a separate domain. I have each page of the flipbook triggering a virtual pageview to try to evaluate engagement with the book - when the flipbook loads, it fires a pageview for the page it is on, so that is the page I'm using for the 3700 number. I also looked at the source of the iFrame in the pages report, and that number just about matches the virtual pageviews so that piece is consistent. Any ideas on this are much appreciated. Thanks!
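
    For context, the virtual pageviews described above would, with the classic asynchronous ga.js API of this era, look something like the following sketch; the account ID and virtual paths are placeholders:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXX-Y']);          // same account ID as the parent page
        _gaq.push(['_trackPageview', '/flipbook/page-1']);  // fired when the flipbook loads

        // Fired again on each page turn:
        function onPageTurn(pageNum) {
            _gaq.push(['_trackPageview', '/flipbook/page-' + pageNum]);
        }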

    Read the article

  • OOW content for Pattern Matching....

    - by KLaker
    If you missed my sessions at OpenWorld then don't worry - all the content we used for pattern matching (presentation and hands-on lab) is now available for download. My presentation "SQL: The Best Development Language for Big Data?" is available for download from the OOW Content Catalog, see here: https://oracleus.activeevents.com/2013/connect/sessionDetail.ww?SESSION_ID=9101. For the hands-on lab ("Pattern Matching at the Speed of Thought with Oracle Database 12c") we used the Oracle-By-Example content. The OOW hands-on lab uses Oracle Database 12c Release 1 (12.1) and uses the MATCH_RECOGNIZE clause to perform some basic pattern matching examples in SQL. This lab is broken down into four main steps:

    1. Logically partition and order the data that is used in the MATCH_RECOGNIZE clause with its PARTITION BY and ORDER BY clauses.
    2. Define patterns of rows to seek using the PATTERN clause of the MATCH_RECOGNIZE clause. These patterns use regular expression syntax, a powerful and expressive feature, applied to the pattern variables you define.
    3. Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
    4. Define measures, which are expressions usable in the MEASURES clause of the SQL query.

    You can download the setup files to build the ticker schema and the student notes from the Oracle Learning Library. The direct link to the example on using pattern matching is here: http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:6781,2.
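
    To give a flavour of what the lab walks through, here is the classic V-shape query over the lab's ticker schema (a sketch in the style of the Oracle documentation example; the symbol, tstamp and price columns are from that schema):

        SELECT *
        FROM ticker
        MATCH_RECOGNIZE (
            PARTITION BY symbol                 -- step 1: partition and order the rows
            ORDER BY tstamp
            MEASURES STRT.tstamp       AS start_tstamp,   -- step 4: define measures
                     LAST(DOWN.tstamp) AS bottom_tstamp,
                     LAST(UP.tstamp)   AS end_tstamp
            ONE ROW PER MATCH
            AFTER MATCH SKIP TO LAST UP
            PATTERN (STRT DOWN+ UP+)            -- step 2: regular-expression pattern
            DEFINE                              -- step 3: map rows to pattern variables
                DOWN AS DOWN.price < PREV(DOWN.price),
                UP   AS UP.price   > PREV(UP.price)
        ) mr;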

    Read the article

  • Tackling thin content on an images gallery

    - by Ted Wilmont
    We run an image gallery as part of our site; however, we have over 8,000 images, and every image has a separate HTML page of its own to display the image caption, related images and comments from users of the site. This seems to be a problem, especially with the Google Panda update, because these pages are technically "thin content". What would be the best way to tackle this? We'd love some feedback and advice regarding this scenario. We have a few options we thought of already but can't decide:

    - We could noindex the separate image pages and lose any image search listings we have, in favour of removing these thin pages from the index.
    - We could 301 all of the individual image pages back to the image category listing, anchor each image (e.g. #img2122), and include all of the comments and descriptions on the category listing page itself.

    If we were simply to list all of the images and content on the category pages themselves, what's the best method? We could add all of the content in the anchor tags and use jQuery to display it in a box when a user clicks on an image, or we could use Ajax to retrieve the information. However, what's the best Ajax method for SEO? Any ideas, suggestions, tips or advice are greatly appreciated, and thank you in advance.
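
    For reference, the noindex in the first option is a per-page robots meta tag (a sketch; "follow" keeps the page's links crawlable even while the page itself is dropped from the index):

        <meta name="robots" content="noindex, follow">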

    Read the article

  • Leading Analyst Firm Positions Oracle in Leaders Quadrant for Web Content Management

    - by Christie Flanagan
    Gartner, Inc. has named Oracle a Leader in its latest “Magic Quadrant for Web Content Management.” Gartner’s Magic Quadrants position vendors within a particular quadrant based on their completeness of vision and their ability to execute on that vision. According to Gartner, “WCM plays an increasingly important role in business performance. It has become the central point of coordination for initiatives involving the enterprise's online presence, and these initiatives have become more sophisticated and more important to enterprises' business strategies. Thus, WCM is key for organizations wishing to execute a strategy of OCO (online channel optimization) that embraces areas such as customer experience management, e-commerce, digital marketing, multichannel marketing and website consolidation.” Gartner continued, “Leaders should drive market transformation. Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of OCO. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit. Leaders can: demonstrate enterprise deployments; offer integration with other business applications and content repositories; provide a vertical-process or horizontal-solution focus.” Oracle WebCenter, the engagement platform powering exceptional experiences for customers, employees and partners, connects people and information by bringing together the most complete portfolio of portal, Web experience management, content, social, and collaboration technologies into a single integrated product suite. Oracle WebCenter also provides the foundation for Oracle Fusion Middleware and Oracle Fusion Applications to deliver a next-generation user experience. To see the latest reports, webcasts and demonstrations about Oracle's web experience management solution, Oracle WebCenter Sites, please visit our Connected Customer Experience Resource Center.

    Read the article

  • JavaOne Content on Video

    - by Tori Wieldt
    JavaOne content is available on video in three sizes, depending on whether you want to have a sip, have a drink, or go to the proverbial firehose. Tall (Keynote Highlights): go to the JavaOne playlist on the YouTube Java channel for highlights of the JavaOne keynotes. Grande (Keynotes in Full): go to the Oracle Media Network JavaOne 2012 channel to view the keynotes in full (Community Keynote coming soon). Venti (All Sessions, BOFs and Tutorials): to view slides paired with audio of each session, go to the JavaOne content catalog (JavaOne homepage > Programs > Content Catalog) and select a session. If a video is available, you'll see "Media" in the right column. Look under "Presentation Download" to get the slides. Sessions are being made available as quickly as possible. "It's exciting to see Oracle take community stewardship so seriously," said Sharat Chander, Group Director for Java Technology Outreach. "Making all JavaOne sessions on video available online for free will help make the future of Java for everyone."

    Read the article

  • Add two extra divs to the side of the content

    - by Kreker
    Hi. I've read some posts about adding extra divs to both sides of the main content to act like sidebars. I want to add 2 extra divs for some graphics, with the right one sitting to the right of the actual sidebar. I took a screenshot to explain myself better. I've added the two extra divs before the page wrap and, working with CSS, positioned them to the left and the right, but there is a problem: if someone has a smaller resolution than mine, the divs start moving and mess up the layout. I want them to stay next to the content and the sidebar at every resolution. I've tried working with CSS positioning and percentages, but with no results. In the screenshot, the cyan rectangles are the extra divs, and the gray graphics are their contents. I don't care if users with smaller resolutions see the graphics go off-screen, but the "roots" must stay fixed to the sides of the main content ;) Hope I explained it well. This is the screenshot: http://cl.ly/0D2H1s0t3Y0p2A412L35/Schermata-2011-01-11-a-22.22.17.png Thanks for the help
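
    One common way to get this behaviour (a sketch; the ids and widths are assumptions): make the centered wrapper the positioning context and hang the decorative divs off its edges with negative offsets, so they track the content at any viewport width.

        #page-wrap {
            position: relative;   /* positioning context for the side graphics */
            width: 960px;
            margin: 0 auto;       /* keeps the content centered */
        }
        #left-art, #right-art {
            position: absolute;
            top: 0;
            width: 200px;
            height: 600px;
        }
        #left-art  { left: -200px; }   /* flush against the content's left edge */
        #right-art { right: -200px; }  /* flush against the sidebar's right edge */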

    Read the article

  • How to send Content-Disposition headers in apache for files?

    - by Rory McCann
    I have a directory of text files that I'm serving out with Apache 2. Normally when I (or any user) access the files, they open in the browser. I want to 'force'* the web browser to pop up a 'Save as' dialog box. I know this is possible to do with the Content-Disposition header (more info). Is there some way to turn that on for each file? Ideally I'd like something like this:

        <Directory textfiles>
            AutoAddContentDispositionHeaders On
        </Directory>

    And then Apache would set the correct Content-Disposition header, including using the same filename. Something like this might be possible with the Apache Header directive. Bonus points if it's included as standard in Apache on Debian. I could do a simple PHP wrapper script that takes a filename argument, makes the call to header(...) and then prints the file, but then I have to validate input, etc. - that's work I'm trying to avoid.

    * I know you can't actually force things when it comes to the web.
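
    Something very close to that hypothetical directive does exist via mod_headers (a sketch; the directory path and file pattern are assumptions, and on Debian the module is enabled with "a2enmod headers"). With no explicit filename parameter, browsers fall back to the URL's basename for the suggested filename:

        <Directory "/var/www/textfiles">
            <FilesMatch "\.txt$">
                Header set Content-Disposition "attachment"
            </FilesMatch>
        </Directory>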

    Read the article

  • Content Length and Transfer Encoding Chunked nginx, node-http-proxy

    - by rampr
    I have the following setup: node-http-proxy acts as a reverse proxy, forwarding all requests to nginx or socket.io as necessary. My problem is this: when I send an HTTP DELETE request from the browser, node-http-proxy adds a "Transfer-Encoding: chunked" header, because the request from the browser had no Content-Length. The request had no Content-Length because it had no body. Nginx doesn't like the chunked transfer-encoding header and throws a 411, asking for a Content-Length. The problem gets solved when I send dummy data as part of the DELETE request: then there is a Content-Length, node-http-proxy doesn't add the Transfer-Encoding header, and nginx is happy. I want to understand whether node-http-proxy is working as expected when it adds a Transfer-Encoding chunked header just because the Content-Length is missing on a request with no body.
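
    One workaround in the same spirit as the dummy-data fix, but without sending a body (a sketch against the 0.x-era node-http-proxy API; the host and ports are assumptions): declare an explicit zero Content-Length before proxying, so the upstream request is never chunked.

        var httpProxy = require('http-proxy');

        httpProxy.createServer(function (req, res, proxy) {
            // A bodyless DELETE arrives with neither a Content-Length nor a body;
            // declaring zero length stops the proxy from switching to chunked encoding.
            if (req.method === 'DELETE' && !req.headers['content-length']) {
                req.headers['content-length'] = '0';
            }
            proxy.proxyRequest(req, res, { host: 'localhost', port: 8080 });
        }).listen(8000);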

    Read the article
