Search Results

Search found 31630 results on 1266 pages for 'content management'.

Page 48/1266 | < Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • How to remove HTML code from search result page content

    - by Jack Torris
    I have a music website. There are 46 album pages and each page has a different player and files. I entered one of the album URLs into a search engine and found that Google is displaying the player code in the search result content. For example, enter this URL in Google and check the results. Each result displays a .mp3 file in the content section. I see this: This page contains a demo of and documentation for the new jPlayer Playlist add-on, ... mp3:"http://www.jplayer.org/audio/mp3/Miaow-01-Tempered-song.mp3", ... I don't want Google to show the player code and mp3 files in the search results. How can I hide the audio files and player code from the search engine? What would be the best solution for this?
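
    A couple of things worth trying -- as a sketch, not a guaranteed fix: give each album page a real meta description (so the search engine has something better than the inline player code to quote for the snippet), move the jPlayer playlist configuration out of the page into an external .js file so the mp3 URLs are no longer part of the page text, and mark the audio files themselves as non-indexable with an X-Robots-Tag header. The .htaccess fragment below assumes mod_headers is enabled; the file pattern is a placeholder.

        # Hypothetical .htaccess sketch -- requires mod_headers
        <FilesMatch "\.mp3$">
            Header set X-Robots-Tag "noindex"
        </FilesMatch>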

    Read the article

  • Filtering content from response body HTML (mod_security or other WAFs)

    - by Bingo Star
    We have Apache on Linux with mod_security as the Web App Firewall (WAF) layer. To prevent content injections, we have some rules that basically stop a page containing certain text patterns from showing up at all. For example, if an HTML page on the webserver contains slur words (because some webmaster may have copied/pasted text without proofreading), the Apache server throws a 406 error. Our requirement now is a little different: we would like to serve the page as a regular 200, but if such a pattern is matched, we want to strip out the offending content, not block the entire page. If we had a server-side technology in play we could easily code for this, but sadly this is for a website with thousands of static HTML pages. Another option might have been a cron job that finds and replaces the strings across the folders en masse, but we don't have access to the file system in this case (different department). We do have control over the WAF and Apache rules, if any. Any pointers or creative ideas?
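
    One avenue worth exploring, assuming mod_substitute is available on this Apache build and the phrase list stays small (the phrase below is a placeholder): mod_substitute rewrites the response body on its way out, so the page is still served as a 200 while the matched text is replaced instead of the whole page being blocked. How well it scales to a long word list, and how it interacts with the existing mod_security rules, would need testing.

        # Hypothetical sketch -- requires mod_substitute (shipped with Apache 2.2.7 and later)
        LoadModule substitute_module modules/mod_substitute.so

        <Location "/">
            AddOutputFilterByType SUBSTITUTE text/html
            # n = treat pattern as a fixed string, i = case-insensitive
            Substitute "s/some-offending-phrase/[removed]/ni"
        </Location>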

    Read the article

  • How to effectively use an overseas SEO team?

    - by Dan Gayle
    My company is currently in contract with a 20+ person team in the Philippines, previously used for comment linking and guest-blogging spun content articles. This is a practice that we're stopping, but we don't want to sever ties with the team because they work hard, they're really cheap, and they produce excellent accounting and reporting of their actions. What are ways that we can best put them to use as a link-generating or content-generating resource? Their English is fair, but not of high enough quality to use them for any direct content creation. Thanks

    Read the article

  • JavaOne San Francisco 2013 Content Catalog Live!

    - by Yolande Poirier
    There will be over 500 technical sessions, BOFs, tutorials, and hands-on labs offered. Note that "Securing Java" is a new track this year. The tracks are:
    - Client and Embedded Development with JavaFX
    - Core Java Platform
    - Edge Computing with Java in Embedded, Smart Card, and IoT Applications
    - Emerging Languages on the Java Virtual Machine
    - Securing Java
    - Java Development Tools and Techniques
    - Java EE Web Profile and Platform Technologies
    - Java Web Services and the Cloud
    In the Content Catalog you can search on tracks, session types, session categories, keywords, and tags. Or, you can search for your favorite speakers to see what they’re presenting this year. And, directly from the catalog, you can share sessions you’re interested in with friends and colleagues through a broad array of social media channels. Start checking out JavaOne content now to plan your week at the conference. Then, you’ll be ready to sign up for all of your sessions when the scheduling tool goes live.

    Read the article

  • How did craigspro license Craigslist content? [closed]

    - by Joshua Frank
    There's an app called craigspro that provides a much better interface to Craigslist on mobile devices. They claim that the app is Officially Licensed by Craigslist, but I thought Craigslist never licensed their content, and the only thing I can find on the subject in the terms of use is this: Any copying, aggregation, display, distribution, performance or derivative use of craigslist or any content posted on craigslist whether done directly or through intermediaries (including but not limited to by means of spiders, robots, crawlers, scrapers, framing, iframes or RSS feeds) is prohibited. As a limited exception, general purpose Internet search engines and noncommercial public archives will be entitled to access craigslist without individual written agreements executed with CL that specifically authorize an exception to this prohibition if ... Does anyone know how to get a "written agreement" with Craigslist, and roughly what their terms would be? Do they charge a fee, or just check that you're not evil? I'll try Craigslist directly next, but I'd like to get a sense of the landscape before stumbling in.

    Read the article

  • WordPress theme for a user-generated content website

    - by iamjonesy
    I'm looking for a WordPress theme that I can work from. I'm basically creating a website like the following two: http://www.damnyouautocorrect.com/ and http://icanhas.cheezburger.com/ - both are WordPress-based websites which I guess are custom-made themes. I'm looking for a theme that will let users enter content without being logged in. The submission just needs a title, a description, and the name of the author. The homepage will show one post with a "Next" button; clicking that will load the next post. I'd also like to add up/down voting. I'm just asking first before I start hacking away at a theme.
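
    Not a theme recommendation, but if it comes to hacking one together: anonymous front-end submission is usually a small form posting to a handler that creates a pending post. A rough sketch using standard WordPress functions; the field names and the author meta key are invented for the example, and a real handler would also want a nonce check.

        <?php
        // In functions.php or a small plugin: turn a front-end form post into a pending post.
        function handle_ugc_submission() {
            if (empty($_POST['ugc_title']) || empty($_POST['ugc_body'])) {
                return; // nothing submitted
            }
            $post_id = wp_insert_post(array(
                'post_title'   => sanitize_text_field($_POST['ugc_title']),
                'post_content' => wp_kses_post($_POST['ugc_body']),
                'post_status'  => 'pending', // hold for moderation before it goes live
            ));
            if ($post_id && !empty($_POST['ugc_author'])) {
                update_post_meta($post_id, 'ugc_author', sanitize_text_field($_POST['ugc_author']));
            }
        }
        add_action('init', 'handle_ugc_submission');

    Up/down voting would be a separate piece: a post meta counter bumped via AJAX, or one of the existing voting plugins.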

    Read the article

  • How to prevent unneeded content from loading on the page in Responsive Design

    - by Ícaro Leandro
    In Responsive Design, we hide elements on the page with @media queries and display: none in the CSS. OK, but in my system, browsers narrower than 800px must not only hide some content but avoid loading it at all. I mean: on desktops with more than 800px of screen the page loads fully; on mobile devices, or even on desktops with less than 800px, some content should not be loaded at all. I want to make the page load faster in those browsers. The system is built in PHP and has some JavaScript. Thanks...
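
    One rough server-side option, sometimes called a RESS approach (all names below are hypothetical): a tiny script stores the screen width in a cookie on the first view, and PHP then skips including the heavy partials whenever the cookie reports a narrow screen. The very first request still loads everything, so it complements rather than replaces client-side lazy loading.

        <?php
        // Hypothetical sketch. Client-side code sets the cookie once, e.g.:
        //   document.cookie = "viewport=" + screen.width + "; path=/";
        // After that, the server can skip heavy includes for narrow screens.
        $viewport = isset($_COOKIE['viewport']) ? (int) $_COOKIE['viewport'] : 0;

        if ($viewport >= 800) {
            include __DIR__ . '/partials/heavy-sidebar.php'; // desktop-only content
        }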

    Read the article

  • Session Mania: Content Catalog & Suggest-a-Session

    - by Justin Kestelyn
    As ably reported in the Oracle Technology Network blog, the Oracle Develop Conference's content catalog is now public (as are the catalogs for JavaOne and Oracle OpenWorld), meaning you can now explore technical sessions scheduled for the conf to your heart's content. "But something's missing", you may tell yourself. "Where is my favorite subject, the one I happen to also be an expert on?" Well, there's good news for you, too: The Suggest-A-Session project has returned. It works thus: Submit a session idea via Oracle Mix and ask your colleagues, Oracle Mix community, friends and anyone else you know to vote for your session. (You must be an Oracle Mix member to vote.) Voting is open through June 20. For the most part, the top-voted sessions will be selected for the Oracle Develop Conf (or Oracle OpenWorld) official agenda. See the FAQ for fine print. Apparently some people have already jumped into this loophole, including Oracle ACE Director Marco Gralike, who has "gone video" on us: Why wait? Suggest-a-session!

    Read the article

  • Pagination and duplicate content

    - by jazz090
    I have an archive page that lists the published articles. Because there were so many, I added a pagination script: for 127.0.0.1/archive/2/?p=x&pp=y, where p is the page number and pp is the number of articles to display per page. The pagination looks like this: Prev 1 2 3 4 ... 12 NEXT, with each item linking to p like <a href="?p=x">x</a>. I also have the items-per-page setter: 25 | 50 | 100 (<a href="?pp=y">y</a>). Now I have a PHP script that stores pp in a session variable. But I am worried about duplicate content (since a higher pp value includes the articles of the lower ones) and also about content not getting indexed because it's not in the pagination links. So in the example above, pages 5-11 will not be indexed. Any ideas on how to fix this?
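
    One common mitigation, sketched below with made-up URLs: give every page a canonical URL that drops the pp parameter and add rel="prev"/"next" hints, so the 25/50/100 variants collapse into a single paginated series and the deeper pages stay reachable through the prev/next chain even when they are not all listed in the visible pagination.

        <?php
        // Hypothetical sketch for the <head> of the archive template.
        $page  = isset($_GET['p']) ? max(1, (int) $_GET['p']) : 1;
        $total = 12; // total number of pages, however the archive computes it
        $base  = 'http://example.com/archive/2/';

        echo '<link rel="canonical" href="' . $base . '?p=' . $page . '">' . "\n";
        if ($page > 1) {
            echo '<link rel="prev" href="' . $base . '?p=' . ($page - 1) . '">' . "\n";
        }
        if ($page < $total) {
            echo '<link rel="next" href="' . $base . '?p=' . ($page + 1) . '">' . "\n";
        }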

    Read the article

  • Content, MetaData and Taxonomy 2 Overview of the Data Layer

    This article is cross-posted from my personal blog. In DotNetNuke version 5.3, we introduced the concept of a centralized Content store, together with the ability to apply Taxonomies (categories) to the content. We have extended this in DNN 5.4 by completing the MetaData API as well as adding Folksonomy (user tags). In this series of blogs I will explain how developers can take advantage of these new features in their own extensions. In the first blog in this series I covered the Taxonomy Manager...

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is by saying that if you would put each instance of a kind of data on its own web page, if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't and you can store it using somewhat simpler means that I will now describe. In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties:

        public class SettingsRecord {
            public virtual int Id { get; set; }
            public virtual string RuleType { get; set; }
            public virtual string Name { get; set; }
            public virtual string Criterion { get; set; }
            public virtual string Theme { get; set; }
            public virtual int Priority { get; set; }
            public virtual string Zone { get; set; }
            public virtual string Position { get; set; }
        }

    Each property has to be virtual for nHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described in a migration:

        public int Create() {
            SchemaBuilder.CreateTable("SettingsRecord", table => table
                .Column<int>("Id", column => column.PrimaryKey().Identity())
                .Column<string>("RuleType", column => column.NotNull().WithDefault(""))
                .Column<string>("Name", column => column.NotNull().WithDefault(""))
                .Column<string>("Criterion", column => column.NotNull().WithDefault(""))
                .Column<string>("Theme", column => column.NotNull().WithDefault(""))
                .Column<int>("Priority", column => column.NotNull().WithDefault(10))
                .Column<string>("Zone", column => column.NotNull().WithDefault(""))
                .Column<string>("Position", column => column.NotNull().WithDefault(""))
            );
            return 1;
        }

    When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class:

        private readonly IRepository<SettingsRecord> _repository;
        private readonly ISignals _signals;
        private readonly ICacheManager _cacheManager;

        public SettingsService(
            IRepository<SettingsRecord> repository,
            ISignals signals,
            ICacheManager cacheManager) {
            _repository = repository;
            _signals = signals;
            _cacheManager = cacheManager;
        }

    The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of Linq queries) as well as methods such as Delete and Create.
    Here's, for example, how I'm getting all the records in the table:

        _repository.Table.ToList()

    And here's how I'm deleting a record:

        _repository.Delete(_repository.Get(r => r.Id == id));

    And here's how I'm creating one:

        _repository.Create(new SettingsRecord {
            Name = name,
            RuleType = ruleType,
            Criterion = criterion,
            Theme = theme,
            Priority = priority,
            Zone = zone,
            Position = position
        });

    In summary, you create a record class and a migration, and you're in business: you can just manipulate the data through the repository that the framework exposes. You even get ambient transactions from the work context.

    Read the article

  • How to stop gold-plating and just be content to release working developments

    - by Andy Bowskill
    The development team that I'm a member of has recently adapted to work according to Agile practices. This has personally highlighted the fact that I can't stop myself gold-plating code (and documentation) and I consequently exceed original estimates, when I could've delivered solutions that meet the requirements much earlier. I think my ethic is bordering on the obsessive in that I become too attached to my code and am rarely content to release before I've refactored and perfected it to the nth degree. I am happy that I have realised this but how can I change my attitude/mentality to be content with my progress and release on-time instead?

    Read the article

  • Why is my content database so large?

    - by PeterBrunone
    If your SharePoint site collection hasn't grown, but your content database has, the most likely culprit is versioning. If a list -- or worse, a library -- has versioning enabled, the default is to keep every single one. That means that every time someone edits and checks in a document, its storage footprint increases by the size of the document (and probably a little more). The solution? It could be a bit painful, but you'll need to go back into each library and restrict the number of versions to keep (three is sufficient for most uses, but your needs may vary). I suggest keeping only major versions as well, since minor versions are really just stopping points on the way to a published document. Of course, if you have a real business need to keep all those versions around, then you'll want to look into an archiving solution that will take the old versions out of the content database but still make them available if necessary.

    Read the article

  • Recommendations for a network of student-related content

    - by Javier Marín
    I am running a network of websites with notes, homework, essays, etc., where users share their own content. I'm having real trouble with the latest Google updates (Penguin, Panda, etc.) because the content is mainly poor quality and covers the same topics. For that reason, I want to create more websites and have a better chance of appearing in the SERPs. My question is: does Google analyze related websites in order to exclude them from the results? I've thought about distributing the websites around the world, on different hosting, but I'm afraid that Google would link them through their Analytics, Webmaster Tools or AdSense accounts. Is that possible? What other recommendations do you have?

    Read the article

  • Will duplicate international (i18n) content hinder SEO rankings?

    - by Rhys
    Google clearly states that duplicate content, whether within a single domain or across multiple domains, is not advised. This is understood, but I am not sure of any exceptions for sites with region-specific content that is often replicated across locales. For example, a site's /en-us/about page could be identical to /en-uk/about, whereas most likely /en-ja/about is unique. Are GYM smart enough to understand that the initial URL depth is a locale specifier? Is there any robots.txt or header, etc., trickery that I should include to outline the site's international structure?
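
    The usual mechanism for this, assuming the /en-us/, /en-uk/ and /en-ja/ prefixes really are locale specifiers, is rel="alternate" hreflang annotations (in each page's <head> or in the XML sitemap), which tell Google the pages are intentional regional variants rather than duplicates. Note that the hreflang code for the UK is en-GB even if the URL path says en-uk; the URLs below are placeholders.

        <link rel="alternate" hreflang="en-US" href="http://example.com/en-us/about" />
        <link rel="alternate" hreflang="en-GB" href="http://example.com/en-uk/about" />
        <link rel="alternate" hreflang="ja" href="http://example.com/en-ja/about" />
        <link rel="alternate" hreflang="x-default" href="http://example.com/en-us/about" />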

    Read the article

  • Rehosting content from another server

    - by Lana_M
    We have a set of static pages that will augment a customer's existing site. The pages will not reside on the customer's servers, for logistical reasons and because we need to maintain control of the content. The plan is for the customer to set up a mod_rewrite rule that will funnel certain types of URLs to a single server-side handler script that will grab the appropriate file from a CDN and just output its content. This illustrates the approach: <?php echo file_get_contents(str_replace($customer_host, $cdn_host, 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'])); ?> Can anyone think of pitfalls or offer up a different approach? Is there some way to circumvent a script altogether?
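
    On the "circumvent a script altogether" question: if the customer's Apache has mod_proxy available, the mod_rewrite rule they already plan to add can proxy the request straight to the CDN instead of invoking a PHP handler. A hedged sketch for the server config (path prefix and CDN hostname are placeholders):

        # Hypothetical sketch -- requires mod_rewrite plus mod_proxy / mod_proxy_http
        RewriteEngine On
        RewriteRule ^/static-pages/(.*)$ http://cdn.example.com/static-pages/$1 [P,L]

    The trade-off is that every request still flows through the customer's server to the CDN, so caching headers (or mod_cache) matter more than they would with a plain redirect.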

    Read the article

  • Google Analytics Content Experiments for non-simultaneous tests

    - by mnort9
    I really like how Google Analytics displays the results of content experiments. However, it seems the tool only works for simultaneous tests. I'd like to use the tool without implementing the page-variation code on my site. For example, I want to test copy on an ecommerce category page. The original page variation would be the current page for the past 2,500 visits. After making the copy changes, the new variation would be for the next 2,500 visits. I realize I can simply record the metrics before and after each variation, but I'd like to take advantage of Google's presentation of the experiment. Is it possible to use Content Experiments in this way?

    Read the article

  • XNA Content.Load dependency

    - by Richard
    Quick question: the project I'm building for test purposes is working fine, but I have dependencies flying around everywhere due to the XNA framework. In Update I have the GameTime passed everywhere... this is okay. In Draw I have the GameTime & SpriteBatch passed everywhere... this is okay. My issue is with the Content.Load of textures/sounds/fonts. I have them as public variables, i.e. Texture1 = Content.Load(Of Texture2D)("Texture1"). I'm passing a 'Game1' pointer into the constructor of every new class being instantiated, to gain access to these variables. Am I missing an OOP trick to prevent me having to pass a pointer to 'Game1' to every new class?

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g. This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood. You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information. From here, you can select the tracing sections to include. Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer. Usually I'm trying to find out some information regarding a search, database query, or user information. Besides debugging, it's also very helpful for performance tuning.
    One of the nice tricks with the tracing is that it honors the wildcard (*) character. So you can put in 'schema*' and gather all of the schema-related tracing. And you can notice that if you select 'all' and update, it changes to just a *.
    To view the tracing in real-time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity. But if you've got a lot of tracing going on, it would be better to go after the trace log file itself. By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see them named 'idccs_<managed server name>_current.log. You may also find previous trace logs that have rolled over. In this case they will be identified by a date/time stamp in the name. By default, the server will rotate the logs after they reach 1MB in size. And it will keep the most recent 10 logs before they roll off and get deleted. If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.
    If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        # Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section. Restart the server and it should take on the new logging settings.

    Read the article

  • JavaOne 2012 Content Catalog is Available

    - by arungupta
    The JavaOne 2012 Content Catalog is now available! The complete list of technical sessions, birds-of-a-feather sessions, hands-on labs, tutorials and other details is available. We are still working on the overall schedule and it will be shared in the coming days. The conference will be held in San Francisco from September 30th to October 4th, 2012. You can also connect using the usual social media channels: Facebook, Twitter, blogs, LinkedIn, and Mix. Oracle OpenWorld, running in parallel to JavaOne, also has its content catalog available.

    Read the article

  • Microformats, Reviews and Duplicate Content

    - by Nicholas
    Let's say I have a site that sells widgets, and the URL structure is like so: /[type-of-widget]/[sub-type]/[widget-name]/ So, a URL for a widget might be: /screwdrivers/philips-screwdrivers/acme-big-screwdriver/ We show reviews on the widget page, and use the appropriate microformat data so Google knows it's a review, etc. Now, what if I want to show random reviews in the "sub-type" and "type-of-widget" landing pages? Will Google ding me for duplicate content, or is it smart enough to know (based on microformat data/etc.) that this is not duplicate content?

    Read the article

  • Dynamically change page content based on URL parameter?

    - by volume one
    The title of my question seems simple, but here is an example of what I want to do: http://www.mayoclinic.com/health/infant-jaundice/DS00107 What happens on that page is that whenever you click a link to go to a section (e.g. "Symptoms") in the article on "Infant Jaundice", it adds a URL parameter like this: http://www.mayoclinic.com/health/infant-jaundice/DS00107/DSECTION=symptoms As the DSECTION parameter changes, you get different content on the same page DS00107. The content changes, as well as the <meta> keywords. Can someone please tell me how this is achieved? I was thinking it was an if/else situation programmed into the page itself to display different properties depending on the URL parameter. Any help or suggestions are very much appreciated, and thank you for reading my question.
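
    Your guess is essentially right: it is typically one page template keyed off the URL parameter. A minimal PHP sketch of the idea (the section names, keyword strings and file paths are invented for the example; the real site may implement it very differently):

        <?php
        // Hypothetical sketch: one template, different content and <meta> keywords per section.
        $section = isset($_GET['DSECTION']) ? $_GET['DSECTION'] : 'overview';

        $sections = array(
            'overview' => array('keywords' => 'infant jaundice, overview', 'file' => 'overview.html'),
            'symptoms' => array('keywords' => 'infant jaundice, symptoms', 'file' => 'symptoms.html'),
            'causes'   => array('keywords' => 'infant jaundice, causes',   'file' => 'causes.html'),
        );
        if (!isset($sections[$section])) {
            $section = 'overview'; // unknown parameter falls back to the default section
        }
        ?>
        <meta name="keywords" content="<?php echo htmlspecialchars($sections[$section]['keywords']); ?>">
        <?php include __DIR__ . '/content/' . $sections[$section]['file']; ?>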

    Read the article
