Search Results

Search found 28650 results on 1146 pages for 'content length'.

  • The ethics of aggregating other peoples' content

    - by AaronBertrand
    Content theft
    Most of the recent - what I'll call - "discussions" about content aggregators have revolved around content theft. I'm not going to flip-flop on that topic: when you use someone else's work, you ask first, and if given the okay, you cite it. You can't take my blog article and post it on your site, implying that it is your own. Likewise, you shouldn't be taking a Books Online topic, changing three words and adding an intro sentence, and making that a blog post. I think this stuff is pretty...(read more)

    Read the article

  • Remove URLs to unindex blog content from Google, then copy blog content to new blog [closed]

    - by sam
    Possible Duplicate: migrating PR / rankings from one site to another. I've been writing a blog for the past year or so, with about 300 published articles; the blog has been running under a subdomain, blog.mysite.com. We are now looking to change the URL of our site, so the blog is going to have to be ported over to a subdomain on the new site. We would really like to keep the backlog/archive of all the articles we have written, but we don't want to be penalized for having duplicate content. Could we just remove/unindex the URLs from Google in Webmaster Tools, then export the blog and import it into our new blog? Would Google still see this as duplicate content, or, because I've removed the URLs, do they no longer have a copy of it? Thanks

    Read the article

  • SEO for country- and language-specific content

    - by kecebongsoft
    Currently I am creating a website which has a common topic for an article, but the content is going to be different for each country, and each piece of content will also be provided in several languages. This mechanism applies to most parts of the website. For example, I have an article about tax. This article has to be different for each country, for example China, and the tax content for China should be written in both Chinese AND English (for non-Chinese speakers). What is the best URL pattern to handle this? What I've been thinking is using a subfolder (/country-code/language-code/), such as: www.example.com/cn/cn/tax www.example.com/cn/en/tax Or using a country-code top-level domain (ccTLD), such as: www.example.cn/cn/tax www.example.cn/en/tax Or a subdomain, such as: cn.example.com/cn/tax cn.example.com/en/tax I think I would not prefer the last option, since I might need subdomains for other purposes, which leaves only the subfolder and ccTLD options. I've read some articles saying that a ccTLD is good for localized (language-specific) content, but in my case my ccTLD would also have English content (for non-local speakers) which is specific only to that particular country (the purpose of this is also to let people from other countries easily find it through Google). What is the best pattern to pick, and why?
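
    Whichever pattern is chosen, hreflang annotations are the standard way to tell search engines which country/language variant each URL serves. A minimal sketch for the subfolder option, using the URLs from the question (the hreflang values and the x-default fallback URL are my assumptions):

    <!-- zh-CN: Chinese content for China; en-CN: English content aimed at China -->
    <link rel="alternate" hreflang="zh-CN" href="http://www.example.com/cn/cn/tax" />
    <link rel="alternate" hreflang="en-CN" href="http://www.example.com/cn/en/tax" />
    <!-- fallback for visitors with no matching country/language variant -->
    <link rel="alternate" hreflang="x-default" href="http://www.example.com/en/tax" />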

    Read the article

  • Oracle Enterprise Content Management 11gR1 Patch Set 3 Released

    - by michelle.huff
    We're pleased to announce an updated patch set for Oracle Enterprise Content Management 11gR1: Patch Set 3 (PS3, 11.1.1.4.0). PS3 supports additional platforms and applications, and adds several new features to the products. Highlights include:

    - Content Server (repository for UCM, URM & I/PM): new security capabilities and file store provider updates.
    - Desktop Integration Suite: Windows 7 64-bit and Office 2010 (32- & 64-bit) support, and a new "Recent Content Items" menu.
    - Universal Content Management (UCM): Site Studio Manager for Site Studio for External Applications, new template management options, and the ability to run Site Studio & Site Studio for External Applications 11g components on Content Server 10gR3.
    - Imaging and Process Management (I/PM): now certified with Oracle Business Process Management (BPM) 11g, Oracle Single Sign On (OSSO) 10g and Oracle Access Manager (OAM) 10g; export search results to Microsoft Excel.
    - ECM Adapter for PeopleSoft: support for UCM 11g Managed Attachments (support for 10g released earlier in 2010) and certification with PeopleTools 8.50.
    - Information Rights Management (IRM): desktop support for Microsoft Office 2010, Adobe Reader X and Microsoft SharePoint 2010.

    Customer Webcast: We'll be covering this new release in our Quarterly Customer Update Webcast scheduled for this week, January 19/20, 2011. Register today.

    More Information: Downloads are now available on Oracle Technology Network (OTN); the release will be available via eDelivery soon. Read the updated ECM documentation for 11.1.1.4.0, review the ECM 11.1.1.4.0 Upgrade & Patch Guides, and see the Release Notes.

    Read the article

  • Crawling a Content Folio

    - by Kyle Hatlestad
    Content Folios in WebCenter Content allow you to assemble, track, and access a logical group of documents and/or links.  It allows you to manage them as just a list of items (simple folio) or organized as a hierarchy (advanced folio).  The built-in UI in content server allows you to work with these folios, but publishing them or consuming them externally can be a bit of a challenge.   [Read More]

    Read the article

  • UPK Pre-built Content Now Available for Additional Product Lines

    - by Karen Rihs
    UPK pre-built content development efforts are always underway and growing, and now include the recent release of new UPK pre-built content modules for three additional product lines!

    - Oracle Utilities for Meter Data Management 2.0.1: Administrative Setup, User Tasks, VEE and Usage Rules, Working with Measurement Data
    - Fusion 11g Release 1: Functional Setup Manager, General Ledger, Global Human Resources, Project Portfolio Management, Self Service Procurement
    - Oracle Crystal Ball 11.1.2: Oracle Crystal Ball

    For the most recent list of modules currently available for each product line, visit the UPK Resource Library on Oracle.com. For more information on how your organization can take advantage of UPK pre-built content, see our previous blog, The Value of UPK Pre-Built Content. - Karen Rihs, UPK Outbound Product Management

    Read the article

  • Does Google AdWords care about duplicate content?

    - by Yarin
    Our site offers several families of products, all of which have a common set of configurations. For simplicity's sake, we'll say we offer products A, B and C, each with configurations 1, 2 and 3: Products: A, B, C Configurations: 1, 2, 3 We want to create landing page / ad group combinations that reflect each possible pairing of product and configuration. Each product and each configuration has its own copy, so each landing page would include both the product content and the configuration content: ourproducts.com/A-1 (contains copy for A and 1) ourproducts.com/A-2 ourproducts.com/A-3 ourproducts.com/B-1 ... etc... As you can see, this will lead to duplicate content across our product pages, though in different combinations. My question is, does this matter from an AdWords point of view? Will there be any negative consequence to repeating portions of content this way?

    Read the article

  • Getting user generated content with no titles to rank

    - by hugo
    We are creating a site that allows users to generate content. The user is provided with a text field only (no title), similar to Twitter, Facebook, and Google+. Each piece of content created by the users will have a dedicated page/URL. Since the page has no title, I was wondering how search engines will index and display our pages. If the content is shared on other social networks, what will those results look like if there is no title for the Open Graph or Twitter tags?
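
    One hedged sketch of a workaround (an illustration, not a tested recipe): derive a pseudo-title server-side from the post's opening words and emit it in the standard Open Graph / Twitter Card markup, so shared links still carry a headline:

    <!-- the title is assumed to be generated from the post's first dozen words -->
    <meta property="og:title" content="First dozen words of the post..." />
    <meta property="og:description" content="The full post text, truncated." />
    <meta name="twitter:card" content="summary" />
    <meta name="twitter:title" content="First dozen words of the post..." />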

    Read the article

  • Resize content for "frame" on .aspx page which draws product content from suppliers' page

    - by zenbike
    I have a retail store site, no online sales, which displays the webpage of our supplier in a "frame" in order to have the most accurate and up to date information for our customers. (example) My issue is that the size of the page it is pulling in doesn't fit in the frame. It looks pretty poor, and part of the content is obscured. Is there a way to scale the content drawn in to the size of the frame? The same site also has an intermittent issue with the Flash banner loading. When it doesn't load, the layout of the header on the page is awful. Any ideas there will also be appreciated.
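
    On the scaling question, a CSS transform can visually shrink the framed page to fit. A rough sketch (the sizes and the supplier URL are placeholders; this only changes how the frame is painted, it does not reflow the supplier's page):

    <style>
      .frame-wrap { width: 600px; height: 450px; overflow: hidden; }
      .frame-wrap iframe {
        width: 800px;           /* natural width of the supplier page */
        height: 600px;
        border: 0;
        transform: scale(0.75); /* 600 / 800 */
        transform-origin: 0 0;  /* scale from the top-left corner */
      }
    </style>
    <div class="frame-wrap">
      <iframe src="http://supplier.example.com/products"></iframe>
    </div>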

    Read the article

  • SEO - Hidden content before main site content

    - by 0pt1m1z3
    I have two hidden divs before my main site content: one with the login form and another with the signup form. I then have login and signup buttons within the page that use jQuery to show or hide these divs. I like the effect this setup offers, dropping down from the top of the page and pushing the rest of the content down. However, recently I have been getting serious about SEO and I am wondering if these divs have been affecting my SERP rankings. Basically, every non-logged-in page (everything bots see) has the same two display:none; divs at the top of the document flow. Is it bad? Should I re-engineer these forms and the way they are displayed?

    Read the article

  • Plan your SharePoint 2010 Content Type Hub carefully

    - by Wayne
    Currently setting up a new environment on SharePoint 2010 (which was made available for download yesterday, if anyone missed that :-). One of the new features of SharePoint 2010 is the Content Type Hub (part of the Managed Metadata Service Application), which is a hub for all Content Types that other Site Collections can subscribe to: you only need to manage your content types in one location. Setting up the Content Type Hub is not that difficult, but you must do it very carefully to avoid a lot of rework and troubleshooting. Here is a short tutorial with a few tips and tricks to make it easy for you to get started.

    Determine the location of the Content Type Hub
    First of all you need to decide in which Site Collection to place your Content Type Hub: the root site collection or a specific one. I think using a specific Site Collection that only acts as a Content Type Hub is the best way; there is no established best practice as of now. So I create a new Site Collection, at for instance http://server/sites/CTH/. The top-level site of this site collection should be, for instance, a Team Site. You cannot use Blank Site by default, which would have been the best option IMHO, since that site does not have the Taxonomy feature stapled upon it (check the TaxonomyFeatureStapler feature for which site templates can be used).

    Configure the Managed Metadata Service Application
    Next you need to create your Managed Metadata Service Application or configure the existing one: Central Administration > Application Management > Manage Service Applications. Select the Managed Metadata service application and click Properties if you have already created it. At the bottom of the dialog window, when you are creating the service application or editing its properties, is a section to fill in the Content Type Hub. In this text box fill in the URL of the Content Type Hub. It is essential that you have decided where your Content Type Hub will reside, since once this is set you cannot change it; the only way to change it is to rebuild the whole Managed Metadata Service Application! Also make sure that you enter the URL correctly. I copied and pasted the URL once and got /default.aspx in the URL, which funked the whole service up. Make sure that you only use the URL of the hub's Site Collection. Now you have to set things up so that other Site Collections can consume the content types from the hub. This is done by selecting the connection for the Managed Metadata Service Application and clicking Properties. A new dialog window opens, and there you need to check "Consumes content types from the Content Type Gallery at nnnn". Now you are free to syndicate your Content Types from the hub.

    Publish Content Types
    To publish a Content Type from the hub, go to Site Settings > Content Types and select the content type you would like to publish, then select Manage publishing for this content type. This takes you to a page from which you can Publish, Unpublish or Republish the content type. Once the content type is published, it can take up to an hour for the subscribing Site Collections to get it; this is controlled by the Content Type Subscriber job, which is scheduled to run once an hour. To speed up your publishing, go to Central Administration > Monitoring > Review Job Definitions > Content Type Subscriber and click Run Now, and your content type will very soon be available for use.

    Published Content Type status
    You can check the status of content type publishing in your destination site collections by selecting Site Settings > Content Type Publishing. From here you can force a refresh of all subscribed content types, see which ones are subscribed, and check the publishing error log. This error log is very useful for detecting errors during publishing. For instance, if you use features such as ratings, metadata, or document IDs in your content type hub and your destination site collection does not have those features available, it will be reported here.
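
    For scripted setups, the hub URL can also be set from the SharePoint 2010 Management Shell. A hedged sketch (the service application name and hub URL are placeholders, and you should verify the cmdlet and its parameters in your own farm before relying on it):

    # Point the Managed Metadata service application at the Content Type Hub
    $mms = Get-SPServiceApplication | Where-Object { $_.TypeName -like "Managed Metadata*" }
    Set-SPMetadataServiceApplication -Identity $mms -HubUri "http://server/sites/CTH"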

    Read the article

  • Javascript array length incorrect on array of objects

    - by Serenti
    Could someone explain this (strange) behavior? Why is the length in the first example 3 and not 2, and most importantly, why is the length in the second example 0? As long as the keys are numerical, length works; when they are not, length is 0. How can I get the correct length from the second example? Thank you.

    a = [];
    a["1"] = {"string1":"string","string2":"string"};
    a["2"] = {"string1":"string","string2":"string"};
    alert(a.length); // returns 3

    b = [];
    b["key1"] = {"string1":"string","string2":"string"};
    b["key2"] = {"string1":"string","string2":"string"};
    alert(b.length); // returns 0
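
    For reference, a short sketch of what is going on and one way to get the expected count (Object.keys is ES5; on older browsers a for-in loop with hasOwnProperty does the same job):

    // Array length only tracks numeric indices: after a["2"] = ..., the highest
    // index is 2, so a.length is 2 + 1 = 3 (index 0 simply stays empty).
    // String keys like "key1" become plain object properties, which length
    // ignores; that is why b.length stays 0.
    var b = {}; // a plain object is the better fit for string keys
    b["key1"] = {"string1": "string", "string2": "string"};
    b["key2"] = {"string1": "string", "string2": "string"};
    alert(Object.keys(b).length); // 2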

    Read the article

  • iPhone App runtime error: "Error: Embedded profile header length is greater than data length.\n"

    - by Rob Lourens
    This error

    Thu Apr 8 20:24:15 iPod-touch appname[947] <Error>: Error: Embedded profile header length is greater than data length.\n
    Thu Apr 8 20:24:16 iPod-touch appname[947] <Error>: Error: Embedded profile header length is greater than data length.\n

    is logged when a UIImageView is loaded. The view isn't huge, but it has a few other UIImageViews as subviews, and it might be related to memory; I can't find anything on this message. Any ideas?
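
    This message is often associated with an image whose embedded ICC color profile is corrupt or truncated, rather than with memory (a hedged reading, not a confirmed diagnosis). One way to test that theory is to strip the profiles from the app's images and re-run, e.g. with ImageMagick:

    # Strip color profiles and other metadata from every PNG in Images/.
    # mogrify rewrites files in place, so run it on copies first.
    mogrify -strip Images/*.png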

    Read the article

  • JavaScript loaded external content SEO

    - by user005569871
    I wonder what the best way is to have JavaScript-loaded content indexed by search engines. I know that search engines don't execute JavaScript, but I am thinking of this more as progressive enhancement. I am creating a responsive website, and on the home page I will have some sections about most-visited and recommended products that I plan to load depending on the device detected. These products will be in sliders with thumbnail images and the names of the products. If mobile is detected, the slider content will not load, and a link to the external page will be shown. I know that external content will be indexed via links to those resources. Where will users be directed from search in this case: to the external page or the home page? Will it be bad for SEO if I show only product names on the front page so they can be indexed, and hide them with CSS? What is the best way to index that content and possibly direct users from search to the home page? Also, I've seen the AJAX crawling scheme, but I would like to avoid that if there is any better way.

    Read the article

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, and so we decided to store it outside the codebase. Usually the reason is one of the following: It's something that non-technical people want to edit. One example is copywriting for a website - the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted later to the database. It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be features currently available to the customers at different tiers of pricing. Instead of hardcoding these, we read them from database. The described solution is flexible but there are some reasons why I don't like it. Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system. Developers who run the code locally see the system in a significantly different state compared to how it runs on production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier - if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but it seems like a wrong approach. Is it? Any ideas how best to solve these problems? Is there a better approach for handling the content that I'm overlooking?
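
    For the record, the database-backed pattern described above can stay quite small. A minimal sketch (the model, cache key and timeout are made-up illustrations, not the code we actually run):

    from django.core.cache import cache
    from django.db import models

    class ContentBlock(models.Model):
        """An editable piece of site copy, keyed by a stable name."""
        key = models.CharField(max_length=100, unique=True)
        value = models.TextField()

    def get_content(key, default=""):
        """Read a content block through the cache to soften the per-request query cost."""
        cache_key = "content:%s" % key
        value = cache.get(cache_key)
        if value is None:
            try:
                value = ContentBlock.objects.get(key=key).value
            except ContentBlock.DoesNotExist:
                value = default
            cache.set(cache_key, value, 300)  # cache for five minutes
        return value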

    Read the article

  • Kill a tree, save your website? Content strategy in action, part III

    - by Roger Hart
    A lot has been written about how driving content strategy from within an organisation is hard. And that's true. Red Gate is pretty receptive to new ideas, so although I've not had a total walk in the park, it's been a hike with charming scenery. But I'm one of the lucky ones. Lots of people are involved in content, and depending on your organisation some of those people might be the kind who'll gleefully call themselves "stakeholders". People holding a stake generally want to stick it through something's heart and bury it at a crossroads. Winning them over is not always easy. (Richard Ingram has made a nice visual summary of how this can feel: Content strategy Snakes & ladders (pdf).) So yes, a lot of content strategy advocates are having a hard time. And sure, we've got a nice opportunity to get together and have a hug and a cry, but in the interim we could use a hand. What to do? My preferred approach is, I'll confess, brutal. I'd like nothing so much as to take a scorched-earth approach to our website. Burn it, salt the ground, and build the new one right: focusing on clearly delineated business and user content goals, and instrumented so we can tell if we're doing it right. I'm never getting buy-in for that, but a boy can dream. So how about just getting buy-in for some small, tenable improvements? Easier, but still non-trivial. I sat down for a chat with our marketing and design guys. It seemed like a good place to start, even if they weren't up for my "Ctrl-A + Delete" solution. We talked through some of this stuff, and we pretty much agreed that our content is a bit more broken than we'd ideally like. But to get everybody on board, the problems needed visibility.

    Doing a visual content inventory
    Print out the internet. Make a Wall Of Content. Seriously. If you've already done a content inventory, you know your architecture, and you know the scale of the problem. But it's quite likely that very few other people do. So make it big and visual. I'm going to carbon hell, but it seems to be working. This morning, I printed out a tiny, tiny part of our website: the non-support content pertaining to SQL Compare. I made big, visual, A3 blowups of each page, and covered a wall with them. A page per web page, spread over something like 6m x 2m, with metrics, right in front of people. Even if nobody reads it (and they are doing) the sheer scale is shocking. 53 pages, all told. Some are redundant, some outdated, some trivial, a few fantastic, and frighteningly many that are great ideas delivered not-quite-right. You have to stand quite far away to get it all in your field of vision. For a lot of today, a whole bunch of folks have been gawping in amazement, talking each other through it, peering at the details, and generally getting excited about content. Developers, sales guys, our CEO, the marketing folks - they're engaged. Will it last? I make no promises. But this sort of wave of interest is vital to getting a content strategy project kicked off. While the content strategist is a saucer-eyed orphan in the cupboard under the stairs, they're not getting a whole lot done. Of course, just printing the site won't necessarily cut it. You have to know your content, and be able to talk about it. Ideally, you'll also have page view and time-on-page metrics. One of the most powerful things you can do is, when people are staring at your wall of content, ask them what they think half of it is for. Pretty soon, you've made a case for content strategy.
We're also going to get folks to mark it up - cover it with notes and post-its, let us know how they feel about our content. I'll be blogging about how that goes, but it's exciting. Different business functions have different needs from content, so the more exposure the content gets, and the more feedback, the more you know about those needs. Fingers crossed for awesome.

    Read the article

  • xt_TCPMSS: bad length messages

    - by Matic
    I'm getting loads of messages like these in my dmesg/syslog:

    Jun 23 10:24:20 awakening kernel: [ 1691.596823] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:21 awakening kernel: [ 1692.663362] xt_TCPMSS: bad length (1448 bytes)
    Jun 23 10:24:21 awakening kernel: [ 1692.663495] xt_TCPMSS: bad length (1448 bytes)
    Jun 23 10:24:21 awakening kernel: [ 1692.663588] xt_TCPMSS: bad length (1448 bytes)
    Jun 23 10:24:21 awakening kernel: [ 1692.663671] xt_TCPMSS: bad length (1440 bytes)
    Jun 23 10:24:26 awakening kernel: [ 1697.062914] xt_TCPMSS: bad length (474 bytes)
    Jun 23 10:24:26 awakening kernel: [ 1697.305525] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:27 awakening kernel: [ 1698.946633] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:36 awakening kernel: [ 1707.481198] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:37 awakening kernel: [ 1708.723526] xt_TCPMSS: bad length (805 bytes)
    Jun 23 10:24:38 awakening kernel: [ 1709.599461] xt_TCPMSS: bad length (805 bytes)
    Jun 23 10:24:41 awakening kernel: [ 1712.211052] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:41 awakening kernel: [ 1712.260588] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:41 awakening kernel: [ 1712.976058] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:43 awakening kernel: [ 1714.225209] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:43 awakening kernel: [ 1714.914961] xt_TCPMSS: bad length (1492 bytes)
    Jun 23 10:24:55 awakening kernel: [ 1726.192696] xt_TCPMSS: bad length (1480 bytes)
    Jun 23 10:24:55 awakening kernel: [ 1726.192825] xt_TCPMSS: bad length (1480 bytes)

    This Linux machine is, among other things, used as an internet gateway. The connection is over PPPoE, and I have the following line in my iptables script:

    $IPT -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu # PPPoE fix

    The frequency of these messages increased 10x when I upgraded from Debian lenny with 2.6.27 to squeeze with 2.6.32 a few days ago. Why am I seeing these messages and how can I fix them?
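
    That rule is the standard PPPoE clamp. One commonly suggested variant (an assumption on my part, not a verified cure for the log noise) is to clamp to an explicit MSS instead of relying on path-MTU discovery; with a 1492-byte PPPoE MTU the MSS would be 1492 - 40 = 1452:

    # clamp outgoing SYNs to a fixed MSS; ppp0 is assumed to be the PPPoE link
    iptables -t mangle -A FORWARD -o ppp0 -p tcp \
      --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1452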

    Read the article

  • Preventing indexing duplicate content by search engines

    - by umesh awasthi
    I am in the process of migrating my old domain (www.oldurl.com) to a new domain (www.newurl.com). Almost all the content, the URL structure, and the database are the same, except for a few URLs; the only real difference will be in the domain name. I have made entries in Apache's .htaccess file to set up 301 redirects, and I have currently blocked all search engines from crawling my new domain via the robots.txt file. I am not sure how to handle the duplicate content issue when I make the new domain live. Should I block search engines from indexing/crawling my old domain? I am new to this field and not sure if this is actually a duplicate content issue at all.
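
    For reference, a typical shape for the .htaccess rules described above (a sketch using the placeholder domains from the question):

    # send every request on the old domain to the same path on the new one
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?oldurl\.com$ [NC]
    RewriteRule ^(.*)$ http://www.newurl.com/$1 [R=301,L]

    With site-wide 301s like this in place, search engines generally treat the change as a move rather than as duplication, which is why blocking the crawl of either domain is usually unnecessary.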

    Read the article

  • Duplicate Content Problem due to plugin

    - by Amar Ryder
    I am running a website on WordPress where I have installed the Transposh plugin on my site 'example'. Unfortunately, despite having English as the default language (and therefore available at example.com/xxx), Google is indexing example.com/en/xxx, so I now have a duplicate content problem. I want to remove this plugin and its links from Google so that my content is clean, without duplicate content pages. Do you have a solution for doing this safely? My own thought is to remove the plugin from the website; though it will create 404 errors for the links Google has, I can add redirect code in .htaccess until Google drops those "example.com/en/xxx" not-found links. If you know any other healthy way to handle this, please help me!
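
    The redirect code mentioned could look roughly like this (a sketch; it assumes the /en/... URLs map one-to-one onto the default-language URLs):

    # 301 the indexed /en/... duplicates back to the canonical English pages
    RedirectMatch 301 ^/en/(.*)$ http://example.com/$1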

    Read the article

  • The entire content of my Wordpress page has disappeared

    - by John Catterfeld
    I have a blog installed on my site using WordPress. Last week I upgraded WordPress from 2.6 to 3.0.4 (I had to do this manually). All went well, or so I thought, but I have just noticed that the content of an existing page has vanished. The page URL still works, but all content has disappeared: doctype, html tags, body tags, everything. Please note, this is specific to pages; posts are still displaying fine. I have since created a brand new page, which does not display the content either. Things I have tried include:

    - Switching to a freshly installed theme
    - Deactivating all plugins
    - Setting the problem page to draft, and back again
    - Deleting the .htaccess file

    I suspect it's a database problem and have contacted my hosting company, who have said the only thing they can do is restore the DB from a backup, but that I should consider it a last resort. Does anyone have any further ideas what to try?
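
    A blank page in WordPress is often a fatal PHP error with display suppressed. One hedged first step (a standard debugging toggle, not a guaranteed fix) is to switch on debug output in wp-config.php and reload the page:

    // in wp-config.php: print errors instead of a blank page (dev use only)
    define('WP_DEBUG', true);
    define('WP_DEBUG_DISPLAY', true);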

    Read the article

  • Separate urls for a set of pages sharing 80% duplicate content

    - by user131003
    Issue: Currently my site has one particular page which has country-specific data, so I have URLs like: mysite.com/sale-united-states mysite.com/sale-united-kingdom mysite.com/sale-sweden etc. All these pages have 80-90% common content and 10-20% country-specific content, and currently all of them canonically point to mysite.com/sale-united-states. The problem is that when someone searches for "sale Sweden", Google shows the mysite.com/sale-united-states page (the canonical), which does not feel correct, as it shows the US page instead of the Sweden one. Now I'm thinking of not using a canonical URL, so that the country-specific URLs appear in Google search results. But I'm not sure how the 80% duplicate content is going to affect SEO. What should be the recommended approach for this situation? A friend of mine suggested a "separate subdomain per country" approach, but it seems overkill for one page.

    Read the article

  • Start Your Session Search: Content Catalog is Live

    - by RichSchwerin
    Search through nearly 300 exhibitors and 1,600 sessions across 80 tracks, plus speakers and demos. With Oracle OpenWorld 2011 just 15 weeks away, the Content Catalog is now available online. That means you can browse through almost 300 exhibitors and nearly 1,600 content sessions across more than 80 different tracks, along with scores of demos. Even better, you can perform keyword searches for the subjects that interest you most, from Active Data Guard to ZFS (and everything in between). But wait, there's more... the Speaker Catalog--a veritable Oracle Who's Who--is also live online. You can search through hundreds of speakers, with names, titles, companies, and the sessions they're presenting.

    Save $500: Register Today
    Now that you've seen all the great content and speakers lined up for Oracle OpenWorld 2011, join us in San Francisco, October 2-6. Register by the Early Bird deadline of July 29th and save $500.

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable... and while they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It's everywhere except where it truly needs to be: readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: "How do they find anything?" and then the even more alarming, "So much for information security!" It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now, the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes, and many more like them, rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well-paid employees spending more than one day per week not doing their regular job while they search for, or re-create, what already exists. Back in the doctor's office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records months earlier. After filling out the form, I was later introduced to my new doctor, who then interviewed me and asked me the exact same questions I had answered on the form. I understand that there is value in the interview process, and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
    We won't solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new-patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could bring significant improvement. As you evaluate how content and process flow through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result, over the years, is disparate applications with separate information repositories, and in many cases these contain duplicate information or, worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient, and better at managing information. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three, and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned, and what specific areas you are interested in.

    Read the article
