Search Results

Search found 22304 results on 893 pages for 'content filtering'.


  • The entire content of my Wordpress page has disappeared

    - by John Catterfeld
    I have a blog installed on my site using WordPress. Last week I upgraded WordPress from 2.6 to 3.0.4 (I had to do this manually). All went well, or so I thought, but I have just noticed that the content of an existing page has vanished. The page URL still works, but all content has disappeared - doctype, html tags, body tags, everything. Please note, this is specific to pages - posts are still displaying fine. I have since created a brand new page which does not display the content either. Things I have tried include: switching to a freshly installed theme, deactivating all plugins, setting the problem page to draft and back again, and deleting the .htaccess file. I suspect it's a database problem and have contacted my hosting company, who have said the only thing they can do is restore the DB from a backup, but that I should consider it a last resort. Does anyone have any further ideas what to try?
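    One way to check the database-problem theory before resorting to a restore is to look at the page rows directly. This is a hedged sketch, assuming a MySQL database and the default wp_ table prefix (adjust to match wp-config.php):

        -- Hypothetical check: do the page rows still contain their content?
        SELECT ID, post_title, post_status, CHAR_LENGTH(post_content) AS content_length
        FROM wp_posts
        WHERE post_type = 'page'
        ORDER BY post_modified DESC;

    If content_length is non-zero for the affected page, the data is still intact and the problem is more likely in the theme or the serving layer than in the database itself.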

    Read the article

  • Start Your Session Search: Content Catalog is Live

    - by RichSchwerin
    Search through nearly 300 exhibitors and 1,600 sessions across 80 tracks, plus speakers and demos. With Oracle OpenWorld 2011 just 15 weeks away, the Content Catalog is now available online. That means you can browse through almost 300 exhibitors and nearly 1,600 content sessions across more than 80 different tracks, along with scores of demos. Even better, you can perform keyword searches for subjects that interest you most, from Active Data Guard to ZFS (and everything in between). But wait, there's more... Speaker Catalog--a veritable Oracle Who's Who--is also live online. You can search through hundreds of speakers, with names, titles, companies, and which sessions they're presenting. Save $500: Register Today. Now that you've seen all the great content and speakers lined up for Oracle OpenWorld 2011, join us in San Francisco, October 2-6. Register by the Early Bird deadline of July 29th and save $500.

    Read the article

  • Separate urls for a set of pages sharing 80% duplicate content

    - by user131003
    Issue: Currently my site has one particular page which has country-specific data. So I've got URLs like: mysite.com/sale-united-states, mysite.com/sale-united-kingdom, mysite.com/sale-sweden, etc. All these pages have 80-90% common content and 10-20% country-specific content. Currently all these pages canonically point to mysite.com/sale-united-states. The problem is that when someone searches for "sale Sweden", Google shows the mysite.com/sale-united-states page, which does not feel right as it shows the US page instead of the Swedish one. Now I'm thinking of dropping the canonical URL so that the country-specific URLs show up in Google search. But I'm not sure how 80% duplicate content is going to affect SEO. What should be the recommended approach for this situation? A friend of mine suggested a "separate subdomain per country" approach, but it seems overkill for one page.
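    One pattern worth considering for near-duplicate country pages, sketched here with the URLs from the question and assuming all pages are in English, is to drop the single canonical and instead cross-annotate the pages with hreflang alternates so each country version can rank in its own market:

        <!-- Placed in the <head> of every country page; each page lists the full set, including itself -->
        <link rel="alternate" hreflang="en-us" href="http://mysite.com/sale-united-states" />
        <link rel="alternate" hreflang="en-gb" href="http://mysite.com/sale-united-kingdom" />
        <link rel="alternate" hreflang="en-se" href="http://mysite.com/sale-sweden" />

    This keeps the shared content from being treated as competing duplicates while still surfacing the right country page in each local search, without needing a subdomain per country.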

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable: they might need an upgrade in some cases, but they are all there and handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for, or re-create, what already exists. Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was later introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could bring significant improvement. As you evaluate how content and process flow through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information, or worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient, and better able to manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • HTML Manifest for Content Folios

    - by Kyle Hatlestad
    I recently worked on a project to create a custom content folio renderer in WebCenter Content. It needed to output the native files in the folio along with a manifest file in HTML format which would list the contents of the folio along with any designated metadata and a relative link to the file within the download.  This way a person could hand someone the folio download and it would be a self-contained package with all of the content and a single file to display the information on the contents.  The default Zip rendition of the folio will output the web-viewable version of the file with an HDA formatted file for each one. And unless you are fluent in HDA or have a tool to read them, they are difficult to consume. [Read More]

    Read the article

  • Dojo Datagrid Filtering Issue

    - by Zoom Pat
    I am having a hard time filtering a DataGrid. Please help! This is how I draw the grid:

        var jsonStore = new dojo.data.ItemFileWriteStore({data: columnValues});
        gridInfo = { store: jsonStore, queryOptions: {ignoreCase: true}, structure: layout };
        grid = new dojox.grid.DataGrid(gridInfo, "gridNode");
        grid.startup();

    Now if I try something like this, it works fine and gives me the rows whose AGE_FROM column value is equal to 63: grid.filter({AGE_FROM: 63}); But I need all kinds of filtering, not just 'equal to'. So how do I obtain all the rows where AGE_FROM is greater than 63, less than 63, less than or equal to 63, and so on? Something like grid.filter({AGE_FROM: '<63'}); does not work. One other way I was thinking of was to use the following: filteredStore = dojox.jsonPath.query(filterData, "[?(@.AGE_FROM = 63)]"); and then draw the grid with the filteredStore, but that does not work for a != operator either. Once I figure out a good way to filter the grid, I also need to find a way to filter on dates. I am trying to find a good example of filtering a DataGrid, but most of the examples only filter based on the 'equal to' criterion. Any help is highly appreciated.
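    For comparison-style filters, one workaround (a sketch, not the only approach, reusing the grid and store variables from the question) is to fetch all items, filter them with a plain JavaScript predicate, and hand the grid a rebuilt store:

        // Hypothetical helper: keep only rows whose AGE_FROM satisfies the predicate.
        // Keep a reference to the original store if you need to clear the filter later.
        function filterGridByAge(targetGrid, predicate) {
            targetGrid.store.fetch({
                query: {},
                onComplete: function (items) {
                    var rows = dojo.map(
                        dojo.filter(items, function (item) {
                            return predicate(Number(targetGrid.store.getValue(item, "AGE_FROM")));
                        }),
                        function (item) {
                            // Copy every attribute so no columns are lost in the new store.
                            var row = {};
                            dojo.forEach(targetGrid.store.getAttributes(item), function (attr) {
                                row[attr] = targetGrid.store.getValue(item, attr);
                            });
                            return row;
                        });
                    targetGrid.setStore(new dojo.data.ItemFileWriteStore({ data: { items: rows } }));
                }
            });
        }

        // Usage: rows with AGE_FROM strictly less than 63
        filterGridByAge(grid, function (v) { return v < 63; });
        // Usage: rows with AGE_FROM not equal to 63
        filterGridByAge(grid, function (v) { return v !== 63; });

    The same predicate idea extends to dates: parse the attribute into a Date and compare the two Date values inside the predicate instead of comparing numbers.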

    Read the article

  • How do I stop XNA/Visual Studio from rebuilding my content project every time I build?

    - by Phil Quinn
    My group and I are working on a game in XNA 4.0 with Visual Studio 2010/2012. The main solution has 6 projects: 2 XNA game projects (1 executable/ 1 class library), 1 WPF executable for the level editor, 2 standard class libraries, and a content project. Originally, the editor and engine XNA game projects had a content reference to separate content projects. Recently, I consolidated the content projects into one to simplify asset additions. Since pushing these changes to our git repo, certain members of my group have been experiencing weird build issues. Every time they run the project, they have to re-build all of the assets. This happens regardless of whether any changes were made, even if they just run the project directly after building. I've taken a few steps to figure out why this is happening. Below is the MSBuild output set on Normal verbosity. The seemingly important part is at 4, with the line 4> Rebuilding all content because build settings have changed 1>------ Build started: Project: Engine.Core, Configuration: Debug x86 ------ 1>Build started 11/29/2012 3:24:24 AM. 1>ResolveAssemblyReferences: 1> A TargetFramework profile exclusion list will be generated. 1>EmbedXnaFrameworkRuntimeProfile: 1>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 1>GenerateTargetFrameworkMonikerAttribute: 1>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 1>CoreCompile: 1>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 1>XnaWriteCacheFile: 1>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 1>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 1> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 1>_CopyAppConfigFile: 1>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files. 1>CopyFilesToOutputDirectory: 1> Engine.Core -> <solution-dir>\src\Engine.Core\bin\x86\Debug\TimeSink.Engine.Core.dll 1> 1>Build succeeded. 1> 1>Time Elapsed 00:00:00.13 2>------ Build started: Project: TimeSink.Entities, Configuration: Debug x86 ------ 2>Build started 11/29/2012 3:24:25 AM. 2>ResolveAssemblyReferences: 2> A TargetFramework profile exclusion list will be generated. 2>EmbedXnaFrameworkRuntimeProfile: 2>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 2>GenerateTargetFrameworkMonikerAttribute: 2>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 2>CoreCompile: 2>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 2>XnaWriteCacheFile: 2>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 2>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 2> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 2>CopyFilesToOutputDirectory: 2> TimeSink.Entities -> <solution-dir>\src\TimeSink.Entities\bin\x86\Debug\TimeSink.Entities.dll 2> 2>Build succeeded. 
2> 2>Time Elapsed 00:00:00.11 3>------ Build started: Project: Editor (Editor\Editor), Configuration: Debug x86 ------ 4>------ Build started: Project: Engine.Game, Configuration: Debug x86 ------ 3>Build started 11/29/2012 3:24:25 AM. 3>CoreCompile: 3> All content is already up to date 3>ResolveAssemblyReferences: 3> A TargetFramework profile exclusion list will be generated. 3>EmbedXnaFrameworkRuntimeProfile: 3>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 3>GenerateTargetFrameworkMonikerAttribute: 3>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 3>CoreCompile: 3>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 3>XnaWriteCacheFile: 3>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 3>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 3> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 3>_CopyOutOfDateNestedContentItemsToOutputDirectory: 3>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files. 3>CopyFilesToOutputDirectory: 3> Editor -> <solution-dir>\src\Editor\Editor\bin\x86\Debug\Editor.dll 3> 3>Build succeeded. 3> 3>Time Elapsed 00:00:00.39 4>Build started 11/29/2012 3:24:25 AM. 4>CoreCompile: 4> Rebuilding all content because build settings have changed 4> Building Textures\circle.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb 4> Importing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb 4> Building Textures\giroux.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb 4> Importing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb 4> Building Textures\Body_Neutral.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb 4> Importing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb 4> Building font.spritefont -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb 4> Importing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.FontDescriptionImporter 4> Processing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.Processors.FontDescriptionProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb 4>ResolveAssemblyReferences: 4> A TargetFramework profile exclusion list will be generated. 
4>EmbedXnaFrameworkRuntimeProfile: 4>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 4>GenerateTargetFrameworkMonikerAttribute: 4>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 4>CoreCompile: 4>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 4>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 4> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 4>_CopyOutOfDateNestedContentItemsToOutputDirectory: 4>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files. 4>_CopyAppConfigFile: 4>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files. 4>CopyFilesToOutputDirectory: 4> Engine.Game -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Engine.Game.exe 4>IncrementalClean: 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\circle.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\giroux.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Body_Neutral.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\font.xnb". 4> 4>Build succeeded. 4> 4>Time Elapsed 00:00:01.72 ========== Build: 4 succeeded, 0 failed, 1 up-to-date, 0 skipped ========== I can't think of how build settings could change between consecutive executions. Like I said, this only happens for half our group. One member is on a 32-bit Windows 7 Prof bootcamp partition on a Mac. Everyone else, including those who don't have the issue, are running straight 64-bit Windows 7 Prof. Both have tried using VS 2010 and VS 2012. Any insight would be greatly appreciated. Also, I can post more details upon request if this isn't thorough enough.

    Read the article

  • UPK Pre-Built Content Update

    - by Karen Rihs
    UPK pre-built content development efforts are always underway and growing. Over the last few months, the following new, upgraded, and revised modules became available:

    NEW CONTENT RELEASES
    - E-Business Suite 12.1: Install Base, Process Manufacturing, Process Quality Fundamentals for EBS
    - Fusion 11g Release 1: Receivables, Assets, Purchasing, Distributed Order Orchestration, Payables, Functional Setup Manager, Project Portfolio Management, Self Service Procurement
    - JDE E1 9.0: Accounts Payable 9.0 with 9.1 Tools, Fundamentals 9.0 with 9.1 Tools, General Ledger 9.0 with 9.1 Tools, Accounts Receivable 9.0 with 9.1 Tools, Procurement and Subcontract Management 9.0 with 9.1 Tools
    - Oracle Utilities Customer Care and Billing 2.3.1: Administrative Setup, User Tasks
    - Primavera: Primavera Contract Management 14, Primavera P6 Enterprise Project Portfolio Management 8.2

    UPK CONTENT UPGRADES
    - Agile CNM 1.2: Customer Needs Management
    - E-Business Suite 12.1: Project Foundation
    - JDE E1 9.1: Fixed Assets Accounting, General Ledger, Fundamentals, Inventory Management, Sales Order Management
    - PeopleSoft 9.1: Reporting Tools for PeopleTools 8.5.2

    UPK CONTENT REVISIONS
    - Oracle Utilities for Meter Data Management 2.0.1: Administrative Setup, User Tasks, VEE and Usage Rules, Working with Measurement Data
    - PeopleSoft 9.0 and 9.1: Enterprise Learning Management, Reporting Tools for HCM (previously Reporting Tools for HRMS)
    - PeopleSoft 9.1: Expenses, General Ledger, Inventory, Contracts, Grants, Strategic Sourcing

    For a list of modules currently available for each product line, visit the UPK Resource Library on Oracle.com.
    For more information on how your organization can take advantage of UPK pre-built content, see our previous blog, The Value of UPK Pre-Built Content.
    - Karen Rihs, UPK Outbound Product Management

    Read the article

  • Webpage loading with wrong content-type after setting up CloudFlare

    - by Daniel Little
    I recently migrated my blog to the Ghost service, and I've also set up an alias DNS record with CloudFlare. While showing the blog to a colleague, I discovered that one of the posts wasn't loading properly and would instead prompt to be downloaded with an application/octet-stream content-type. I can view all the pages without any issues, and I believe we're both on the same network as well. Has anyone seen a wrong content type like application/octet-stream when using CloudFlare, or does anyone know what I can do to correct this?

    Read the article

  • Configuring the iPlanet as web tier for Oracle WebCenter Content (UCM)

    - by Adao Junior
    If you are looking to configure the iPlanet as a web server/proxy to use with Oracle WebCenter Content, you probably won't find specific documentation for it, or will only find some old, complex notes related to the old 10gR3 release. This post will help you out with a few simple steps. That's the diagram of the test scenario, considering that you will deploy to production in a cluster environment. First you need the software; for our scenario you will need: - Oracle iPlanet Web Server 7.0.15+ (Installed) - Oracle WebCenter Content 11gR1 PS5 (Installed) - Oracle WebLogic Web Server Plugins 11g (1.1) - Supported JDK (Using Oracle Java JDK 7u4 for the test) - Certified Client OS - Certified Server OS (Using Oracle Solaris 11 for the test) - Certified Database (Using Oracle Database 11.2.0.3 for the test) Then the configuration: - Download the latest plugin: http://www.oracle.com/technetwork/middleware/ias/downloads/wls-plugins-096117.html - Extract the WLSPlugin11g-iPlanet7.0 into some folder, like <iPlanet_Home>/plugins/wls11 - Include the plugin reference in the magnus.conf: If Unix (Solaris or Linux), include the line: Init fn="load-modules" shlib="/apps/oracle/WebServer7/plugins/wls11/lib/mod_wl.so" If Windows, include the line: Init fn="load-modules" shlib="D:\\oracle\\WebServer7\\plugins\\wls11\\lib\\mod_wl.dll" - Include the proxy reference in the obj.conf of each instance: <Object name="weblogic" ppath="*/cs/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/_dav/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/_ocsh/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/adfAuthentication/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object> If you are using a single-node setup, change the Service fn=…. line to something like: Service fn="wl-proxy" WebLogicHost=<wcc-server> WebLogicPort=16200 With these configurations, you should have the WebCenter Content UI working through the iPlanet; test it at [http://<web-server>/cs/]. With the UI working, the last step is to configure WebDAV: - Go to the iPlanet Admin Console (usually https://<web-server>:8989) - Go to Configurations >> [instance] >> Virtual Servers >> [Virtual Server] >> WebDAV - Click New - Populate the URI with /cs/idcplg/webdav - Select “Anyone (No Authentication)”; WebCenter Content will take care of the security. This will allow you to use the WebDAV feature and the Desktop Integration Suite, including double-byte characters. Other iPlanet tuning could be done as well; I can cover that in a future post about iPlanet. Cross-posted on the ContentrA.com Blog. Related posts: - Using a Web Proxy Server with WebCenter Family

    Read the article

  • Java webapp: adding a content-disposition header to force browsers "save as" behavior

    - by WizardOfOdds
    Webapps that wish to force a resource to be downloaded (rather than displayed) in a browser can use the Content-Disposition header like this: Content-Disposition: attachment; filename=FILENAME. Even though it's only defined in RFC2183 and not part of HTTP 1.1/RFC2616, it works in most web browsers as wanted. So from the client side, everything is good enough. However, on the server side, in my case, I've got a Java webapp and I don't know how I'm supposed to set that header, especially in the following case... I'll have a file (say called "bigfile") hosted on an Amazon S3 instance (my S3 bucket shall be accessible using a partial address like: files.mycompany.com/), so users will be able to access this file at files.mycompany.com/bigfile. Now is there a way to craft a servlet (or a .jsp) so that the Content-Disposition header is always added when the user wants to download that file? What would the code look like, and what are the gotchas, if any?
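    Not an authoritative answer, but a minimal servlet sketch of the idea: map a download URL to a servlet that streams the S3-hosted object and adds the header. The bucket address and file name below are the ones from the question; the content type and buffer size are assumptions.

        // Hypothetical servlet: GET requests stream the S3 object back with a
        // Content-Disposition header so browsers prompt to save instead of displaying it.
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.URL;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ForceDownloadServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String fileName = "bigfile";  // could also be parsed from req.getPathInfo()
                URL source = new URL("http://files.mycompany.com/" + fileName);

                resp.setContentType("application/octet-stream");
                resp.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");

                InputStream in = source.openStream();
                OutputStream out = resp.getOutputStream();
                try {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                } finally {
                    in.close();
                }
            }
        }

    The main gotcha is that this routes every download through the webapp; an alternative is to store a Content-Disposition value as metadata on the S3 object itself, so S3 serves the header directly without the servlet in the path.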

    Read the article

  • Content Catalog for Oracle OpenWorld is Ready

    - by Rick Ramsey
    American Major League Baseball umpire Jim Joyce made one of the worst calls in baseball history when he ruled Jason Donald safe at first in Wednesday's game between the Detroit Tigers and the Cleveland Indians. The New York Times tells the story well. It was the 9th inning. There were two outs. And Detroit Tigers pitcher Armando Galarraga had pitched a perfect game. Instead of becoming the 21st pitcher in Major League Baseball history to pitch a perfect game, Galarraga became the 10th pitcher in Major League Baseball history to lose a perfect game with two outs in the ninth inning. More insight from the New York Times here. You can avoid a similar mistake and its attendant death threats, hate mail, and self-loathing by studying the Content Catalog just released for the Oracle OpenWorld, JavaOne, and Oracle Develop conferences being held in San Francisco September 19-23. The Content Catalog displays all the available content related to the event, the venue, and the stream or track you're interested in. Additional filters are available to narrow down your results even more. It's simple to use and a big help. Give it a try. It'll spare you the fate of Jim Joyce. - Rick

    Read the article

  • Content Catalog Live!

    - by marius.ciortea
    The Oracle OpenWorld, JavaOne and Oracle Develop 2010 content catalog is live. You can peruse most of the almost 2,000 sessions available this year at OpenWorld, JavaOne and Oracle Develop, including session titles, abstracts, track info, and confirmed speakers. You can find the latest on JDK 7, deep dives on the JVM, REST, JavaFX, JSF, Enterprise Java, Seam, OSGi, HTTP, Swing, GWT, Groovy, JRuby, Unit Testing, Metro, Lift, Comet, jclouds, Hudson, Scala, [insert technology here], etc. To access the Content Catalog, just look under Tools on the right side of this page. You can tag content in the catalog so you--and others who do what you do, or think the way you think--can easily find this year's don't-miss sessions. Take a few minutes to look around, and start planning your most productive/informative/valuable JavaOne ever! Schedule Builder, where you can sign up for sessions, will be up in July.

    Read the article

  • Cost effective way to provide static media content

    - by james
    I'd like to be able to deliver around 50MB of static content, either in about 30 individual files up to 10MB or grouped into 3 compressed files, around 5k to 20k times a day. Ideally I'd like to put some sort of very basic security around providing the data to ensure that a request is from the expected source, but if tossing the security for a big reduction in price is possible then it's an option. Does anyone have any suggestions other than what I've found: Google AppEngine is $0.12/GB & I believe has a file size limit of 10MB so I'd have to break the data up a bit. So a rough calculation would seem to be that this would cost me about $30 to $120 a day. Or I've seen something like what seems to be just public static content delivery with no type of logic capabilities like Usenet.nl at what I think calculates to about $0.025/GB which would cost me about $6 to $25 a day. Any idea if I'm going about these calculations right & if there might be a better option for just static content on a decently high volume delivery? Again some basic security would be great but if cost is greatly reduced without it then I'm up for that.
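    For what it's worth, the arithmetic behind those daily figures can be sanity-checked quickly; a sketch follows (the per-GB prices are the ones quoted above, not current quotes from any provider):

        # Rough daily-cost check for the numbers quoted above (assumed prices).
        payload_gb = 50 / 1024.0                 # ~50 MB per delivery, expressed in GB
        for downloads_per_day in (5000, 20000):
            daily_gb = payload_gb * downloads_per_day
            print("%d/day -> %.0f GB/day, ~$%.0f at $0.12/GB, ~$%.0f at $0.025/GB"
                  % (downloads_per_day, daily_gb, daily_gb * 0.12, daily_gb * 0.025))

    That works out to roughly 250-1000 GB transferred per day, which matches the $30-$120 and $6-$25 daily ranges estimated above.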

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and associated caption on my site appears in several galleries. For instance, a photo of a goldfinch that was taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions to deal with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species(e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?
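    If you do go the robots.txt route despite Google's guidelines, a minimal sketch (assuming the flat .php gallery URLs from the question, with the per-species galleries left crawlable) would be:

        # Hypothetical robots.txt: keep only the per-species galleries crawlable
        User-agent: *
        Disallow: /finches.php
        Disallow: /New_Mexico_2008.php

    Listing every non-species gallery by hand gets tedious, so this works best if the group and trip galleries can be moved under a common path (e.g. /trips/) that a single Disallow rule covers.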

    Read the article

  • Oracle Enterprise Manager Ops Center : Using Operational Profiles to Install Packages and other Content

    - by LeonShaner
    Oracle Enterprise Manager Ops Center provides numerous ways to deploy content, such as through OS Update Profiles, or as part of an OS Provisioning plan or combinations of those and other "Install Software" capabilities of Deployment Plans.  This short "how-to" blog will highlight an alternative way to deploy content using Operational Profiles. Usually we think of Operational Profiles as a way to execute a simple "one-time" script to perform a basic system administration function, which can optionally be based on user input; however, Operational Profiles can be much more powerful than that.  There is often more to performing an action than merely running a script -- sometimes configuration files, packages, binaries, and other scripts, etc. are needed to perform the action, and sometimes the user would like to leave such content on the system for later use. For shell scripts and other content written to be generic enough to work on any flavor of UNIX, converting the same scripts and configuration files into Solaris 10 SVR4 package, Solaris 11 IPS package, and/or a Linux RPM's might be seen as three times the work, for little appreciable gain.   That is where using an Operational Profile to deploy simple scripts and other generic content can be very helpful.  The approach is so powerful, that pretty much any kind of content can be deployed using an Operational Profile, provided the files involved are not overly large, and it is not necessary to convert the content into UNIX variant-specific formats. The basic formula for deploying content with an Operational Profile is as follows: Begin with a traditional script header, which is a UNIX shell script that will be responsible for decoding and extracting content, copying files into the right places, and executing any other scripts and commands needed to install and configure that content. Include steps to make the script platform-aware, to do the right thing for a given UNIX variant, or a "sorry" message if the operator has somehow tried to run the Operational Profile on a system where the script is not designed to run.  Ops Center can constrain execution by target type, so such checks at this level are an added safeguard, but also useful with the generic target type of "Operating System" where the admin wants the script to "do the right thing," whatever the UNIX variant. Include helpful output to show script progress, and any other informational messages that can help the admin determine what has gone wrong in the case of a problem in script execution.  Such messages will be shown in the job execution log. Include necessary "clean up" steps for normal and error exit conditions Set non-zero exit codes when appropriate -- a non-zero exit code will cause an Operational Profile job to be marked failed, which is the admin's cue to look into the job details for diagnostic messages in the output from the script. That first bullet deserves some explanation.  If Operational Profiles are usually simple "one-time" scripts and binary content is not allowed, then how does the actual content, packages, binaries, and other scripts get delivered along with the script?  More specifically, how does one include such content without needing to first create some kind of traditional package?   All that is required is to simply encode the content and append it to the end of the Operational Profile.  The header portion of the Operational Profile will need to contain the commands to decode the embedded content that has been appended to the bottom of the script.  
The header code can do whatever else is needed, and finally clean up any intermediate files that were created during the decoding and extraction of the content. One way to encode binary and other content for inclusion in a script is to use the "uuencode" utility to convert the content into simple base64 ASCII text -- a form that is suitable to be appended to an Operational Profile.   The behavior of the "uudecode" utility is such that it will skip over any parts of the input that do not fit the uuencoded "begin" and "end" clauses.  For that reason, your header script will be skipped over, and uudecode will find your embedded content, that you will uuencode and paste at the end of the Operational Profile.  You can have as many "begin" / "end" clauses as you need -- just separate each embedded file by an empty line between "begin" and "end" clauses. Example:  Install SUNWsneep and set the system serial number Script:  deploySUNWsneep.sh ( <- right-click / save to download) Highlights: #!/bin/sh # Required variables: OC_SERIAL="$OC_SERIAL" # The user-supplied serial number for the asset ... Above is a good practice, showing right up front what kind of input the Operational Profile will require.   The right-hand side where $OC_SERIAL appears in this example will be filled in by Ops Center based on the user input at deployment time. The script goes on to restrict the use of the program to the intended OS type (Solaris 10 or older, in this example, but other content might be suitable for Solaris 11, or Linux -- it depends on the content and the script that will handle it). A temporary working directory is created, and then we have the command that decodes the embedded content from "self" which in scripting terms is $0 (a variable that expands to the name of the currently executing script): # Pass myself through uudecode, which will extract content to the current dir uudecode $0 At that point, whatever content was appended in uuencoded form at the end of the script has been written out to the current directory.  In this example that yields a file, SUNWsneep.7.0.zip, which the rest of the script proceeds to unzip, and pkgadd, followed by running "/opt/SUNWsneep/bin/sneep -s $OC_SERIAL" which is the command that stores the system serial for future use by other programs such as Explorer.   Don't get hung up on the example having used a pkgadd command.  The content started as a zip file and it could have been a tar.gz, or any other file.  This approach simply decodes the file.  The header portion of the script has to make sense of the file and do the right thing (e.g. it's up to you). The script goes on to clean up after itself, whether or not the above was successful.  Errors are echo'd by the script and a non-zero exit code is set where appropriate. Second to last, we have: # just in case, exit explicitly, so that uuencoded content will not cause error OPCleanUP exit # The rest of the script is ignored, except by uudecode # # UUencoded content follows # # e.g. for each file needed, #  $ uuencode -m {source} {source} > {target}.uu5 # then paste the {target}.uu5 files below # they will be extracted into the workding dir at $TDIR # The commentary above also describes how to encode the content. Finally we have the uuencoded content: begin-base64 444 SUNWsneep.7.0.zip UEsDBBQAAAAIAPsRy0Di3vnukAAAAMcAAAAKABUAcmVhZG1lLnR4dFVUCQADOqnVT7up ... 
VXgAAFBLBQYAAAAAAgACAJEAAADTNwEAAAA= ==== That last line of "====" is the base64 uuencode equivalent of a blank line, followed by "end" and as mentioned you can have as many begin/end clauses as you need.  Just separate each embedded file by a blank line after each ==== and before each begin-base64. Deploying the example Operational Profile looks like this (where I have pasted the system serial number into the required field): The job succeeded, but here is an example of the kind of diagnostic messages that the example script produces, and how Ops Center displays them in the job details: This same general approach could be used to deploy Explorer, and other useful utilities and scripts. Please let us know what you think?  Until next time...\Leon-- Leon Shaner | Senior IT/Product ArchitectSystems Management | Ops Center Engineering @ Oracle The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to Oracle Enterprise Manager  web page or  follow us at :  Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • ASP.Net MVC2 (RTM) breaks response filtering - "Filtering is not allowed"

    - by womp
    I've just done a test run of upgrading a project to ASP.Net MVC 2 (RTM) in anticipation of the full official .Net 4.0 release coming later this month. Our application is using a minimizer for our CSS and javascript. To do so, it is making use of the HttpResponse.Filter property to set a custom filter. With the upgrade, the setter for this property is throwing an HttpException saying "Filtering is not allowed." Looking at the HttpResponse.Filter property in Reflector shows this: set { if (!this.UsingHttpWriter) { throw new HttpException(SR.GetString("Filtering_not_allowed")); } ... private bool UsingHttpWriter { get { return ((this._httpWriter != null) && (this._writer == this._httpWriter)); } } Clearly something has changed in the way the HttpResponse is writing to the output stream in MVC2. Does anyone know what the change is, or at least a workaround for this? EDIT: This seems pretty radical. Some further investigation shows that ASP.Net MVC 2 RTM is using a System.Web.Mvc.ViewPage.SwitchWriter as the Output property of an HttpResponse, whereas MVC 1 was using a plain old HttpWriter. That explains why the exception is being thrown. But that doesn't explain why they've chosen to completely break this functionality. This thread seems to indicate that this is just temporary... but this makes me pretty nervous... this is the RTM after all. Any further comments on this are appreciated.

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Web, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help The Microsoft Development Network Community Center ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it" Anne said. 
There is less emphasis on formal titles. Anne mentions that her own title "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot explore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. 
It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • Ingress filtering in Linux traffic control: Redirect traffic to IFB device

    - by Dani Camps
    I have an openwrt router and I want to shape incoming traffic in order to classify all the traffic addressed to a certain IP address in my home network as low priority. For that purpose I want to redirect all traffic incoming to the eth1 interface, the one connected to the DSL modem, to an IFB device where I will do the shaping. These are the details of my system: Linux OpenWrt 2.6.32.27 #7 Fri Jul 15 02:43:34 CEST 2011 mips GNU/Linux Here is the script I am using where the last instruction is failing: # Variable definition ETH=eth1 IFB=ifb1 IP_LP="192.168.1.22/32" DL_RATE="900kbps" HP_RATE="890kbps" LP_RATE="10kbps" TC="tc" # Configuring the ifbX interface insmod ifb insmod sch_htb insmod sch_ingress ifconfig $IFB up # Adding the HTB scheduler to the ingress interface $TC qdisc add dev $IFB root handle 1: htb default 11 # Set the maximum bandwidth that each priority class can get, and the maximum borrowing they can do $TC class add dev $IFB parent 1:1 classid 1:10 htb rate $LP_RATE ceil $DL_RATE $TC class add dev $IFB parent 1:1 classid 1:11 htb rate $HP_RATE ceil $DL_RATE # Redirect all ingress traffic arriving at $ETH to $IFB $TC qdisc del dev $ETH ingress 2>/dev/null $TC qdisc add dev $ETH ingress $TC filter add dev $ETH parent ffff: protocol ip prio 1 u32 \ match u32 0 0 flowid 1:1 \ action mirred egress redirect dev $IFB The last instruction fails with: Action 4 device ifb1 ifindex 9 RTNETLINK answers: No such file or directory We have an error talking to the kernel Does anyone know what am I doing wrong ? Best Regards Daniel
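    One hedged guess, not a confirmed diagnosis: that particular RTNETLINK error from the mirred redirect action often just means the relevant scheduler modules are not loaded. A quick check on OpenWrt might look like this:

        # Hypothetical check: the mirred action and u32 classifier need their kernel modules
        insmod act_mirred
        insmod cls_u32
        lsmod | grep -E 'ifb|act_mirred|cls_u32'
        # On OpenWrt these usually ship in the kmod-sched package:
        # opkg install kmod-sched

    If act_mirred was the missing piece, re-running the filter command after loading it should succeed without the "No such file or directory" reply.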

    Read the article

  • sendmail rules for filtering spam

    - by user71061
    Hi! Can anyone help me with constructing sendmail rules for limiting spam? Assuming that the name of my domain is my.domain.com, I want to use the following rules: If BOTH the sender and recipient addresses are from my.domain.com, the message should be rejected (the sendmail server only relays messages between my internal Exchange server and the outside world, so sending messages between users from my.domain.com always occurs on the Exchange server and never on the sendmail server). If the recipient list contains AT LEAST ONE invalid address, the whole message should be rejected (even for the valid recipient addresses). If the sending server uses a HELO message with a bogus domain name (other than the domain of this server), the message should be rejected. Any server attempting to send mail to a dedicated address (e.g. [email protected]) should be automatically blacklisted. Any other suggested rules ...


    Read the article

  • Index page content identical to page 1 of a gallery-type website

    - by WordPress Developer
    I have a gallery type website, e.g. a site that lists blog posts or pictures in a paginated manner. However, I have 2 pages that have identical content: example.com/index.html example.com/page/1 Page 2, 3 and so on have different content naturally. However, for SEO purposes, what is the best way of telling Google that page 1 is identical to index.html? Should I 302 redirect index.html to /page/1 so index.html is non-existent, so to say or should I put a canonical tag in /page/1 (but not on /page/2) that points to index.html?
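    For reference, the canonical-tag variant would be a single line in the <head> of /page/1 only (a sketch, assuming index.html is the preferred URL, as described in the question):

        <!-- In the <head> of example.com/page/1 only; /page/2 and later keep their own URLs -->
        <link rel="canonical" href="http://example.com/index.html" />

    Many sites point this at the bare http://example.com/ instead of index.html, so that only one form of the front-page URL accumulates link equity.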

    Read the article

  • assigning values to shader parameters in the XNA content pipeline

    - by Nick
    I have tried creating a simple content processor that assigns the custom effect I created to models instead of the default BasicEffect.

        [ContentProcessor(DisplayName = "Shadow Mapping Model")]
        public class ShadowMappingModelProcessor : ModelProcessor
        {
            protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
            {
                EffectMaterialContent shadowMappingMaterial = new EffectMaterialContent();
                shadowMappingMaterial.Effect = new ExternalReference<EffectContent>("Effects/MultipassShadowMapping.fx");
                return context.Convert<MaterialContent, MaterialContent>(shadowMappingMaterial, typeof(MaterialProcessor).Name);
            }
        }

    This works, but when I go to draw a model in a game, the effect has no material properties assigned. How would I go about assigning, say, my DiffuseColor or SpecularColor shader parameter to white or (better) can I assign it to some value specified by the artist in the model? (I think this may have something to do with the OpaqueDataDictionary but I am confused on how to use it--the content pipeline has always been a black box to me.)
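    A hedged sketch of one way forward, not a definitive answer: keep the artist-assigned textures from the incoming material and stash parameter values in OpaqueData so they travel with the compiled material. The assumption here is that OpaqueData keys matching effect parameter names get applied when the material is loaded, and DiffuseColor/SpecularColor are guesses at what the .fx file declares:

        // Hypothetical version of the ConvertMaterial override shown above.
        protected override MaterialContent ConvertMaterial(MaterialContent material,
                                                           ContentProcessorContext context)
        {
            EffectMaterialContent shadowMappingMaterial = new EffectMaterialContent();
            shadowMappingMaterial.Effect =
                new ExternalReference<EffectContent>("Effects/MultipassShadowMapping.fx");

            // Carry over whatever textures the artist assigned in the modeling tool.
            foreach (var texture in material.Textures)
                shadowMappingMaterial.Textures.Add(texture.Key, texture.Value);

            // Hard-coded default, overridden by any value the source material already carries.
            shadowMappingMaterial.OpaqueData.Add("DiffuseColor", new Vector3(1f, 1f, 1f));
            if (material.OpaqueData.ContainsKey("SpecularColor"))
                shadowMappingMaterial.OpaqueData.Add("SpecularColor", material.OpaqueData["SpecularColor"]);

            return context.Convert<MaterialContent, MaterialContent>(
                shadowMappingMaterial, typeof(MaterialProcessor).Name);
        }

    Values the artist sets in the modeling tool typically arrive on the incoming material (as textures or OpaqueData entries), so copying them across like this is usually the hook for "value specified by the artist in the model".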

    Read the article

  • Will Google penalize subdomains if content is nearly identical

    - by John Pham
    I have created a subdomain for a town in San Diego that's ranking very well for its keywords: http://carmelvalleymortgage.loanrebateinc.com/ I want to replicate this subdomain's content for another town in San Diego: http://sandiego.mortgage.loanrebateinc.com/ I will edit the text, tags, and image files specific to each town; otherwise the verbiage will be identical. Question: Will Google penalize the main site? Will Google penalize the subdomains and list the content as spam? If yes to either 1 or 2, what strategies can I implement to prevent this? I'm using WordPress.

    Read the article

  • JavaOne 2011: Content review process and Tips for submissions

    - by arungupta
    The Technical Sessions, Birds of a Feather sessions, Panels, and Hands-on Labs (basically all the content delivered at JavaOne) form the backbone of the conference. At this year's JavaOne conference you'll have access to the rock star speakers, the ability to engage with luminaries in the hallways, and a beer (or 2) with community peers in designated areas. Even though the conference is Oct 2-6, 2011, and will be bigger and better than last year's conference, the Call for Papers submission and review/selection evaluation started much earlier. In previous years, I've participated in the review process, and this year I was honored to serve as co-lead for the "Enterprise Service Architecture and Cloud" track with Ludovic Champenois. We had a stellar review team with an equal mix of Oracle and external community reviewers. The review process is very overwhelming, with the reviewers going through multiple voting iterations on each submission in order to ensure that the selected content is the BEST of the submitted lot. Our ultimate goal was to ensure that the content best represented the track, and most importantly would draw interest and excitement from attendees. As always, the number and quality of submissions were just superb, making for a truly challenging (and rewarding) experience for the reviewers. As co-lead I tried to ensure that I applied a fair and balanced process in the evaluation of content in my track. Here are some key steps followed by all track leads: Vote on sessions - Each reviewer is required to vote on the sessions on a scale of 1-5 - and also provide a justifying comment. Create buckets - Divide the submissions into different buckets to ensure a fair representation of different topics within a track. This ensures that if a particular bucket got higher votes, the track is not exclusively skewed towards it. Top 7 - The review committee provides a list of the top 7 talks that can be used in the promotional material by the JavaOne team. Generally these talks are easy to identify and a consensus is reached upon them fairly quickly. First cut - Each track is allocated a total number of sessions (including panels), BoFs, and Hands-on labs that can be approved. The track leads then start creating the first cut of the approvals using the cast votes coupled with their prior experience in the subject matter. In our case, Ludo and I have been attending/speaking at JavaOne (and other popular Java-focused conferences) for double-digit years. The Grind - The first cut is then refined and refined and refined using multiple selection criteria such as sorting on the bucket, speaker quality, topic popularity, cumulative vote total, and individual vote scale. The sessions that don't make the cut are reviewed again as well, to see whether they should replace one of the selected ones as a potential alternate. I would like to thank the entire Java community for all the submissions, and many thanks to the reviewers who spent countless hours reading each abstract, voting on them, and helping us refine the list. I think approximately 3-4 hours cumulative were spent on each submission to reach an evaluation, especially for the borderline cases. We gave our recommendations to the JavaOne Program Committee Chairperson (Sharat Chander), and accept/decline notifications should show up in submitter inboxes in the next few weeks.
    Here are some points to keep in mind when submitting a session to JavaOne next time: JavaOne is a technology-focused conference, so any product, marketing, or seemingly marketish talks are put at the bottom of the list. Oracle OpenWorld and Oracle Develop are better options for submitting product-specific talks. Make your title catchy. Remember the attendees are more likely to read the abstract if they like the title. We try our best to recategorize a talk to a different track if needed, but please ensure that you are filing under the right track to have all the right eyeballs looking at it. Also, it does not hurt to mark an alternate track if your talk meets the criteria. Make sure to coordinate within your team before the submission - multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely upon your "Google presence" and/or the review committee's prior knowledge of the speaker. The reviewers may not know you or your product at all, and you get 750 characters to pitch your idea. Make sure to use all of them, to the last 750th character. Make sure to read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also check whether the abstract has any redundant information that is not required by the reviewers. There are additional sections that allow you to share information about the speaker and the presentation summary. Use them to blow the horn about yourself and any other relevant details. Please don't say "call me at xxx-xxx-xxxx to find out the details" :-) The review committee enjoyed reviewing the submissions, and we certainly hope you'll have a great time attending them. Happy JavaOne!

    Read the article
