Search Results

Search found 20935 results on 838 pages for 'content'.


  • Taking a Flying Leap

    - by Lance Shaw
    Yesterday, I went skydiving with three of my children. It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving has to do with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight, and it does not happen with the purchase of the needed technology. Not one of us could go out, buy a parachute, the harnesses, a helmet and all the gear and convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process, and so it is with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. Planning, research and effort go into deploying software of any kind, especially when it is as mission-critical to the success of your business as Enterprise Content Management.
    To become a certified skydiving instructor takes at least three years of commitment and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months and then must complete additional rigorous training under observation. When you consider the amount of time and effort involved, it's not unlike getting a college degree, and anyone who has trusted their life to one of these instructors will no doubt appreciate their dedication to the curriculum. Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration.
    But guess what? Humans are involved, and that means that mistakes can happen and that rules change. This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible". His over-arching point was that with information management and security, environments change and people are involved, meaning the work is never done. He stated that you can never claim your compliance efforts are complete for the following reasons:
    - People are involved. And let's face it, some are more trustworthy than others.
    - Change is constant. There is always some new technology coming along that is disruptive; consumer-grade cloud file sharing and sync tools come to mind here.
    - Compliance is interpreted, not defined. Laws and the judges that read them are always on the move.
    - Technology is a tool, not a complete solution. There is no magic pill.
    The skydiving analogy holds true here as well. Ultimately, a single person packs your parachute. For obvious reasons, you prefer that this person be trustworthy, but there are no absolute guarantees of a 100% error-free scenario. Weather and wind conditions are never constant, and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control. Rules and regulations vary by location and may be updated at any time, and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.
    Failure to plan for any of the four factors that Glenn outlined in his article will certainly put your deployment, and maybe even your company, at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic. In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps. That's 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012. Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately. Consider all the factors involved in your organization as you make your content management plans. For additional areas of consideration, be sure to download our free whitepaper on the topic, "The Top 10 Criteria for Choosing an ECM System", which is available for download here.

    Read the article

  • Groovy/Grails course content

    - by Don
    Hi, some Java developers have asked if I could give them a 2-day primer on Grails development. I'm assuming they're familiar with:
    - the Java language and libraries
    - Java web development, e.g. Servlets and JSPs
    - Spring
    - Hibernate
    - client-side development: CSS, HTML, JavaScript
    I'm further assuming they have no experience with Groovy or Grails. AFAIK, the app that they'll be building is a new project, so there's no need to cover topics like using GORM with a legacy database. I'm trying to decide how I should structure the course, e.g. what topics to cover and how much time to spend on each. I reckon about half to three-quarters of a day on Groovy and the rest of the time on Grails would be adequate. I'll probably use the Groovy console to demonstrate the Groovy language concepts and a simple Grails app for explaining the conventions and structure of a Grails project. If anyone has a list of Groovy/Grails topics that I should cover, or even an outline of a similar course that they've given or taken, I'd be very grateful. Naturally, I will give credit for any resources that I use during the course.

    Read the article

  • REST - Tradeoffs between content negotiation via Accept header versus extensions

    - by Brandon Linton
    I'm working through designing a RESTful API. We know we want to return JSON and XML for any given resource. I had been thinking we would do something like this: GET /api/something?param1=value1 with Accept: application/xml (or application/json). However, someone tossed out using extensions for this, like so: GET /api/something.xml?param1=value1 (or /api/something.json?param1=value1). What are the tradeoffs between these approaches? Is it best to rely on the Accept header when an extension isn't specified, but honor extensions when specified? Is there a drawback to that approach?
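    A minimal sketch of the hybrid approach the question describes (honor an explicit extension, otherwise fall back to the Accept header). It is written in Python/Flask purely for illustration; the route, resource and field names are made up:

        # Illustrative only: an explicit .json/.xml extension wins; otherwise the Accept header decides.
        from flask import Flask, request, Response, jsonify

        app = Flask(__name__)
        THING = {"id": 42, "name": "something"}   # hypothetical resource

        def as_xml(data):
            fields = "".join(f"<{k}>{v}</{k}>" for k, v in data.items())
            return f"<something>{fields}</something>"

        @app.route("/api/something")
        @app.route("/api/something.<ext>")
        def something(ext=None):
            if ext is None:
                # No extension given: negotiate via Accept, defaulting to JSON.
                best = request.accept_mimetypes.best_match(
                    ["application/json", "application/xml"],
                    default="application/json")
                ext = "xml" if best == "application/xml" else "json"
            if ext == "xml":
                return Response(as_xml(THING), mimetype="application/xml")
            return jsonify(THING)

    One drawback of the extension route is that the same resource then lives at several URLs, so links and caches can fragment; the upside is that a plain browser or curl user can pick a format without setting headers.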

    Read the article

  • Showing content from pages at different URLs (masking), possibly with .htaccess

    - by zigojacko
    If I have URLs like:
    domain.com/category/widgets/filter/blue
    domain.com/category/widgets/filter/red
    and it is pretty difficult to reconstruct them into something like:
    domain.com/category/blue-widgets
    domain.com/category/red-widgets
    is there any way at all that I can use URL rewrites, or anything else with .htaccess or on the server, to display the URL as domain.com/category/blue-widgets on the domain.com/category/widgets/filter/blue page? I've looked into masking URLs but got nowhere, and this has been bugging me for almost 6 months now. Is there any way to achieve what I want to do? FYI: this is a Magento website, and I want to implement the above for potentially hundreds of URLs.
    Edit (in response to @kkugelmann's answer): I couldn't get your proposed RewriteRule to make any difference in the .htaccess file, so I started testing a few things in an online .htaccess tester. The proposed RewriteRule didn't work in the tester, but a variation of it did; however, adding any of these RewriteRules to the website's .htaccess file did not rewrite the URL at all.
    Edit 2: by the way, if I add [R=301,L] to the end of the rewrite rule, it does then rewrite the URL, but of course it also 301-redirects, which is unwanted behaviour.
    Edit 3: I found another question with the same issue, and an accepted answer that solved the problem, which seemed to be something to do with using mod_proxy and the [P] flag on the rule (if I try this, the page 404s).
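    For reference, a hedged .htaccess sketch of the internal-rewrite form being discussed (assumes mod_rewrite is enabled and that the friendly URL should stay in the address bar; the exact pattern, and how it interacts with Magento's front controller, would need verifying on the real site):

        # Sketch: internally serve /category/widgets/filter/blue when /category/blue-widgets is requested.
        # No R flag, so there is no external redirect and the friendly URL stays visible.
        RewriteEngine On
        RewriteRule ^category/([a-z0-9-]+)-widgets$ /category/widgets/filter/$1 [L]

        # Proxy variant referred to in Edit 3 (needs mod_proxy; this is the form that 404'd for the asker):
        # RewriteRule ^category/([a-z0-9-]+)-widgets$ http://www.example.com/category/widgets/filter/$1 [P,L]

    Rule ordering matters here: in a stock Magento .htaccess, rules like this usually have to come before the catch-all rewrite to index.php, or they never get a chance to fire.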

    Read the article

  • CloudFront for dynamic content CDN

    - by Elad Lachmi
    I would like to use CF as a CDN for my entire site, including static and dynamic content. I have been using CF for static content for a while and I am very happy with the results. I am now doing POC of putting the web server completely behind CF. For the dynamic content I created a new distribution and set the origin to be my web server. Right now I'm looking to test the solution, so I have the web server on the original domain and the CF distribution on the amazon domain. This works with the exception of HTTPS urls and POST requests. For HTTPS requests, I see the requests are forwarded to the original site domain for now, but how will CF handle them when I move the distribution to the www cname? What configuration changes should I make so that CF forwards HTTPS requests to the origin? For POST requests, I want the post to be made to the origin server. Can I set this up in CF? Finally, the site has membership. Can I configure CF to pull all content from the origin if the user is logged in? Sorry for the long question. I'm a little lost and documentation for dynamic CF is still kind of scarce. Thank you!

    Read the article

  • How do I deal with content scrapers? [closed]

    - by aem
    Possible Duplicate: How to protect SHTML pages from crawlers/spiders/scrapers? My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various reports from people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user-agents, serve all requests from those user-agents a 400 or similar, and add GSLFBot to that list. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird to have an abusive scraper with a distinctive user-agent.)
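    The filter itself is small wherever it lives; since the question's app runs on Heroku, the real version would sit in the Rack/Rails layer, but here is the shape of it as a Python/Flask sketch with illustrative names:

        # Illustrative user-agent blocklist: refuse known-abusive scrapers before doing any real work.
        from flask import Flask, abort, request

        app = Flask(__name__)
        BANNED_AGENTS = ("GSLFBot",)   # extend as new offenders turn up

        @app.before_request
        def block_banned_agents():
            ua = request.headers.get("User-Agent", "")
            if any(bad.lower() in ua.lower() for bad in BANNED_AGENTS):
                abort(403)   # a short 4xx with no body keeps the response cheap

        @app.route("/")
        def index():
            return "hello"

    Blocking by user-agent only works while the scraper keeps announcing itself, so it is usually paired with rate limiting or IP-level rules.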

    Read the article

  • Webmaster Tools word count

    - by Henrik Erlandsson
    Is there a way to somehow verify that the Googlebot finds the headings and the content, for example by word count? I'm asking because I tried a program called Screaming Frog, which fails to even fetch the first h1 on a validated page for about a third of all the pages(!), which left me unsure of what crawlers really see. Even though the site looks hunky-dory in Webmaster Tools, I'd like to know what a Googlebot-like content crawler finds on my page and in what order. Any tips on such tools are appreciated. This is not about keyword count.
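    One quick sanity check, independent of any particular tool, is to fetch the raw HTML the way a non-rendering crawler would and pull out the headings and a rough word count yourself. A stdlib-only Python sketch (the URL and heading levels are illustrative):

        # Crude look at what a non-JavaScript crawler receives: headings in document order, plus a word count.
        import urllib.request
        from html.parser import HTMLParser

        class HeadingAndTextParser(HTMLParser):
            def __init__(self):
                super().__init__()
                self.in_heading = None
                self.headings = []
                self.words = 0   # counts words in all text nodes, scripts included, so treat it as a rough figure

            def handle_starttag(self, tag, attrs):
                if tag in ("h1", "h2", "h3"):
                    self.in_heading = tag

            def handle_endtag(self, tag):
                if tag == self.in_heading:
                    self.in_heading = None

            def handle_data(self, data):
                self.words += len(data.split())
                if self.in_heading and data.strip():
                    self.headings.append((self.in_heading, data.strip()))

        html = urllib.request.urlopen("http://example.com/").read().decode("utf-8", "replace")
        parser = HeadingAndTextParser()
        parser.feed(html)
        print("headings in order:", parser.headings)
        print("approximate word count:", parser.words)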

    Read the article

  • MIX10 Windows Phone 7 Content Overview

    The tools shown at MIX 10 and available for public download are a Community Technology Preview; while not feature complete, the tools released at MIX 10 ship with a robust set of functionality. The white papers provide guidance on architecture, development and design. In addition there are code samples and hands-on labs that cover key topics such as Silverlight, the application bar, splash screen, navigation model, and XNA Game Studio. Training available now: Charles Petzold's preview of Programming...

    Read the article

  • How to crawl a web page with dynamic content added by JavaScript

    - by blunderboy
    I hear that Google's bots now have the capability to understand our JavaScript code, which means it should be possible to fully crawl a webpage that has lazy loading enabled. I am using Apache Nutch to crawl websites, but I don't think it has the capability to fetch the URLs that are injected into the HTML page by JavaScript when the page is scrolled down. I see a lot of websites doing lazy loading for performance reasons. Can somebody please explain how I can crawl the data that arrives in the HTML page on lazy load (when the page is scrolled down)?
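    Nutch's plain HTTP fetcher does not execute the page's JavaScript, so one common workaround is to render the page in a real browser, scroll until nothing new loads, and then hand the resulting DOM (or the harvested URLs) to the crawler. A Python/Selenium sketch of that idea, not a Nutch configuration, with an illustrative URL:

        # Sketch: render a lazy-loading page, scroll to trigger the JavaScript, then harvest the injected links.
        import time
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()            # any WebDriver-backed browser will do
        driver.get("http://example.com/lazy-page")

        previous_height = 0
        while True:
            driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(2)                       # give the lazy loader time to append content
            height = driver.execute_script("return document.body.scrollHeight;")
            if height == previous_height:       # nothing new arrived, stop scrolling
                break
            previous_height = height

        links = [a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a")]
        print(len(links), "links found after lazy load")
        driver.quit()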

    Read the article

  • HCM: North America: Year End Knowledge Content References

    - by CaroleB
    As we all know, the next couple of months will be busy ones for the Payroll and IT departments in relation to preparing for Year End. As a means of assisting you to find documented knowledge in reference to North American (NA) Year End, the following reference guide has been put together:
    General Knowledge:
    - Doc ID 404478.1 - Americas (US, CA, MX) HCM High Priority Alert
    - Doc ID 1577601.1 - North American Year End 2013 / 2014 Year Begin Patch Information and Useful Links. Monitor this note, as it will be updated as new information becomes available.
    NA Year End Processing:
    - Document 255466.1 - End of Year Processing Using Oracle HRMS (US)
    - Document 260344.1 - End Of Year Processing Using Oracle HRMS (Canada)
    - Document 395622.1 - End Of Year Processing Using Oracle HRMS (Mexican)
    Patching:
    - Document 216109.1 - Oracle Human Resources (HRMS) Payroll North America Annual Patching Schedule
    - Document 1160507.1 - Oracle E-Business Suite - Consolidated HRMS Mandatory Patch List
    - Document 1144633.1 - US Year End Patch Flow Advisor: E-Business Suite (EBS) Human Capital Management (HCM) for US Legislation patching
    2013 YE Phase I Readmes (US):
    - Document 1584795.1 - Release 11i - 2013 US Payroll Year End Phase 1 Readme
    - Document 1584796.1 - Release 12.0 - 2013 US Payroll Year End Phase 1 Readme
    - Document 1584797.1 - Release 12.1 - 2013 US Payroll Year End Phase 1 Readme
    2013 YE Phase I Readmes (CA):
    - Document 1585365.1 - 2013 Canadian Payroll Year End Phase 1 Readme, Release 11i
    - Document 1585366.1 - 2013 Canadian Payroll Year End Phase 1 Readme, Release 12.0
    - Document 1585367.1 - 2013 Canadian Payroll Year End Phase 1 Readme, Release 12.1
    Known Issues / How To:
    - Document 1527958.2 - Information Center: Oracle HRMS (US) (All Application Versions). Look specifically at the US Year End tab for information on: Year End Pre-Processor, 1099R, Federal, State, and Local Magnetic Media, W-2 Paper Reports, W-2 PDF, W-2 Register.
    Additional Resources - Webcasts:
    - Document 1455851.1 - Advisor Webcasts for Oracle E-Business Suite - Human Capital Management (HCM)
    - Document 1592483.1 - Webcast: EBS North American Payroll Year End Process Flow, November 20, 2013 at 3:30 pm ET, 2:30 pm CT, 1:30 pm MT, 12:30 pm PT
    Additional Resources - Communities:
    - Payroll - EBS HCM - EBS Community
    - E-Business Patching Community

    Read the article

  • How can I transfer article content from old Joomla 1.5 site to new 2.5 site

    - by PaulHurleyuk
    I have an existing Joomla 1.5 site and am intending to wipe it and install a brand new 2.5 site. I will pick new plugins, template etc. but would like to transfer the basic text / images of the articles on the 1.5 site to the new site. I am less concerned with categories and tags of those old articles, as they'll probably go in an 'old' category. I have several file and database backups of the 1.5 site. Has anyone done anything similar? Are the two article db schemas similar enough to just transfer the data?

    Read the article

  • Fix a jQuery/HTML5 dynamic content issue by upgrading jQuery

    - by Steve Albers
    The default NuGet template for MVC3 pushes down jQuery 1.5.1. You can upgrade to a newer version (1.7.1 is current as this is written) to avoid a problem with the creation of “unknown” HTML5 tags in IE6-8. Take a sample HTML page that uses HTML5Shiv to provide support for new HTML5 tags in IE6 - IE8. The page has a number of <article> tags that are backwards compatible in Internet Explorer 6-8 thanks to the HTML5Shiv. After the article elements there is a jQuery 1.5.1 script tag, and a ready() event handler that appends a footer element with a copyright to each of the article tags. This appears correctly in IE9, but in older IE browsers the unknown-tag problem reappears for the dynamic <footer> elements, even though we have the HTML5Shiv at the top of the page: the copyright text sits outside of the two separate footer tags. To solve the issue, upgrade your jQuery files to an up-to-date version. For instance, in Visual Studio 2010: in the Solution Explorer, right-click on References and choose Manage NuGet Packages; in the Manage NuGet Packages window, select the jQuery item in the middle of the page and click the “Upgrade” button. You may need to update your script src references to point at the new version. With the updated jQuery library the incorrect tags should disappear and styles should work properly. You can find more information about the issue on the jQuery Bug Tracker site.

    Read the article

  • PHP accessible shared content between two websites on the same VPS on different domains/IPs

    - by Lee Fentress
    I have two ecommerce websites, selling music digital downloads, on the same VPS, currently using cPanel/WHM (but thinking of switching to Virtualmin). They have separate domains and IPs of course. They both share from the same set of music files, so I have duplicate copies in each website directory, which takes up a lot of disk space. How might I go about sharing the same set of music files across both sites, allowing PHP access, so that it does not break my shopping cart's functionality of serving customers the downloads after they have paid for them? I thought of maybe using symlinks or something, but I don't know if it's possible, or if it would have to somehow circumvent built-in security features of the server. I'm new to VPS management.

    Read the article

  • Google crawling the site but refusing to index dynamic content

    - by Omeoe
    I am trying to get Google to index an AJAX site (davidelifestyle.com). It's crawlable with JavaScript turned off, and I have also recently implemented the _escaped_fragment_ snapshot mechanism, but all that gets indexed is the home page and PDF files that are not directly available from the home page. Also, when I use Fetch as Google in Webmaster Tools, it downloads the dynamic page but does not index it ("Submit to Index" just reloads the page). Any ideas what might be wrong?
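    For reference, the snapshot scheme (since deprecated by Google) works by the crawler turning #!state into ?_escaped_fragment_=state and expecting plain, pre-rendered HTML back; if that query parameter is not answered with a full snapshot, only the static entry page tends to get indexed. A toy sketch of the server side, in Python/Flask purely for illustration (the real site is not Flask):

        # Toy sketch of the AJAX crawling scheme: ?_escaped_fragment_=... should return a static HTML snapshot.
        from flask import Flask, request

        app = Flask(__name__)

        def render_snapshot(state):
            # In a real app this would return pre-rendered HTML for the given AJAX state.
            return f"<html><body><h1>Snapshot for state: {state}</h1></body></html>"

        @app.route("/")
        def home():
            fragment = request.args.get("_escaped_fragment_")
            if fragment is not None:
                return render_snapshot(fragment)   # what the crawler indexes
            return "<html><body><div id='app'>JavaScript app boots here</div></body></html>"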

    Read the article

  • MIX10: Yet another way to view video content sessions using their OData feed

    Well, MIX10 is over. It was a great time to meet a lot of people and see friends from afar. As anyone knows, the networking is a HUGE part of being in person at any conference; that vibe, value and friendship cannot be matched online. But the sessions: there were a TON of them, and it is quite impossible to be in three places at one time. Thankfully the MIX team recorded all regular sessions and made them available for viewing online or offline. For you Silverlight developers, here are my picks to ensure you...

    Read the article

  • 5 Tips For Creating Great Website Content

    If you're like most business owners marketing online, you rely pretty heavily on your website to draw in traffic and drive sales. Your site is, in fact, the place where it all happens. Ultimately, it's where you want your target market to end up.

    Read the article

  • SEO and links content

    - by AntonAL
    For usability purposes, the entire article thumbnail is wrapped in a link:
        <a href="/some_article">
          <h2>Article title</h2>
          <div class="summary">Lorem ipsum dolor sit amet</div>
        </a>
    The user can click anywhere on the thumbnail and is taken to the article. Does this approach have any negative effect on SEO? Another question: what is more valuable to a search engine? Just a link to the article in an articles list:
        <a href="/article1">Article 1</a>
        <a href="/article2">Article 2</a>
        <a href="/article3">Article 3</a>
    Or an h2 wrapped in a link:
        <a href="/article1"><h2>Article 1</h2></a>
        <a href="/article2"><h2>Article 2</h2></a>
        <a href="/article3"><h2>Article 3</h2></a>

    Read the article

  • Ubuntu One Joomla 1.6 Component, for sharing content

    - by Chris Machens
    Hello, I'd like to support Ubuntu One and enhance my project at http://biochar.me The website uses Joomla (joomla.org), one of the most widely used CMSes on the net, and version 1.6 was released a few weeks ago. Now my question: are there plans to deliver a component to enhance the user experience, for example with file sharing? http://joomlapolis.com releases their CB (Community Builder) component for Joomla on the 14th, and a CB plugin for Ubuntu One integration, for example, would be a great addition. Looking forward to your feedback.

    Read the article

  • tomcat serving static content with directory listings

    - by milan
    I have Tomcat 7 configured to serve static content from a directory:
        <Host appBase="webapps" name="localhost">
          ...
          <Context docBase="/var/projectA/static" path="/projectA/" />
        </Host>
    This is available at localhost:8080/projectA/. Is it possible to somehow enable directory listings for this context? I know it's possible to do this with Apache in front of Tomcat, but that's not what I'm looking for. p.s. there's no 'tomcat' tag and my rep doesn't allow me to create tags...
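    If it helps: directory listings for content served like this come from Tomcat's DefaultServlet, and the usual switch is its listings init-param in $CATALINA_BASE/conf/web.xml. A hedged sketch (note this applies to every web application on the instance unless overridden per app):

        <!-- Sketch: in conf/web.xml, flip the DefaultServlet's "listings" init-param to true. -->
        <servlet>
            <servlet-name>default</servlet-name>
            <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
            <init-param>
                <param-name>debug</param-name>
                <param-value>0</param-value>
            </init-param>
            <init-param>
                <param-name>listings</param-name>
                <param-value>true</param-value>   <!-- default is false -->
            </init-param>
            <load-on-startup>1</load-on-startup>
        </servlet>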

    Read the article

  • Verify uniqueness of new content

    - by rogerkk
    I'm working on a review site, where there is a minor issue with almost-duplicate reviews across items; just a few words are changed. It would be very nice to be able to uncover these duplicates before they are approved by a moderator, and I'm hoping someone could chime in on the best strategy to get there. The site is running Ruby on Rails on a Postgres database and using Thinking Sphinx for search (all on Heroku), and so far the best option I see is to pull all the reviews out of the db and use a module like amatch to compare the strings. That's not very efficient, so in this case I guess I'll have to limit the number/age of reviews to scan for dupes. Anyone got a better idea?
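    The comparison itself is cheap to prototype. The post mentions Ruby's amatch; the same ratio-style similarity check is sketched below in Python with the stdlib's difflib, with a made-up threshold and sample data, just to show the shape of the check:

        # Sketch: flag a new review as a near-duplicate if it is too similar to any recent approved review.
        from difflib import SequenceMatcher

        def similarity(a: str, b: str) -> float:
            # Ratio in [0, 1]; 1.0 means the strings match exactly after difflib's alignment.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def looks_like_duplicate(new_review: str, recent_reviews, threshold: float = 0.85) -> bool:
            return any(similarity(new_review, old) >= threshold for old in recent_reviews)

        recent = [
            "Great product, fast shipping, would buy again.",
            "Awful product, slow shipping, would not buy again.",
        ]
        print(looks_like_duplicate("Great product, fast shipping, would buy again!", recent))   # True

    Limiting the candidate set (recent reviews, same item, similar length) keeps the pairwise comparison from blowing up, which matches the post's instinct about capping the number/age of reviews to scan.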

    Read the article

  • GPL/LGPL/MPL non-code content license

    - by user1103142
    I want to use a dictionary (basically a text file, no code) that is included with an OpenOffice spell-checking plug-in. The plug-in is under the tri-license GPL/LGPL/MPL, which I don't understand. Is that legal? If it is illegal, what if I wrote a script that uses the said OpenOffice plug-in to generate the dictionary (assuming it's technically possible: the script would generate all possible letter permutations, pass them to the plug-in and save the correct ones)? I will be using the dictionary in a closed-source commercial application. The dictionary is in a language that has very few resources online, and short of making my own dictionary, there aren't any alternatives. Clarification: the script idea I mentioned above isn't a weird technique; I would generate a document with all possible words and use OpenOffice with the plug-in installed to show spelling mistakes and remove them. Isn't this the intended use of the plug-in (spell checking)?

    Read the article

  • Application Crash cleared the content of the Folder

    - by Ameya
    Recently, while working with LinuxDC++ over the network, the application crashed while downloading files. Now my Downloads folder, which had at least 60-80GB of data, is completely empty, but the system is not reporting the correct available free space. Is there a way to restore the contents of the folder only, as the solutions available are for whole partitions? I just want to recover the contents of one folder.

    Read the article
