Search Results

Search found 5543 results on 222 pages for 'legacy terms'.


  • Are the legal issues the same for a web developer making a sex dating site?

    - by YumYumYum
    I have created many other normal sites that are not related to any dating or sexual content. Do the same rules and regulations apply to a developer making a sex-related dating site, where people meet, get to know each other, and pursue a sexual relationship (you know what I mean), including a webcam feature, but which is not explicitly a porn site? Do such sites have any special legal terms and conditions for the developers compared with the terms and conditions of non-sexual dating sites?

    Read the article

  • Is Google a reliable document search engine?

    - by Miriam Schwab
    I have a site with PDFs and Word documents that I know have been indexed by Google because they appear in search results with filetype:pdf (or doc), and if I search for some very specific terms with quotation marks, they appear as well. But they don't appear for general search terms that do exist in the documents. Is Google a reliable document search engine? If not, are there other options for managing many documents and making them searchable to users?

    Read the article

  • Recommended book for cocos2d?

    - by Paul Sanwald
    I'm an experienced programmer who recently got into iOS development by working through the Big Nerd Ranch book by Aaron Hillegass and Joe Conway. I loved the way the book was structured in terms of typing in the code and doing the challenges. I'm interested in learning more about iOS gaming and cocos2d, but I'm a complete newbie in terms of game development and design. There are a number of books on Amazon about cocos2d; can anyone recommend one in particular?

    Read the article

  • Site Review: Yahoo.com - Forms Evaluation

    Yahoo uses Ajax to suggest search terms to users as they enter a search phrase into the search text box. Once the user has entered a search term and presses the search button, the browser posts the search form to the search results page. I think that Yahoo is making great use of Ajax in this situation because they are helping users find information as well as suggesting alternative search terms to try based on what has already been typed.

    Read the article

  • A Look at the Difference Between Web Design and Web Development

    Many people are interested in using the Internet to promote their business. Knowing the difference between web design and web development is important because while the terms are often used in place of one another, they are two very different things. Being able to speak with the people who are creating your website and use the proper terms can help reduce misunderstandings and speed up the creation process.

    Read the article

  • Getting More Website Traffic From Google - How to Know Which Keywords Will Make You a Profit

    When it comes to making a profit with Google AdWords, everyone knows that you need to be using the right keywords to make it happen. The problem is that unless you have a proven strategy for finding the right keywords, you are going to end up picking the wrong search terms and losing a lot of money. In this article I want to show you exactly how to find the right search terms so you can maximize your profits.

    Read the article

  • Complete Beginner to Game Programming and Unreal Engine 4, Looking For Advice [on hold]

    - by onemic
    I am currently a 2nd-year programming student (I just finished my first year, so I will be starting my second year in September) and have mainly learned C and C++ in my classes. In terms of what I know of C++, I know about general inheritance, polymorphism, overloading operators, iterators, a little bit about templates (only class and function templates), etc., but not the more advanced topics like linked lists and other sequential containers (containers in general, I guess), enumerations, most of the standard library (other than strings and vectors), and probably a bunch of other things I don't even know about yet. I subscribed to Unreal Engine 4 as I was very intrigued by their Unreal Tournament announcement earlier this month, especially after hearing that UE4 is going completely C++. Of course my end goal in this programming program is to eventually go into game/graphics programming. Since it's my summer off, I thought what better way than to apply some of my skills to a personal project so I actually have a firmer understanding of C++ beyond what my professors tell me. My questions are these:
    What would be the best way to start off making a small personal game in UE4 as a project for the summer? What should I be aiming for, especially as someone who is still learning C++?
    Should I focus on making a simple 2D game rather than a 3D one to get started? Seeing the Flappy Chicken showcase intrigued me, because before that I thought the UE engine was pretty much pigeonholed into being for FPS games.
    What should my expectations be going into UE4 and a game engine for the first time? (UE4 will be my first foray into making a game.)
    What can I expect to gain from making things in UE4, in terms of making games and in terms of further fleshing out my knowledge of C++?
    Would you recommend I start off 100% using C++ for scripting, or using the visual Blueprints?
    Since I'm not a designer, how would I be able to add objects and designs to my game?
    For someone at my level, is retaining the UE4 subscription worth it, or is it better to cancel and resubscribe when I learn enough about UE4 and C++?
    Lastly, is there anything to be gained in terms of knowledge/insight from looking at the source code for UE4? I opened it in VS2013, but noticed that most of the files were C# files and not .cpp files.
    Thanks in advance for taking the time to answer.

    Read the article

  • It is a Good Idea to Buy Backlinks

    Have you heard of backlinks? These are links from other websites that point to your website because of their relevance in terms of content or merchandise. For instance, if you are a modern art artist and specialize in customized art work, then you can have your service posted on relevant web pages, like an arts and crafts website.

    Read the article

  • Build One Way Links With Article Marketing

    Lots of web developers write web content as a strategy to build one way links with article marketing. In terms of link quality, one way links are the most valuable of all. They do not pose any control difficulties, they are built on quality, they do not require reciprocity and they boost up quality traffic. The possibility to build one way links with article marketing keeps lots of web developers focused and highly professional in terms of content quality.

    Read the article

  • SEO - How to Optimise For Long-Tail Queries

    There is a great deal of value in the long-tail of search. The long-tail is basically a query that is over three or four keywords long. Good examples of long-tail queries include "cheap flights to Japan May" or "buy back doors UK." Both of these terms exhibit a great deal of user intent - this means the users behind both terms are very far down the buying cycle and are looking for a website on which they can transact and buy a flight to Japan or purchase a back door.

    Read the article

  • SEO Copywriting - What is it?

    Search engine optimization copywriting, more commonly called SEO copywriting, is the technique of writing the text of a webpage in a way that is enjoyable and easy to read for the common internet surfer. The term also refers to targeting and using specific search terms with the goal of ranking the targeted terms higher on the list of search engine websites. The point of obtaining a higher search engine ranking is to make your site more accessible to potential clients.

    Read the article

  • SEO & Digital Marketing - Social Media Intangibles

    'Social Media' is certainly one of the most often used, yet least understood terms in the marketing space. In terms of purely defining the term, Social Media can be defined as communications among people in the digital space. These communications typically involve the exchange of ideas, experiences, information and insight, along with various media such as images and videos.

    Read the article

  • What Services Can an SEO Company Provide?

    An SEO company can provide a multitude of services. You will hear terms such as article submissions, directory submissions, article campaign services, website creation, link building, on page optimization, etc. For those who are as yet not familiar with such terms, read on.

    Read the article

  • JustMock and Moles – A short overview for TDD alpha geeks

    - by RoyOsherove
    People have been lurking near my house, asking me to write something about Moles and JustMock, so I'll try to be as objective as possible, taking into account the fact that I work at Typemock. If I were NOT working at Typemock I'd write:
    JustMock: JustMock tries to be Typemock on so many levels it's not even funny. Technically they work the same, and the API almost looks like it's a search-and-replace job based on the Isolator API (awesome compliment!), but JustMock still has too many growing pains and bugs to be usable. Also, JustMock is missing a lot of the legacy abilities such as non-public faking, faking all types, and various other things that are really needed in real legacy code. The biggest thing (in terms of isolation integration) is that it does not integrate with other profilers (such as coverage tools, NCover, etc.). When JustMock comes out of beta, I feel that it should cost about half of what Isolator costs, as it currently provides about half the abilities.
    Moles: Moles is an add-on to Pex and was originally only intended to work within the Pex environment. It started as a research project, and now it's a power-tool for VS (so it's a separate install) and its own little stubbing framework. It's not really an isolation framework in the classic sense, because it does not provide any kind of built-in API to verify object interactions. You have to use manual flags all on your own to do that. It generates two types of classes per assembly: manual stubs (just like you'd hand-code them) and Mole classes. Each Mole class is a special API to change and break the behavior of the corresponding type, so MDateTime is how you change behavior for DateTime. In that sense the API is all over the place, and it can become highly unreadable and unmaintainable over time in your tests. Also, the Moles API isn't really designed to deal with real legacy code. It only deals with public types and methods. Anything internal or private is ignored and you can't change its behavior. You also can't control static constructors. That takes about 95% of legacy scenarios out of the picture if that's what you're trying to use it for. Personally, I found it hard to get used to the idea of two parallel APIs for different abilities, and when to choose which, and I know this stuff. I would expect more usability from the API to make it more widely used. I don't think that Moles is planning to go that route. Publishing it as an isolation framework is really an afterthought of a tool that was designed with a specific task in mind, and generic isolation isn't it. Its only hope is DEQ, a simple code example that shows a simple isolation API built on the Moles generic engine. Moles can and should be used for very simple cases of detouring functionality, such as simple static methods or interfaces and virtual functions (like Rhino Mocks and Moq do).
    Oh, wait. Ah, good thing I work at Typemock. I won't write all that. I'll just write: JustMock and Moles are great tools that enlarge the market space for isolation-related technologies, and they prove that the idea of productivity and unit testing can go hand in hand and get people hooked. I look forward to competing with them in this growing market.
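    To make the contrast concrete, here is a minimal, hypothetical C# sketch (not from the original post): the first test detours DateTime.Now through a Moles-style mole, assuming the conventional generated MDateTime.NowGet member and the MSTest [HostType("Moles")] attribute (names may vary by Moles/Pex version, and the generated mscorlib moles assembly must be referenced); the second test shows the hand-rolled flag you fall back on because Moles has no built-in verification API.

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class MolesStyleSketch
    {
        [TestMethod]
        [HostType("Moles")] // lets the Moles profiler detour calls made inside this test
        public void Now_IsDetoured_ToAFixedDate()
        {
            // Replace the static DateTime.Now property for the duration of this test.
            System.Moles.MDateTime.NowGet = () => new DateTime(2010, 1, 1);
            Assert.AreEqual(2010, DateTime.Now.Year);
        }

        [TestMethod]
        public void ManualFlag_VerifiesAnInteraction()
        {
            // Moles has no verification API, so interaction checks rely on manual flags.
            bool wasCalled = false;
            Action callback = () => wasCalled = true;
            callback();
            Assert.IsTrue(wasCalled);
        }
    }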

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition the JDK bug management from Sun's legacy system to what is initially an Oracle-internal JIRA instance, which is afterward made visible and usable externally. I've been busily working on this project for the last few months and the team has made good progress on many aspects of the effort: JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones.
    Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA. In JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999.
    We're working with the bugs.sun.com team to try to maintain continuity of the ability to both read JDK bug information as well as to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change to JIRA from the legacy database. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project.
    JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple-release information to be easily grouped and presented together.
    The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA only has a one-level classification, component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include core-libs, client-libs, deploy, install, security-libs, other-libs, tools, and hotspot. For the libs areas, the primary name of the subcomponent will be the package of the API in question.
    In the core-libs component, there will be subcomponents like java.lang, java.lang.class_loading, java.math, java.util, and java.util:i18n. In the tools component, subcomponents will primarily correspond to command names in $JDK/bin, like jar, javac, and javap. The first several bulk imports of the JDK bugs into JIRA have gone well and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline for when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.

    Read the article

  • Gnome-shell fails to load on 12.10

    - by Githlar
    I'm usually the one answering questions, but in this I'm throughly stumped! My Setup: Ubuntu 12.10 (Dist upgrade form 12.04) ATI M96 [Mobility Radeon HD 4650] Upon the first installation of 12.10 I had all kinds of issues getting the Legacy ATI drivers to install (I guess the source for the drivers isn't kosher with kernel 3.5). So, I added the repository ppa:makson96/fglrx - which has a version of the ATI source patched to work with kernel 3.5. After installation of fglrx-legacy from that PPA, gnome-shell and all my graphics work fine... until today. The Problem I unsuspended my computer today and the screen was black (not off, the black from the gnome lock screen). I'd move my mouse/hit a key and the background would flash and then it'd go back to black. Restarted via VT1 Logged into Gnome (gnome-shell) session, but no gnome-shell! Investigation: First, I went to VT1 and tried export DISPLAY=:0;gnome-shell --replace. It appeared to work fine, switch back to X and nothing. Went back to VT1 and saw this error message: JS ERROR: !!! Exception was: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO JS ERROR: !!! message = '"Object 0x7fc748129c30 is not a subclass of (null), it's a xO"' JS ERROR: !!! fileName = '"/usr/share/gnome-shell/js/ui/tweener.js"' JS ERROR: !!! lineNumber = '218' JS ERROR: !!! stack = '"()@/usr/share/gnome-shell/js/ui/tweener.js:218 wrapper()@/usr/share/gjs-1.0/lang.js:204 ()@/usr/share/gjs-1.0/lang.js:145 ()@/usr/share/gjs-1.0/lang.js:239 init()@/usr/share/gnome-shell/js/ui/tweener.js:49 init()@/usr/share/gnome-shell/js/ui/environment.js:96 @<main>:1 "' Window manager warning: Log level 32: Execution of main.js threw exception: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO Note: Everywhere it says "it's a xO", xO is actually garbled and changes every time (I'm thinking memory corruption?) This error is thrown by line 96 of /usr/share/gnome-shell/js/ui/environment.js: tweener.Init() Did a purge of fglrx-legacy, reboot, reinstall fglrx-legacy, reboot... same thing. Did a ppa-purge of ppa:gnome3-team/gnome3, and reinstalled gnome-shell and ubuntu-desktop from the standard repositores... same thing. I'm really at a loss here. I love gnome-shell and after using it for nearly a year now gnome classic just seems so archaic. 
Additional Information Apt log from the day I first suspended my machine (these are upgrades from the gnome3-team/gnome3 ppa and ubuntu-wine/ppa ppa): Start-Date: 2012-11-24 17:30:28 Commandline: aptdaemon role='role-commit-packages' sender=':1.618' Install: gkbd-capplet:amd64 (3.6.0-0ubuntu1), gnome-control-center-unity:amd64 (1.0-0ubuntu1~ubuntu12.10.1) Upgrade: nautilus:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), libgnome-control-center1:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), wine1.5-i386:i386 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), wine1.5:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), gnome-settings-daemon:amd64 (3.4.2-0ubuntu14, 3.6.3-0ubuntu1~ubuntu12.10.1), gnome-control-center-data:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), gnome-accessibility-themes:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), gnome-themes-standard:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), wine1.5-amd64:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), nautilus-data:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), gnome-control-center:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), libnautilus-extension1a:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1) End-Date: 2012-11-24 17:31:32 fglrxinfo (driver appears to be working): display: :0 screen: 0 OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: ATI Mobility Radeon HD 4650 OpenGL version string: 3.3.11653 Compatibility Profile Context Does anybody have any further ideas?

    Read the article

  • ASP.NET 2.0 app runs on Win 2003 in IIS 5 isolation mode but not in (default) IIS 6 mode

    - by Tex
    The app uses DllImport to call a legacy unmanaged DLL. Let's call this DLL Unmanaged.dll for the sake of this question. Unmanaged.dll has dependencies on 5 other legacy DLLs. All of the legacy DLLs are placed in the WebApp/bin/ directory of my ASP.NET application. When IIS is running in 5.0 isolation mode, the app works fine - calls to the legacy DLL are processed without error. When IIS is running in the default 6.0 mode, the app is able to initiate Unmanaged.dll (InitMe()), but dies during a later call to it (ProcessString()). I'm pulling my hair out here. I've moved the unmanaged DLLs to various locations, tried all kinds of security settings and searched long and hard for a solution. Help! Sample code:

    [DllImport("Unmanaged.dll", EntryPoint="initME", CharSet=System.Runtime.InteropServices.CharSet.Ansi, CallingConvention=CallingConvention.Cdecl)]
    internal static extern int InitME();
    // Calls to InitME work fine - Unmanaged.dll initiates and writes some entries in a dedicated log file

    [DllImport("Unmanaged.dll", EntryPoint="processString", CharSet=System.Runtime.InteropServices.CharSet.Ansi, CallingConvention=CallingConvention.Cdecl)]
    internal static extern int ProcessString(string inStream, int inLen, StringBuilder outStream, ref int outLen, int maxLen);
    // Calls to ProcessString cause the app to crash, without leaving much of a trace that I can find so far

    Read the article

  • How can you connect to a password protected MS Access Database from a Spring JdbcTemplate?

    - by Tim Visher
    I need to connect to a password protected MS Access 2003 DB using the JDBC-ODBC bridge. I can't find out how to specify the password in the connect string, or even if that is the correct method of connecting. It would probably be relevant to mention that this is a Spring App which is accessing the database through a JdbcTemplate configured as a datasource bean in our application context file. Some relevant snippets: from application-context.xml <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate"> <property name="dataSource" ref="legacyDataSource" /> </bean> <bean id="jobsheetLocation" class="java.lang.String"> <constructor-arg value="${jobsheet.location}"/> </bean> <bean id="legacyDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="${jdbc.legacy.driverClassName}" /> <property name="url" value="${jdbc.legacy.url}"/> <property name="password" value="-------------" /> </bean> from our build properties jdbc.legacy.driverClassName=sun.jdbc.odbc.JdbcOdbcDriver jdbc.legacy.url=jdbc:odbc:Driver\={Microsoft Access Driver (*.mdb)};Dbq\=@LegacyDbPath@;DriverID\=22;READONLY\=true Any thoughts?

    Read the article

  • Word forms with too many ActiveX checkboxes load slowly.

    - by Luke
    Hi there, my company's software product has a feature that allows users to generate forms from Word templates. The program auto fills some fields from the SQL database and the user can fill in other data that they desire. So we have a .dotx template that holds the design of the form, and then the user gets the .docx file to fill out when they call it from our program. The problem we're having is that some of our users have been finding that the forms take an exceptionally long time to open up and then, once open, are so slow to respond (scroll around, etc) that they're unusable. So in my investigations so far, I've found out that the problem systems are one with lower powered CPUs (unfortunately it happens for systems above our system requirements) and the Word forms that cause the problems are ones with large amount of ActiveX style checkboxes on them. I verified that reducing the ActiveX checkboxes fixes the form loading problems. So I have the following questions about solutions (we're using Word 2007): 1) Is there any way to configure Word, or some other settings, so that there won't be such a strain opening a Word form with lots of ActiveX checkboxes? Any way of speeding up Word's opening? 2) Using Legacy style checkboxes instead of the ActiveX ones makes the forms load fine, but it looks like the user has to double-click the checkbox and change Default Value-Checked. Is there a way to configure it so that they can simply click on the checkbox to tick it? "Legacy Forms" checkbox as a name kind of worries me (Legacy…), does that mean a future version of word at some point wouldn't load the checkboxes because they're "legacy"? 3) Yes, it became clear to me after a little bit of research into solutions that Word is not the tool for the job for forms like I'm describing. InfoPath seems to be exactly what we should have been using all along but unfortunately I wasn't involved in the decision making or development of these forms, just tasked with coming up with a solution. I'd appreciate answers to any of these, or if anyone has any other ideas for solutions to this problem. Thanks

    Read the article

  • Parallelism in .NET – Part 1, Decomposition

    - by Reed
    The first step in designing any parallelized system is Decomposition. Decomposition is nothing more than taking a problem space and breaking it into discrete parts. When we want to work in parallel, we need to have at least two separate things that we are trying to run. We do this by taking our problem and decomposing it into parts. There are two common abstractions that are useful when discussing parallel decomposition: Data Decomposition and Task Decomposition. These two abstractions allow us to think about our problem in a way that helps lead us to correct decision making in terms of the algorithms we'll use to parallelize our routine.
    To start, I will make a couple of minor points. I'd like to stress that Decomposition has nothing to do with specific algorithms or techniques. It's about how you approach and think about the problem, not how you solve the problem using a specific tool, technique, or library. Decomposing the problem is about constructing the appropriate mental model: once this is done, you can choose the appropriate design and tools, which is a subject for future posts. Decomposition, being unrelated to tools or specific techniques, is not specific to .NET in any way. This should be the first step to parallelizing a problem, and is valid using any framework, language, or toolset. However, this gives us a starting point - without a proper understanding of decomposition, it is difficult to understand the proper usage of specific classes and tools within the .NET framework.
    Data Decomposition is often the simpler abstraction to use when trying to parallelize a routine. In order to decompose our problem domain by data, we take our entire set of data and break it into smaller, discrete portions, or chunks. We then work on each chunk in the data set in parallel. This is particularly useful if we can process each element of data independently of the rest of the data. In a situation like this, there are some wonderfully simple techniques we can use to take advantage of our data. By decomposing our domain by data, we can very simply parallelize our routines. In general, we, as developers, should always be searching for data that can be decomposed.
    Finding data to decompose is fairly simple, in many instances. Data decomposition is typically used with collections of data. Any time you have a collection of items, and you're going to perform work on or with each of the items, you potentially have a situation where parallelism can be exploited. This is fairly easy to do in practice: look for iteration statements in your code, such as for and foreach. Granted, not every for loop is a candidate to be parallelized. If the collection is being modified as it's iterated, or the processing of elements depends on other elements, the iteration block may need to be processed in serial. However, if this is not the case, data decomposition may be possible.
    Let's look at one example of how we might use data decomposition. Suppose we were working with an image, and we were applying a simple contrast stretching filter. When we go to apply the filter, once we know the minimum and maximum values, we can apply this to each pixel independently of the other pixels. This means that we can easily decompose this problem based off data - we will do the same operation, in parallel, on individual chunks of data (each pixel). Task Decomposition, on the other hand, is focused on the individual tasks that need to be performed instead of focusing on the data.
In order to decompose our problem domain by tasks, we need to think about our algorithm in terms of discrete operations, or tasks, which can then later be parallelized. Task decomposition, in practice, can be a bit more tricky than data decomposition.  Here, we need to look at what our algorithm actually does, and how it performs its actions.  Once we have all of the basic steps taken into account, we can try to analyze them and determine whether there are any constraints in terms of shared data or ordering.  There are no simple things to look for in terms of finding tasks we can decompose for parallelism; every algorithm is unique in terms of its tasks, so every algorithm will have unique opportunities for task decomposition. For example, say we want our software to perform some customized actions on startup, prior to showing our main screen.  Perhaps we want to check for proper licensing, notify the user if the license is not valid, and also check for updates to the program.  Once we verify the license, and that there are no updates, we’ll start normally.  In this case, we can decompose this problem into tasks – we have a few tasks, but there are at least two discrete, independent tasks (check licensing, check for updates) which we can perform in parallel.  Once those are completed, we will continue on with our other tasks. One final note – Data Decomposition and Task Decomposition are not mutually exclusive.  Often, you’ll mix the two approaches while trying to parallelize a single routine.  It’s possible to decompose your problem based off data, then further decompose the processing of each element of data based on tasks.  This just provides a framework for thinking about our algorithms, and for discussing the problem.
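    As a rough illustration (not code from the original post), the sketch below shows both ideas with .NET's Parallel class: a Parallel.For over a pixel buffer for data decomposition, and Parallel.Invoke over two independent startup checks for task decomposition. The byte[] buffer and the checkLicense/checkForUpdates delegates are hypothetical placeholders.

    using System;
    using System.Threading.Tasks;

    static class DecompositionSketch
    {
        // Data decomposition: apply a contrast-stretching filter to each pixel independently.
        // 'pixels' holds grayscale values; min and max were computed in an earlier pass.
        public static void StretchContrast(byte[] pixels, byte min, byte max)
        {
            double range = Math.Max(1, max - min);
            Parallel.For(0, pixels.Length, i =>
            {
                pixels[i] = (byte)((pixels[i] - min) / range * 255);
            });
        }

        // Task decomposition: two independent startup checks run side by side,
        // then the rest of startup continues in order.
        public static void Startup(Action checkLicense, Action checkForUpdates)
        {
            Parallel.Invoke(checkLicense, checkForUpdates);
            // Continue with the remaining, ordered startup work here.
        }
    }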

    Read the article

  • PHP self form validation

    - by Jordan Pagaduan
    <?php function VerifyForm(&$values, &$errors) { if (strlen($values['fname']) == 0) $errors['fname'] = 'Enter First Name'; if (strlen($values['lname']) == 0) $errors['lname'] = 'Enter Last Name'; if (strlen($values['mname']) == 0) $errors['mname'] = 'Enter Middle Name'; if (strlen($values['address']) == 0) $errors['address'] = 'Enter Address'; if (strlen($values['terms']) == 0) $errors['terms'] = 'Please Read Terms and Agreement and Check the box.'; if (!ereg('.*@.*\..{2,4}', $values['email'])) $errors['email'] = 'Email address invalid'; else if (strlen($values['email']) < 0) $errors['email'] = 'Enter Email Address'; return (count($errors) == 0); } function DisplayForm($values, $errors) { ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>GIA Soap » Products » Customer Informations</title> <link href="stylesheet/style.css" rel="stylesheet" type="text/css" /> <script type="text/javascript" src="js_files/jquery.js"></script> <script type="text/javascript" src="js_files/sliding_effect.js"></script> <script type="text/javascript" src="js_files/slideshow.js"></script> </head> <body> <div class="bg_top"> <div class="bg_bottom"> <div class="wrapper"> <div class="header"> <div class="logo"> </div> <div class="logo_text"> <div class="logo_head_text">Gia Soap Making</div> <div class="logo_sub_text">Sub text here</div> </div> </div> <div class="h_nav"> <div class="h_nav_dash"> </div> </div> <div class="container"> <div class="content_term"> <div class="content_terms"> <br /> <h1><p>Customer Information</p></h1><br /> <p>Please the following correctly.</p> <div class="customer_info"> <?php if (count($errors) > 0) echo "<p>There were some errors in your submitted form, please correct them and try again.</p>"; ?> <form method="post" action="<?= $_SERVER['PHP_SELF'] ?>"> <!-- hidden values --> <input type="hidden" value="<?php echo $papaya; ?>" name="papaya" /> <input type="hidden" value="<?php echo $carrot; ?>" name="carrot" /> <input type="hidden" value="<?php echo $guava; ?>" name="guava" /> <label for="customer_fname">First Name (<i>Required</i>)</label> <input type="text" class="textbox" id="customer_fname" name="customer_fname" value="<?= htmlentities($values['fname']) ?>" /> <span class="error_msg"><?= $errors['fname'] ?></span> <label for="customer_lname">Last Name (<i>Required</i>)</label> <input type="text" class="textbox" id="customer_fname" name="customer_fname" value="<?= htmlentities($values['lname']) ?>" /> <span class="error_msg"><?= $errors['lname'] ?></span> <label for="customer_mname">Middle Name (<i>Required</i>)</label> <input type="text" class="textbox" id="customer_fname" name="customer_fname" value="<?= htmlentities($values['mname']) ?>" /> <span class="error_msg"><?= $errors['mname'] ?></span> <label for="customer_add">Address (<i>Required : Complete Address Please</i>)</label> <input type="text" class="textbox" id="customer_add" name="customer_add1" value="<?= htmlentities($values['address']) ?>" /><br /> <input type="text" class="textbox" id="customer_add" name="customer_add2" /><br /> <input type="text" class="textbox" id="customer_add" name="customer_add3" /> <span class="error_msg"><?= $errors['address'] ?></span> <label for="customer_email">Email Address (<i>Required</i>)</label> <input type="text" class="textbox" id="customer_email" name="customer_email" value="<?= 
htmlentities($values['email']) ?>" /> <span class="error_msg"><?= $errors['email'] ?></span> <label for="customer_phone">Phone Number </label> <input type="text" class="textbox" id="customer_phone" name="customer_phone" /> <label for="customer_mobile">Mobile Number </label> <input type="text" class="textbox" id="customer_mobile" name="customer_mobile" /> <br /><br /> <div class="terms"> <center> <h1>Terms and Agreement</h1><br /> <p>Please read the following.</p><br /> </div> <br /> <input type="checkbox" name="terms" value="<?= htmlentities($values['terms']) ?>" /> I Read the Terms and Agreement<br /><br /> <span class="error_msg"><?= $errors['terms'] ?></span> <input type="submit" value="Send Order" class="prod_subbtn" /> </center> </form> </div> </div> </div> <div class="clear"></div> </div> <?php include ('includes/footer.php'); ?> </div> </div> </div> </body> </html> <?php } function ProcessForm($values) { $papaya = $_POST['papaya']; $carrot = $_POST['carrot']; $guava = $_POST['guava']; $fname = $_POST['fname']; $lname = $_POST['lname']; $mname = $_POST['mname']; $address = $_POST['address']; } if ($_SERVER['REQUEST_METHOD'] == 'POST') { $formValues = $_POST; $formErrors = array(); if (!VerifyForm($formValues, $formErrors)) DisplayForm($formValues, $formErrors); else ProcessForm($formValues); } else DisplayForm(null, null); ?> The output is: [link text]1 Problem the value that I put is can be seen by users.

    Read the article

  • How to Hashtag (Without Being #Annoying)

    - by Mike Stiles
    The right tool in the wrong hands can be a dangerous thing. Giving a chimpanzee a chain saw would not be a pretty picture. And putting Twitter hashtags in the hands of social marketers who were never really sure how to use them can be equally unattractive. Boiled down, hashtags are for search and organization of tweets. A notch up from that, they can also be used as part of a marketing strategy. In terms of search, if you’re in the organic apple business, you want anyone who searches “organic” on Twitter to see your posts about your apples. It’s keyword tactics not unlike web site keyword search tactics. So get a clear idea of what keywords are relevant for your tweet. It’s reasonable to include #organic in your tweet. Is it fatal if you don’t hashtag the word? It depends on the person searching. If they search “organic,” your tweet’s going to come up even if you didn’t put the hashtag in front of it. If the searcher enters “#organic,” your tweet needs the hashtag. Err on the side of caution and hashtag it so it comes up no matter how the searcher enters it. You’ll also want to hashtag it for the second big reason people hashtag, organization. You can follow a hashtag. So can the rest of the Twitterverse. If you’re that into organic munchies, you can set up a stream populated only with tweets hashtagged #organic. If you’ve established a hashtag for your brand, like #nobugsprayapples, you (and everyone else) can watch what people are tweeting about your company. So what kind of hashtags should you include? They should be directly related to the core message of your tweet. Ancillary or very loosely-related hashtags = annoying. Hashtagging your brand makes sense. Hashtagging your core area of interest makes sense. Creating a specific event or campaign hashtag you want others to include and spread makes sense (the burden is on you to promote it and get it going). Hashtagging nearly every word in the tweet is highly annoying. Far and away, the majority of hashtagged words in such tweets have no relevance, are not terms that would be searched, and are not terms needed for categorization. It looks desperate and spammy. Two is fine. One is better. And it is possible to tweet with --gasp-- no hashtags! Make your hashtags as short as you can. In fact, if your brand’s name really is #nobugsprayapples, you’re burning up valuable, limited characters and risking the inability of others to retweet with added comments. Also try to narrow your topic hashtag down. You’ll find a lot of relevant users with #organic, but a lot of totally uninterested users with #food. Just as you can join online forums and gain credibility and a reputation by contributing regularly to that forum, you can follow hashtagged topics and gain the same kind of credibility in your area of expertise. Don’t just parachute in for the occasional marketing message. And if you’re constantly retweeting one particular person, stop it. It’s kissing up and it’s obvious. Which brings us to the king of hashtag annoyances, “hashjacking.” This is when you see what terms are hot and include them in your marketing tweet as a hashtag, even though it’s unrelated to your content. Justify it all you want, but #justinbieber has nothing to do with your organic apples. Equally annoying, piggybacking on a popular event’s hashtag to tweet something not connected to the event. You’re only fostering ill will and mistrust toward your account from the people you’ve tricked into seeing your tweet. 
Lastly, don’t @ mention people just to make sure they see your tweet. If the tweet’s not for them or about them, it’s spammy. What I haven’t covered is use of the hashtag for comedy’s sake. You’ll see this a lot, and it’s a matter of personal taste. No one will search these hashtagged terms or need to categorize them; they’re just there for self-expression and laughs. Twitter is, after all, supposed to be fun.  What are some of your biggest Twitter pet peeves? #blogsovernow

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is related to the storage management due the high volumes of the unstoppable growing of information. Even if you have storage appliances and a lot of terabytes, thinks like backup, compression, deduplication, storage relocation, encryption, availability could be a nightmare. One standard option that you have with the Oracle WebCenter Content is to store data to the database. And the Oracle Database allows you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content Database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can I fit this with my business requirements? Well, yes. It’s up to you how you will manage that, you just need a good plan. During you “storage brainstorm plan” take in your mind what you need, such as storage petabytes of documents? You need everything on-line? There’s a way to logically separate the “good content” from the “legacy content”? The first thing that comes to my mind is to use the creation date of the document, but you need to remember that this document could receive a lot of revisions and maybe you can consider the revision creation date. Your plan can have also complex rules like per Document Type or per a custom metadata like department or an hybrid per date, per DocType and an specific virtual folder. Extrapolation the use, you can have your repository distributed in different servers, different disks, different disk types (Such as ssds, sas, sata, tape,…), separated accordingly your business requirements, separating the “hot” content from the legacy and easily matching your compliance requirements. If you think to use by revision, the simple way is to consider the dId, that is the sequential unique id for every content created using the WebCenter Content or the dLastModified that is the date field of the FileStorage table that contains the date of inclusion of the content to the DB Table using SecureFiles. Using the scenario of partitioned repository using an hierarchical separation by date, we will transform the FileStorage table in an partitioned table using  “Partition by Range” of the dLastModified column (You can use the dId or a join with other tables for other metadata such as dDocType, Security, etc…). The test scenario bellow covers: Previous existent data on the JDBC Storage to be migrated to the new partitioned JDBC Storage Partition by Date Automatically generation of new partitions based on a pre-defined interval (Available only with Oracle Database 11g+) Deduplication and Compression for legacy data Oracle WebCenter Content 11g PS5 (Could present some customizations that do not affect the test scenario) For the test case you need some data stored using JDBC Storage to be the “legacy” data. If you do not have done before, just create an Storage rule pointed to the JDBC Storage: Enable the metadata StorageRule in the UI and upload some documents using this rule. For this test case you can run using the schema owner or an dba user. We will use the schema owner TESTS_OCS. I can’t forgot to tell that this is just a test and you should do a proper backup of your environment. When you use the schema owner, you need some privileges, using the dba user grant the privileges needed: REM Grant privileges required for online redefinition. 
GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS; GRANT ALTER ANY TABLE TO TESTS_OCS; GRANT DROP ANY TABLE TO TESTS_OCS; GRANT LOCK ANY TABLE TO TESTS_OCS; GRANT CREATE ANY TABLE TO TESTS_OCS; GRANT SELECT ANY TABLE TO TESTS_OCS; REM Privileges required to perform cloning of dependent objects. GRANT CREATE ANY TRIGGER TO TESTS_OCS; GRANT CREATE ANY INDEX TO TESTS_OCS; In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will partitioned automatically using 3 tablespaces in a round robin mode. In a real scenario the partition rule could be per month, per year or any rule that you choose. Table spaces for the test scenario: CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A 'tests_ocs_part_round_robin_a.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B 'tests_ocs_part_round_robin_b.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C 'tests_ocs_part_round_robin_c.dat' DATAFILE SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED; Before start, gather optimizer statistics on the actual FileStorage table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE); Now check if is possible execute the redefinition process: EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage',DBMS_REDEFINITION.CONS_USE_PK); If no errors messages, you are good to go. Create a Partitioned Interim FileStorage table. 
You need to create a new table with the partition information to act as an interim table: CREATE TABLE FILESTORAGE_Part ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY RANGE (DLASTMODIFIED) INTERVAL (NUMTODSINTERVAL(1,'DAY')) STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C) ( PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_LEGACY LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ), PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY1 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ), PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY2 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ), PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) TABLESPACE TESTS_OCS_PART_DAY3 LOB (BFILEDATA) STORE AS SECUREFILE ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ) ); After the creation you should see your partitions defined. Note that only the fixed range partitions have been created, none of the interval partition have been created. Start the redefinition process: BEGIN DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,col_mapping => NULL ,options_flag => DBMS_REDEFINITION.CONS_USE_PK ); END; This operation can take some time to complete, depending how many contents that you have and on the size of the table. Using the DBA user you can check the progress with this command: SELECT * FROM v$sesstat WHERE sid = 1; Copy dependent objects: DECLARE redefinition_errors PLS_INTEGER := 0; BEGIN DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS( uname => 'TESTS_OCS' ,orig_table => 'FileStorage' ,int_table => 'FileStorage_PART' ,copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS ,copy_triggers => TRUE ,copy_constraints => TRUE ,copy_privileges => TRUE ,ignore_errors => TRUE ,num_errors => redefinition_errors ,copy_statistics => FALSE ,copy_mvlog => FALSE ); IF (redefinition_errors > 0) THEN DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors)); END IF; END; With the DBA user, verify that there's no errors: SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS; *Note that will show 2 lines related to the constrains, this is expected. 
Synchronize the interim table FileStorage_PART: BEGIN DBMS_REDEFINITION.SYNC_INTERIM_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; Gather statistics on the new table: EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE); Complete the redefinition: BEGIN DBMS_REDEFINITION.FINISH_REDEF_TABLE( uname => 'TESTS_OCS', orig_table => 'FileStorage', int_table => 'FileStorage_PART'); END; During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have contents out of the range partition, you should see the new partitions created automatically, without generating an error if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table: DROP TABLE FileStorage_PART PURGE; To check that the FileStorage table is valid and is partitioned, use the command: SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE'; You can list the contents of the FileStorage table in a specific partition, for example: SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY); Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user): SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';   SELECT * FROM DBA_TABLESPACES WHERE tablespace_name like 'TESTS_OCS%'; After the redefinition process completes, you have a new FileStorage table storing all content that has the Storage rule pointed to the JDBC Storage, partitioned using the rule set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area; take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file; if the file has been created and you can open it, everything is working. The redefinition process can be repeated many times, which allows you to test which layout works better, over and over again. Now some interesting maintenance actions related to the partitions: Make a tablespace read-only. There are no issues viewing content, since WebCenter Content does not alter the revisions. When you try to delete a content item that is part of a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, if you have a document in a "Read Only" repository, executes the deletion of the metadata and marks the document to be deleted at the next DB maintenance, like a new redefinition. Take a tablespace offline for archiving purposes or any other reason.
When you try open an document that is included in this tablespace will receive an error that was unable to retrieve the content, but the others online tablespaces are not affected. Same behavior when deleting documents. Again, an custom component is the solution. If you have an document “out of range”, the component can show an message that the repository for that document is offline. This can be extended to a option to the user to request to put online again. Moving some legacy content to an offline repository (table) using the Exchange option to move the content from one partition to a empty nonpartitioned table like FileStorage_LEGACY. Note that this option will remove the registers from the FileStorage and will not be able to open the stored content. You always need to keep in mind the indexes and constrains. An redefinition separating the original content (vault) from the renditions and separate by date ate the same time. This could be an option for DAM environments that want to have an special place for the renditions and put the original files in a storage with less performance. The process will be the same, you just need to change the script of the interim table to use composite partitioning. Will be something like: CREATE TABLE FILESTORAGE_RenditionPart ( DID NUMBER(*,0) NOT NULL ENABLE, DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE, DLASTMODIFIED TIMESTAMP (6), DFILESIZE NUMBER(*,0), DISDELETED VARCHAR2(1 CHAR), BFILEDATA BLOB ) LOB (BFILEDATA) STORE AS SECUREFILE ( ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS ) PARTITION BY LIST (DRENDITIONID) SUBPARTITION BY RANGE (DLASTMODIFIED) ( PARTITION Vault VALUES ('primaryFile') ( SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION WebLayout VALUES ('webViewableFile') ( SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE) ) ,PARTITION Special VALUES ('Special') ( SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN 
(TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE , SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE) ) )ENABLE ROW MOVEMENT; The next post related to partitioned repository will come with an sample component to handle the possible exceptions when you need to take off line an tablespace/partition or move to another place. Also, we can include some integration to the Retention Management and Records Management. Another subject related to partitioning is the ability to create an FileStore Provider pointed to a different database, raising the level of the distributed storage vs. performance. Let us know if this is important to you or you have an use case not listed, leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article
