Search Results

Search found 14789 results on 592 pages for 'pro backup'.

Page 392/592 | < Previous Page | 388 389 390 391 392 393 394 395 396 397 398 399  | Next Page >

  • How to set up Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:

    - 5000 databases (spread out over 5 servers)
    - 1 database per client (so you can infer there are 1000 clients)
    - 2 to 2000 users per client (let's say the average is 100 users per client)
    - Clients (databases) come and go every day (let's assume most remain for at least one year)
    - Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    The question: How would you set up Lucene search so that each client can only search within its own database? How would you set up the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their part of the index? (This may be trivial; not sure yet.)

    Possible solutions:

    - Make an index for each client (database). Pro: search is faster than with the one-index-for-all method, and each index stays proportional to the size of the client's data. Con: I'm not sure what this entails, nor whether it is beyond Lucene's scope.
    - Have a single, gigantic index with a database_name field, and always include database_name as a filter. Pro: not sure; maybe good for tech support or the billing department to search all databases for info. Con: search is slower than with the index-per-client method, and security is flawed if the query filter is removed.

    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets its own database, so his situation is quite similar to mine. He didn't elaborate on the setup (particularly the indices), hence the need for this question. One last thing: I would also accept an answer that uses Solr (the extension of Lucene); perhaps it's better suited to this problem. Not sure.
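
    For illustration only, a minimal sketch of the single-index option's per-query filter, using Python against Solr's HTTP API (the core name "clients" and the field name database_name are assumptions, not from the original post):

        import requests  # Solr is queried over plain HTTP, so any language works

        SOLR_SELECT = "http://localhost:8983/solr/clients/select"

        def search_for_client(database_name, user_query):
            # fq applies the tenant restriction as a filter query, kept separate
            # from relevance scoring and cached independently by Solr
            params = {"q": user_query,
                      "fq": "database_name:" + database_name,
                      "wt": "json"}
            return requests.get(SOLR_SELECT, params=params, timeout=5).json()

    If the filter is appended server-side like this, a client request never carries the tenant field itself, which addresses the "flawed security if query filter removed" concern.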

    Read the article

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first 4 and X: used for a backup destination.

    Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- and this is where "whatever.ndf" should have been. Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong. Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep.

    The questions: (1) Why did the passive node think the whatever.ndf files were supposed to go to drive K:, when this drive didn't exist as a resource on the active node? (2) How can I get the cluster nodes "re-synced" so that failover can be accomplished? I don't know that there wasn't a K: drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

    Read the article

  • NHibernate: how to handle entity-based validation using session-per-request pattern, without control

    - by Seth Petry-Johnson
    What is the best way to do entity-based validation (each entity class has an IsValid() method that validates its internal members) in ASP.NET MVC, with a "session-per-request" model, where the controller has zero (or limited) knowledge of the ISession? Here's the pattern I'm using:

    1. Get an entity by ID, using an IFooRepository that wraps the current NH session. This returns a connected entity instance.
    2. Load the entity with potentially invalid data, coming from the form post.
    3. Validate the entity by calling its IsValid() method.
    4. If valid, call IFooRepository.Save(entity). Otherwise, display an error message.

    The session is currently opened when the request begins and flushed when the request ends. Since my entity is connected to a session, flushing the session attempts to save the changes even if the object is invalid. What's the best way to keep validation logic in the entity class, limit controller knowledge of NH, and avoid saving invalid changes at the end of a request?

    Option 1: Explicitly evict on validation failure, implicitly flush. If the validation fails, I could manually evict the invalid object in the action method. If successful, I do nothing and the session is automatically flushed. Con: error prone and counter-intuitive ("I didn't call .Save(), why are my invalid changes being saved anyway?").

    Option 2: Explicitly flush, do nothing by default. By default I can dispose of the session on request end, only flushing if the controller indicates success. I'd probably create a SaveChanges() method in my base controller that sets a flag indicating success, and then query this flag when closing the session at request end. Pro: more intuitive to troubleshoot if a dev forgets this step (relative to option 1). Con: I have to call both IRepository.Save(entity) and SaveChanges().

    Option 3: Always work with disconnected objects. I could modify my repositories to return disconnected/transient objects, and modify the Repo.Save() method to re-attach them. Pro: most intuitive, given that controllers don't know about NH. Con: does this defeat many of the benefits I'd get from NH?
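
    For illustration only: option 2's lifecycle is easy to see in miniature. Below is a hedged sketch of the same "flush only on explicit success" idea using Python and SQLAlchemy (an analogous ORM, not NHibernate; the RequestScope name is invented for the example):

        from sqlalchemy import create_engine
        from sqlalchemy.orm import Session

        engine = create_engine("sqlite:///:memory:")  # stand-in for the real database

        class RequestScope:
            """Session-per-request: changes persist only if the controller opted in."""

            def __init__(self):
                self.session = Session(engine, autoflush=False)
                self.save_requested = False  # set by the controller's SaveChanges()

            def save_changes(self):
                self.save_requested = True

            def end_request(self):
                if self.save_requested:
                    self.session.commit()    # flush and keep the validated changes
                else:
                    self.session.rollback()  # silently discard invalid edits
                self.session.close()

    The design point carries over: with this shape, the default outcome at request end is "discard", so a forgotten SaveChanges() loses an update (easy to notice) instead of persisting an invalid one (hard to notice).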

    Read the article

  • MSBuild file for deployment process

    - by Lee Englestone
    I could do with some pointers, code examples or references that may help me do the following in an MSBuild file, to help speed up the deployment process. This scenario involves getting a developer's local version onto a development server:

    1. Increment the developer's local web application assembly version number.
    2. Publish the developer's local web application files somewhere.
    3. .rar the published files or folder in the format v[IncrementedAssemblyNumber].rar.
    4. Copy the .rar somewhere.
    5. Back up (.rar) the existing live website folder (located elsewhere) in the format Pre_v[IncrementedAssemblyNumber].rar.
    6. Move the backed-up .rar to a /Backup folder.
    7. Overwrite the development web files with the published local web files.

    Should be simple for all those MSBuild gurus out there. Like I said, answers or good, applicable links would be much appreciated. Also, I'm thinking of getting one of the MSBuild books; from what I can tell there are 2, possibly 3 contenders. I am not using TFS. Can anyone recommend a book for beginning MSBuild? Ideally from people that have read more than one book on the subject. Cheers, -- Lee
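
    For orientation only: the sequence itself is independent of MSBuild, so here is a hedged sketch of steps 2-7 in Python (the folder paths and the "rar" command-line tool on PATH are assumptions; an MSBuild script would express the same steps as targets and tasks):

        import shutil, subprocess
        from pathlib import Path

        PUBLISH_DIR = Path(r"C:\publish\webapp")           # hypothetical publish output
        LIVE_DIR    = Path(r"\\devserver\wwwroot\webapp")  # hypothetical dev server share
        BACKUP_DIR  = Path(r"\\devserver\Backup")

        def deploy(version):
            # Archive the freshly published files as v<version>.rar
            subprocess.check_call(
                ["rar", "a", str(BACKUP_DIR / f"v{version}.rar"), str(PUBLISH_DIR)])
            # Back up the existing live site as Pre_v<version>.rar into /Backup
            subprocess.check_call(
                ["rar", "a", str(BACKUP_DIR / f"Pre_v{version}.rar"), str(LIVE_DIR)])
            # Overwrite the dev web files with the published local web files
            shutil.copytree(PUBLISH_DIR, LIVE_DIR, dirs_exist_ok=True)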

    Read the article

  • Git force complete sync to master

    - by Jesse
    My workplace uses Subversion for source control, so I have been playing around with git-svn for the advantages of my own branches, committing as often as I want without touching the main repo, etc. Since my git-svn checkout is local, I have cloned it to a network share as well to act as a backup. My thinking is that if my desktop takes a dump I will at least have the repo on the network share to recover changes that I have not had a chance to dcommit yet.

    My workflow is to work from the desktop, make changes, commit, etc. At the end of the day I want to update the repo on the network share with all of my current changes. I had set up the repo on the network share using git clone repo_on_my_desktop and then updated it with git pull origin master. The problem I am running into is when I do a git rebase to squash multiple commits prior to dcommitting to the main SVN repository. When I do this, I get merge conflicts in the repo on the network share when I try to back up at night. Is there a way to simply sync entirely with the repository on my desktop without doing a new git clone each night?
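
    A possible direction, sketched in Python for consistency with the other examples here (the share path is hypothetical): a rebase rewrites history, so the nightly pull tries to merge two divergent histories. Treating the share as a disposable mirror -- fetch, then hard-reset onto the desktop's branch -- avoids the conflicts:

        import subprocess

        SHARE_REPO = r"\\server\share\repo-backup"  # clone whose origin is the desktop repo

        def mirror_sync():
            # Fetch the rewritten history from the desktop (origin) ...
            subprocess.check_call(["git", "fetch", "origin"], cwd=SHARE_REPO)
            # ... then discard the share's own history and match origin/master exactly
            subprocess.check_call(["git", "reset", "--hard", "origin/master"],
                                  cwd=SHARE_REPO)

    Since the share exists purely as a backup, losing its local history to reset --hard is the intended behaviour.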

    Read the article

  • google maps not working anymore at the drop of a hat

    - by Vordreller
    2 days ago, I was working on a project that involves Google Maps. The website showed the maps on the pages just fine. Now I come back to my workstation, nothing has changed, except for the fact that the Google maps won't show up anymore. The code is identical, nobody has touched my machine since I was gone, I've checked the HTML, everything is perfect and still this isn't working... The JavaScript console is giving no errors, and the code is identical to a backup I make every time I call it a day. 2 days ago it was working, today it isn't. I've even copied the source code, put it into an HTML file and tried that, but got the same result. I'm at a loss here. This is my code:

        <script type="text/javascript">
        //<![CDATA[
        var map;
        var directionsPanel;
        var directions;

        function initialize() {
            if (GBrowserIsCompatible()) {
                map = new GMap2(document.getElementById("map"));
                map.addControl(new GLargeMapControl());
                map.addControl(new GScaleControl());
                map.addControl(new GMapTypeControl());
                // the route description
                directionsPanel = document.getElementById("route");
                directions = new GDirections(map, directionsPanel);
                directions.load({COMMAND});
            }
        }
        //]]>
        </script>

    The {COMMAND} is something that the PHP template will parse. I've checked it, the format is 100% correct and, like I already said, the code now is identical to the backup; if it worked back then, it should work now. Did Google update their API overnight, and did a function that I use here become deprecated? I don't know what's going on here...

    Read the article

  • JavaScript snippet to read and output XML file on page load?

    - by Banderdash
    Hey guys, hoping I might get some help. I have an XML file here with a list of books, each with a unique id and a numeral value for whether they are checked out or not. I need a JavaScript snippet that requests the XML file after the page loads and displays the content of the XML file. The XML file looks like this:

        <?xml version="1.0" encoding="UTF-8" ?>
        <response>
          <library name="My Library">
            <book id="1" checked-out="1">
              <authors>
                <author>John Resig</author>
              </authors>
              <title>Pro JavaScript Techniques (Pro)</title>
              <isbn-10>1590597273</isbn-10>
            </book>
            <book id="2" checked-out="0">
              <authors>
                <author>Erich Gamma</author>
                <author>Richard Helm</author>
                <author>Ralph Johnson</author>
                <author>John M. Vlissides</author>
              </authors>
              <title>Design Patterns: Elements of Reusable Object-Oriented Software</title>
              <isbn-10>0201633612</isbn-10>
            </book>
            ...
          </library>
        </response>

    Would LOVE any and all help!
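
    For reference only: the traversal the snippet needs is the same in any language. A hedged sketch of the walk in Python (browser code would do the equivalent with XMLHttpRequest and the DOM):

        import xml.etree.ElementTree as ET

        def summarize(xml_text):
            root = ET.fromstring(xml_text)
            for book in root.iter("book"):
                authors = ", ".join(a.text for a in book.iter("author"))
                status = "checked out" if book.get("checked-out") == "1" else "available"
                print(f'{book.findtext("title")} by {authors} [{status}]')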

    Read the article

  • pdfmark for docinfo metadata in pdf is not accepting accented characters in Keywords or Subject

    - by rpilkey
    I am inserting metadata into PostScript files with a program, to be distilled to PDF with Adobe Distiller. I am using this code, which I grabbed from Thomas Merz's "Web Publishing with Acrobat/PDF":

        /pdfmark where {pop} {userdict /pdfmark /cleartomark load put} ifelse
        [ /Title (mot accenté)
          /Author (mot accenté)
          /Subject (mot accenté)
          /Keywords (mot accenté)
          /DOCINFO pdfmark

    When you look at the metadata, the accented characters turn into "?" in the Subject and Keywords fields, but not the Title and Author fields. The characters are the same ASCII 233 (é). I tried replacing them with octal encoding (\351), which came out the same (Title and Author okay, Subject and Keywords messed up). The file encoding is latin-1, with Unix EOLs. I found a mention on the Adobe forums, but the answer didn't make sense to me: http://forums.adobe.com/message/1165593 I changed the encoding to UTF-8, inserted the characters binarily (in Vim: <Ctrl-v>u00e9), no change. I tried inserting the BOM in a few places; it didn't work. This is with Acrobat Pro 9; I didn't notice this problem with Acrobat Pro 7. Does anybody know of a workaround to get the accented characters into ALL the metadata fields when modifying a PostScript file, or can you tell me if I'm doing it wrong? It seems weird that different fields would not accept the same bytes.
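
    One commonly cited workaround, offered here as an assumption rather than a verified fix for this exact Distiller version: PDF text strings may also be written as UTF-16BE prefixed with the byte-order mark 0xFE 0xFF, which sidesteps per-field encoding quirks. A small Python helper to produce the octal-escaped form for a pdfmark string:

        def pdfmark_string(s):
            # UTF-16BE with a BOM, each byte octal-escaped so the .ps stays 7-bit clean
            data = b"\xfe\xff" + s.encode("utf-16-be")
            return "(" + "".join("\\%03o" % b for b in data) + ")"

        print("/Keywords " + pdfmark_string("mot accenté"))

    The output line can then be substituted for the literal /Keywords (...) entry in the DOCINFO pdfmark.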

    Read the article

  • Use ini/appconfig file or sql server file to store user config?

    - by h2g2java
    I know that the preference for INI or app.config XML is their human readability. Let's say the user preferences stored for my app are hierarchical and number about a thousand items, and it would be really confusing for a user to edit an INI to change things anyway. I have always used a combination of INI and app.config. I am leaning towards using a SQL Server db file now. Every time the user changes a preference while using the app, it would be stored into the db file -- that's my line of thought. I am also thinking that such a config db file could move around with the app, just like an INI. Before I do that, any advice on:

    1. Whether there are any disadvantages to using a db file over INI or app.config.
    2. If a shop uses MySQL or Oracle, do you think your colleagues would lift a pro-MySQL or pro-Oracle eyebrow, questioning why you would use SQL Server technology in a MySQL or Oracle shop? I mean, I am just using it like an INI file or app.config anyway, right?
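
    For scale only: the "database file as portable config" idea, sketched with Python's built-in SQLite rather than a SQL Server file (an analogy, not the asker's stack -- both are single files that travel with the app):

        import sqlite3

        def open_prefs(path="prefs.db"):
            db = sqlite3.connect(path)  # one portable file, like an INI that scales
            db.execute("CREATE TABLE IF NOT EXISTS prefs (key TEXT PRIMARY KEY, value TEXT)")
            return db

        def set_pref(db, key, value):
            db.execute("INSERT OR REPLACE INTO prefs VALUES (?, ?)", (key, value))
            db.commit()  # persist immediately, as the question proposes

        def get_pref(db, key, default=None):
            row = db.execute("SELECT value FROM prefs WHERE key = ?", (key,)).fetchone()
            return row[0] if row else default

    Hierarchy can be encoded in the key (e.g. "ui/fonts/size"), which keeps a thousand items manageable without a user-edited text file.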

    Read the article

  • Using Tcl DSL in Python

    - by Sridhar Ratnakumar
    I have a bunch of Python functions. Let's call them foo, bar and baz. They accept a variable number of string arguments and do other sophisticated things (like accessing the network). I want the "user" (let's assume he is only familiar with Tcl) to write scripts in Tcl using those functions. Here's an example (taken from MacPorts) that the user might come up with:

        post-configure {
            if {[variant_isset universal]} {
                set conflags ""
                foreach arch ${configure.universal_archs} {
                    if {${arch} == "i386"} {append conflags "x86 "} else {
                        if {${arch} == "ppc64"} {append conflags "ppc_64 "} else {
                            append conflags ${arch} " "
                        }
                    }
                }
                set profiles [exec find ${worksrcpath} -name "*.pro"]
                foreach profile ${profiles} {
                    reinplace -E "s|^(CONFIG\[ \\t].*)|\\1 ${conflags}|" ${profile}
                    # Cures an isolated case
                    system "cd ${worksrcpath}/designer && \
                        ${qt_dir}/bin/qmake -spec ${qt_dir}/mkspecs/macx-g++ -macx \
                        -o Makefile python.pro"
                }
            }
        }

    Here, variant_isset, reinplace and so on (everything other than Tcl builtins) are implemented as Python functions. if, foreach, set, etc. are normal Tcl constructs. post-configure is a Python function that accepts, well, a Tcl code block that can later be executed (which in turn would obviously end up calling the above-mentioned Python "functions"). Is this possible to do in Python? If so, how? from Tkinter import *; root = Tk(); root.tk.eval('puts [array get tcl_platform]') is the only integration I know of, which is obviously very limited (not to mention the fact that it starts up an X11 server on the Mac).
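
    This does appear possible with the standard library alone (hedged sketch; behaviour on the asker's exact platform is an assumption): tkinter.Tcl() creates a bare Tcl interpreter without initializing Tk at all, so no X11 server is started, and Python callables can be registered as Tcl commands:

        import tkinter

        tcl = tkinter.Tcl()  # bare Tcl interpreter: no Tk window, no X11

        def variant_isset(name):
            # stand-in for the real Python implementation
            return "1" if name == "universal" else "0"

        # Register the Python function under the name Tcl scripts will call;
        # createcommand lives on the low-level interpreter object (tcl.tk)
        tcl.tk.createcommand("variant_isset", variant_isset)

        print(tcl.eval('if {[variant_isset universal]} {set r yes} else {set r no}'))

    post-configure itself could then be a Python function that stores the Tcl block as a string and later hands it to tcl.eval(); registering foo, bar and baz the same way makes them look like ordinary Tcl commands to the user.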

    Read the article

  • How to close all tabs in Safari using AppleScript?

    - by Form
    I have made a very simple AppleScript to close all tabs in Safari. The problem is, it works, but not completely. Here's the code:

        tell application "Safari"
            repeat with aWindow in windows
                repeat with aTab in tabs of aWindow
                    aTab close
                end repeat
            end repeat
        end tell

    I've also tried this script:

        tell application "Safari"
            repeat with i from 0 to the number of items in windows
                set aWindow to item i of windows
                repeat with j from 0 to the number of tabs in aWindow
                    set aTab to item j of tabs of aWindow
                    aTab close
                end repeat
            end repeat
        end tell

    ... but it does not work either. I tried that on my system (MacBook Pro, Jan 2008) as well as on a Mac Pro G5 under Tiger, and the script fails on both, albeit with a much less descriptive error on Tiger. The problem is that only a couple of tabs are closed. Running the script a few times closes a few tabs each time until none are left, but it always fails with the same error after closing a few tabs. Under Leopard I get an out-of-bounds error. Since I am using fast enumeration (not "repeat from 0 to number of items in windows") I don't see how I can get an out-of-bounds error with this...

    My goal is to use the Cocoa Scripting Bridge to close tabs in Safari from my Objective-C Cocoa application, but the Scripting Bridge fails in the same manner. The non-deletable tabs show as NULL in the Xcode debugger, while the other tabs are valid objects from which I can get values back (such as their title). In fact I tried with the Scripting Bridge first, then told myself why not try this directly in AppleScript, and I was surprised to see the same results. I must have a glaring omission or something in there... (seems like a bug in Safari AppleScript support to me... :S) I've used repeat loops and Obj-C 2.0 fast enumeration to iterate through collections before with zero problems, so I really don't see what's wrong here. Anyone can help? Thanks in advance!
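
    A plausible mechanism, shown abstractly in Python since the effect is language-independent (this assumes Safari re-indexes tabs as they close, which the out-of-bounds error suggests): deleting items while walking the collection forward shifts every remaining index, so the loop runs off the end. Iterating from the end avoids the shift:

        tabs = ["a", "b", "c", "d"]

        # Forward deletion: the list shrinks while the index keeps growing,
        # eventually indexing past the end -- like the AppleScript error.
        # for i in range(len(tabs)):
        #     del tabs[i]        # IndexError partway through

        # Deleting from the last index down leaves earlier indices untouched:
        for i in reversed(range(len(tabs))):
            del tabs[i]
        assert tabs == []

    In AppleScript terms the usual equivalents are iterating indices in reverse, or asking the application to close the whole batch in one statement rather than one tab at a time.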

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed, and only deploy the modified files (the backup is fine as it is for now).

    I'm using Mercurial as the VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and, from the output, carve out the changed files with grep/sed/etc. Two problems:

    1. I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here: if I try to parse the output of hg log, the variables will undergo word-splitting and all kinds of transformations.
    2. hg log doesn't tell me whether a file was modified, added or deleted. At the very least I'd need to differentiate between modified and deleted files.

    So, what would be the correct way to do this? I'm using yafc as the FTP client, in case it's needed, but I'm willing to switch.
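
    A direction worth checking (sketched in Python; the same calls work from bash): hg status, unlike hg log, prints exactly one line per file with a one-letter state code, which answers the modified/added/removed question directly:

        import subprocess

        def changed_files(rev1, rev2):
            # Output lines look like "M path", "A path", "R path"
            out = subprocess.check_output(
                ["hg", "status", "--rev", "%s:%s" % (rev1, rev2)], text=True)
            changes = {"M": [], "A": [], "R": []}
            for line in out.splitlines():
                code, path = line[0], line[2:]
                if code in changes:
                    changes[code].append(path)
            return changes

        # Upload changes["M"] + changes["A"]; delete changes["R"] on the FTP side.

    Because each path arrives on its own line, the word-splitting worry from problem 1 largely disappears.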

    Read the article

  • Powershell - Using variables in -Include filter

    - by TheD
    (again!) My final question for the night. I have a function which uses Get-ChildItem to work out the newest file within a folder. The reason being, I need to find the latest incremental backup in a folder and script an EXE to mount it. However, within this same folder there are multiple backup chains for different servers, i.e.:

        SERVER1_C_VOL_b001_i015.spi
        SERVER2_D_VOL_b001_i189.spi
        SERVER1_C_VOL_b002_i091.spi
        SERVER1_E_VOL_b002_i891.spi (this is the newest file created)

    I want only to look at SERVER1, look at only the C_VOL, and look at only b001 -- nothing else. I have all these separate components -- the drive letter, the server name, the b00X number -- stored in arrays. How then can I use Get-ChildItem with the -Include filter to only look at .spi, SERVER1, C_VOL and b001, given that I have all of these separate components in an array taken from a text file:

        Get-Content .\PostBackupCheck-TextFile.txt | Select-Object -First $i {
            $a = $_ -split ' '
            $locationArray += "$($a[0]):\$($a[1])\$($a[2])"
            $imageArray += "$($a[2])_$($a[3])_VOL_b00$($a[4])_i$($a[5]).spi"
        }

    I then go on to try and filter, and then get stuck:

        $latestIncremental = Get-ChildItem -Path ${shadowProtectDataLocation}\*.* -Include *.spi |
            Sort-Object LastAccessTime -Descending |
            Select-Object -First 1

    I have the .spi filtered, but how can I also include the C (for volume), the number for the b00X, and the server name? Thanks!
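
    The underlying trick is to assemble one wildcard pattern from the known parts before filtering; a hedged, language-agnostic sketch in Python (folder and component names are examples only):

        from pathlib import Path

        def newest_incremental(folder, server, vol, base):
            # Only the incremental number stays a wildcard:
            # e.g. SERVER1_C_VOL_b001_i*.spi
            pattern = f"{server}_{vol}_VOL_{base}_i*.spi"
            candidates = Path(folder).glob(pattern)
            return max(candidates, key=lambda p: p.stat().st_mtime, default=None)

        print(newest_incremental(r"D:\Backups", "SERVER1", "C", "b001"))

    In PowerShell the same composed string ("${server}_${vol}_VOL_${base}_i*.spi") can be fed to -Include or -Filter in place of the bare *.spi.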

    Read the article

  • Wrong label for a nodereference in Drupal content-type

    - by MPD
    We have a content type built using CCK. One of the fields is a node reference, and the node picker uses a view to build the options. A few days ago everything was working well; today, it looks like all node reference fields that use views to populate their selection options are displaying the wrong label. Every single label in the options is "A", but the actual node number is correct. The form actually works; just the labels are incorrect.

    We have tried just about every combination of edit/save, disable/enable, reboot, clear cache, clone the view, rebuild the view, new view, etc., but we still have a big list of As. If we create a brand new content type with a brand new node reference field, we get the same problem. Through some backup/restore exercises, we have determined that the problem is actually in the database and not in the code. We can restore our last good backup, but we would lose a decent amount of work we have put into other parts of the database. We enabled MySQL query logging, and the view is actually being called properly, but we cannot track down where the problem creeps in after that (unraveling the CCK / Views / Drupal plumbing is a challenge). The install was built with the latest stable versions as of April. The problems referred to in http://drupal.org/node/624422 are similar, but our code versions include the patches mentioned. Any ideas would be appreciated. Thanks.

    Read the article

  • PHP Mailer Class - Securing Email Credentials

    - by Alan A
    I am using the PHP mailer class to send email via my scripts. The structure is as follows:

        $mail = new PHPMailer;
        $mail->IsSMTP();                          // Set mailer to use SMTP
        $mail->Host = 'myserver.com';             // Specify main and backup server
        $mail->SMTPAuth = true;                   // Enable SMTP authentication
        $mail->Username = '[email protected]';   // SMTP username
        $mail->Password = 'user123';              // SMTP password
        $mail->SMTPSecure = 'pass123';

    It seems to me to be a bit of a security hole having the mailbox credentials in plain view, so I thought I might put these in an external file outside of the web root. My question is how I would then assign the $mail object these values. I of course know how to use include and/or require... would it simply be a case of:

        $mail->IsSMTP();                          // Set mailer to use SMTP
        $mail->Host = 'myserver.com';             // Specify main and backup server
        $mail->SMTPAuth = true;                   // Enable SMTP authentication
        include '../locationOutsideWebroot/emailCredentials.php';
        $mail->SMTPSecure = 'pass123';

    Then emailCredentials.php:

        <?php
        $mail->Username = '[email protected]';
        $mail->Password = 'user123';
        ?>

    Would this be sufficient and secure enough? Thanks, Alan.

    Read the article

  • MySQL problem reconnecting (mysqld.exe) keeps giving error...

    - by Ronedog
    Need some guidance figuring out what went wrong. I've been using MySQL and phpMyAdmin for just under a year on my home computer while I develop a web app. 3 days ago I updated my Windows Vista machine with all the "wonderful" Microsoft updates, security patches, etc... and now it's broke. I tried uninstalling all the upgrades, but there are 4 of them I can't uninstall because Microsoft says they're "operating system" updates and can't be uninstalled. My system is: Windows Vista, PHP 5+, MySQL 5.1, Apache 2+.

    I can run my web app and it queries the database without any problems. However, when I run phpMyAdmin to get into the database I get an error: "mysqld.exe has stopped working", and phpMyAdmin crashes. I tried going to the command line for MySQL to do a mysqldump to back up my database and it gives me an error (2013, could not connect to the server). If I restart the computer the web app will work again. Basically, PHP can query the database, but if I try to get at the database through phpMyAdmin or the command prompt, the mysqld.exe error occurs and blows MySQL out.

    Any ideas what's going on here? Any ideas how to get around this to back up the db, so I can reinstall MySQL? I'm really lost where to start. I don't really know if the updates caused the problem, or if the 4 updates that can't be uninstalled are really the problem. Any tips will be appreciated. Thanks.

    Read the article

  • Problem deleting .svn directories on Windows XP

    - by John L
    I don't seem to have this problem on my home laptop with Windows XP, but then I don't do much work there. On my work laptop, with Windows XP, I have a problem deleting directories when it has directories that contain .svn directories. When it does eventually work, I have the same issue emptying the Recycle bin. The pop-up window says "Cannot remove folder text-base: The directory is not empty" or prop-base or other folder under .svn This continued to happen after I changed config of TortoiseSVN to stop the TSVN cache process from running and after a reboot of the system. Multiple tries will eventually get it done. But it is a huge annoyance because there are other issues I'm trying to fix, so I'm hoping it is related. 'Connected Backup PC' also runs on the laptop and the real problem is that cygwin commands don't always work. So I keep thinking the dot files and dot directories have something to do with both problems and/or the backup or other process scanning the directories is doing it. But I've run out of ideas of what to try or how to identify the problem further.

    Read the article

  • measure the response time of a link

    - by Ahoura Ghotbi
    I am trying to create a simple load-balance script, and I was wondering if it is possible to find the response time of a server live. By that I mean: is it possible to measure how long it takes for a server to respond after the request has been sent out? What I am trying to do is fairly simple. I want to send a request to a link/server and do a countdown; if the server takes more than 5 seconds to reply, I would like to fall back on the backup server. Note that it doesn't have to be in pure PHP -- I wouldn't mind using other languages such as JavaScript, C/C++, or ASP, but I prefer to do it in PHP. If it is possible, could you just point me in the right direction so I can read up on it?

    Clarification: What I want to do is not to download a file and see how long it took. My servers have high load, and it takes a while for them to respond when you click on a file to download. What I want to measure is the time it takes the server to respond (in this situation, the time it takes the server to respond and allow the user to download the file), and if it takes longer than x seconds, it should fall back on a backup server.
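
    A minimal sketch of the idea, in Python for consistency with the other examples here (the server URLs are placeholders; PHP would express the same thing with cURL's timeout options): issue a lightweight HEAD request so only time-to-first-response is measured, not the download, and let a timeout trigger the fallback:

        import time
        import requests

        PRIMARY = "http://primary.example.com/file"   # hypothetical
        BACKUP  = "http://backup.example.com/file"    # hypothetical

        def pick_server(threshold=5.0):
            try:
                start = time.monotonic()
                # HEAD fetches headers only: measures responsiveness, not transfer
                requests.head(PRIMARY, timeout=threshold)
                print("primary answered in %.2fs" % (time.monotonic() - start))
                return PRIMARY
            except requests.RequestException:
                return BACKUP   # no answer within the threshold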

    Read the article

  • looping array inside list in c#

    - by 3yoon af
    I have a list that contains multiple arrays; each array has 2 indexes. First, I want to loop over the list; then I want to loop over the array inside the list. How can I do that? I tried this way, but it doesn't work:

        foreach (string[] s in ArrangList1)
        {
            int freq1 = int.Parse(s[1]);
            foreach (string[] s1 in ArrangList)
            {
                int freq2 = int.Parse(s1[1]);
                if (freq1 < freq2)
                {
                    backup = s;
                    index1 = ArrangList1.IndexOf(s);
                    index2 = ArrangList.IndexOf(s1);
                    ArrangList[index1] = s1;
                    ArrangList[index2] = s;
                }
                backup = null;
            }
        }

    It gives me an error on line 4. I tried to do the loop another way, but I don't know how to continue:

        for (int i = 0; i < ArrangList1.Count; i++)
        {
            for (int j = 0; j < ArrangList1[i].Length; j++)
            {
                ArrangList1[i][1];
            }
        }

    I use C#. Can someone help me, please?

    Read the article

  • Perl Capture and Modify STDERR before it prints to a file

    - by MicrobicTiger
    I have a Perl script which performs multiple external commands and prints the output from STDERR and STDOUT to a logfile, along with a series of my own print statements that document the process. My problem is that STDERR repeats nearly identical lines, as in the example below. I'd like to capture this before it prints, and replace the whole run with just the final result for each of the commands I run.

        blocks evaluated : 0
        blocks evaluated : 10000
        blocks evaluated : 20000
        blocks evaluated : 30000
        ...
        blocks evaluated : 3420000
        blocks evaluated : 3428776

    Here's how I'm redirecting STDOUT and STDERR:

        my $logfile = "Logfile.log";    # log file name

        #--- Open log file for append if specified ---
        if ( $logfile ) {
            open ( OLDOUT, ">&", STDOUT ) or die "ERROR: Can't backup STDOUT location.\n";
            close STDOUT;
            open ( STDOUT, ">", $logfile ) or die "ERROR: Logfile [$logfile] cannot be opened.\n";
        }
        if ( $logfile ) {
            open ( OLDERR, ">&", STDERR ) or die "ERROR: Can't backup STDERR location.\n";
            close STDERR;
            open ( STDERR, '>&STDOUT' ) or die "ERROR: failed to pass STDERR to STDOUT.\n";
        }

    and closing them:

        close STDERR;
        open ( STDERR, ">&", OLDERR ) or die "ERROR: Can't fix that first thing you broke!\n";
        close STDOUT;
        open ( STDOUT, ">&", OLDOUT ) or die "ERROR: Can't fix that other thing you broke!\n";

    How do I get at STDERR while each print is occurring, to do the replacement? Or how do I prevent a line from printing if it isn't the last of the batch? Many thanks in advance.
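
    The filtering itself is a small state machine -- hold the latest progress line, emit it only when something else arrives. A hedged sketch as a standalone Python filter (the regex matches the sample lines above; Perl could do the same inline with a tied handle or by piping STDERR through a child process):

        import re
        import sys

        PROGRESS = re.compile(r"^blocks evaluated\s*:")

        def squash_progress(lines):
            pending = None
            for line in lines:
                if PROGRESS.match(line):
                    pending = line            # overwrite: only the last survives
                else:
                    if pending is not None:
                        yield pending         # flush the final count of the run
                        pending = None
                    yield line
            if pending is not None:
                yield pending

        sys.stdout.writelines(squash_progress(sys.stdin))

    Used as a pipe (command 2>&1 | python squash.py >> logfile), it turns each long progress run into the single final "blocks evaluated" line.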

    Read the article

  • Images won't load if they are of high size

    - by Fahim Parkar
    I have created a web application using JSF 2.0 and MySQL. I am storing images in the DB using MEDIUMBLOB. When I try to load an image, I am able to see it. However, if the image size is big (1 MB or more), I see only half or 3/4 of the image in the browser. Any idea how to overcome this issue? Do I need to set any variable in JSF or MySQL? I know I should have saved the images on disk instead of in the DB, but this was a client requirement: the client wants to back up the data and provide it to someone else, and doesn't want to back up the DB and the images separately.

    Edit 1: Do I need to set any variables on MySQL, like query_cache?

    Edit 2: When I download the same image and use the code below, it works perfectly.

        <h:graphicImage value="images/myImage4.png" width="50%" />

    Edit 3: The code is as below.

        <h:graphicImage value="DisplayImage?mainID=drawing" />

    DisplayImage.java:

        String imgLen = rs1.getString(1);
        int len = imgLen.length();
        byte[] rb = new byte[len];
        InputStream readImg = rs1.getBinaryStream(1);
        InputStream inputStream = readImg;
        int index = readImg.read(rb, 0, len);
        response.reset();
        response.setHeader("Content-Length", String.valueOf(len));
        response.setHeader("Content-disposition", "inline;filename=/file.png");
        response.setContentType("image/png");
        response.getOutputStream().write(rb, 0, len);
        response.getOutputStream().flush();

    When I print len I get len=1548432.

    Read the article

  • C# Script to get modified date from URL directory

    - by Jynx
    Hello, I am fairly new to C# and was hoping for some assistance. I currently use Visual Studio 2008. What I am wanting to do is the following: I have a server (\\backupserv) that runs a RoboCopy script nightly to back up directories from 18 other servers. These directories are then copied down to a backup share in directories of their own. For example, it copies down "Dir1", "Dir2" and "Dir3" from Server1 into \\backupserv\backups\Server1, each into its own directory (\\backupserv\backups\Server1\Dir1, \\backupserv\backups\Server1\Dir2, and \\backupserv\backups\Server1\Dir3). It does this for all 18 servers nightly between 12am and 6am. RoboCopy runs via scheduled task. A log file is created in \\backupserv\backups\log and is named server1-dir1.log, server1-dir2.log, etc.

    What I am wanting to accomplish in C# is a report showing the modified date of each text log file. To do this I need to browse the \\backupserv\backups\log directory, determine the modified date of each log, and display a report (HTML preferred). Along with the modified date I will be showing more information, but that comes later. Again, I am fairly new to C#, so please be gentle. I was referred here by another programmer and was told I would get some assistance. Thanks in advance. If I have missed any detail please let me know and I will do my best to answer.
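
    For shape only: the report logic (scan the log folder, read each file's modified date, emit an HTML table) is compact in any language; a hedged Python sketch, with the path taken from the question:

        from pathlib import Path
        from datetime import datetime

        LOG_DIR = Path(r"\\backupserv\backups\log")

        def report_html():
            rows = []
            for f in sorted(LOG_DIR.glob("*.log")):
                modified = datetime.fromtimestamp(f.stat().st_mtime)
                rows.append(f"<tr><td>{f.name}</td>"
                            f"<td>{modified:%Y-%m-%d %H:%M}</td></tr>")
            return ("<table><tr><th>Log file</th><th>Last modified</th></tr>"
                    + "".join(rows) + "</table>")

        Path("report.html").write_text(report_html())

    The C# equivalents are DirectoryInfo.GetFiles("*.log") and FileInfo.LastWriteTime, with the same loop shape.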

    Read the article

  • text in div not going to next line

    - by Kia Dull
    For some reason the text in my div doesn't go to the next line. I've tried several different CSS properties, which don't seem to work; word-wrap: break-word just jumbles the letters. What I want is that when there is an extra word, it goes down to the next line like it's supposed to. Here is my code. This is the div it's in:

        #top7 {
            width: 150px;
            height: auto;
            margin: 5px;
            display: block;
            float: left;
            word-wrap: break-word;
        }

    The text that it's in:

        #p6 {
            font-family: Myriad Pro;
            margin: 1px;
            font-size: 22px;
            background-color: #540f45;
            padding: 5px 5px 3px 4px;
            margin: 4px;
        }

        a {
            text-decoration: none;
            color: white;
            text-align: right;
            font-family: Myriad Pro;
        }

    Here is the PHP that retrieves the data from the database:

        <p id='p6'><?php echo "<a href='' "</a>"; ?></p>

    This is all wrapped in these two ids:

        body {
            background: #603e4f;
            display: block;
        }

        #foursquare {
            background-color: #603e4f;
            width: 290px;
            display: block;
            position: absolute;
        }

    Read the article

  • CodePlex Daily Summary for Tuesday, May 31, 2011

    Popular Releases:

    - Nearforums - ASP.NET MVC forum engine: Nearforums v6.0: Version 6.0 of Nearforums, the ASP.NET MVC forum engine, containing new features: authentication using the Membership Provider for SQL Server and MySQL; spam prevention (flood control); moderation (flag messages); content management (create pages such as about us/contact/texts through web administration); allow Nearforums to run as an IIS sub-app; migrated Facebook Connect to OAuth 2.0. Visit the project Roadmap for more details.
    - NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.8b: Changes: fixes critical issue 15922 (AccessViolationException) once and for all; the update is strongly recommended. Includes: runtime binaries and source code for .NET Framework v2.0, v3.0, v3.5 and v4.0; tutorials in C# and VB.NET (COM proxy management, events, etc.); examples in C# and VB.NET (Excel, Word, Outlook, PowerPoint, Access); COM add-in examples in C# and VB...
    - Facebook Graph Toolkit: Facebook Graph Toolkit 1.5.4186: Updates the API in response to Facebook's recent change of policy: all Graph API calls accessing feeds or posts must provide an AccessToken.
    - SharePoint Farm Poster: SharePoint Farm Poster: SharePoint Farm Poster is generated by a PowerShell script. Run this script under the farm admin account. After downloading, unblock the file in the Property window. The current version is beta: v0.3.0.
    - VCC: Latest build, v2.1.40530.0: Automatic drop of the latest build.
    - Serviio for Windows Home Server: Beta Release 0.5.2.0: Ready for widespread beta. Synchronized the build number to the Serviio version to avoid confusion.
    - AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta4: (release notes in Chinese, mis-encoded in the original; the legible fragments mention 32/64-bit Windows XP/Vista/7, a .NET Framework 2.0 x86/x64 requirement for Windows XP, Bilibili.us support, and a 2011-5-31 date)
    - Terraria Map Generator: TerrariaMapTool 1.0.0.2 Beta: Version 1.0.0.2 beta release: now has a GUI; draws backgrounds (may still not be exact); hopefully fixed support on DirectX 9 machines.
    - CodeCopy Auto Code Converter: Code Copy v0.1: Full add-in, setup project source code and setup file.
    - EnhSim: EnhSim 2.4.5 ALPHA: This release supports WoW patch 4.1 at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Added in the T12 s...
    - TerrariViewer: TerrariViewer v2.4.1: Added a Piggy Bank editor and fixed some minor bugs.
    - Kooboo CMS: Kooboo CMS 3.02: What is new in Kooboo CMS 3.02: the most important update in this version is the Kooboo site builder, a unique and creative web design tool; design a professional website and export it to Kooboo CMS (see http://www.sitekin.com). Added version control on View, Layout and other elements. Added user CMS language selection, so users can select a language to use on their CMS backend. Added a user profile provider; you can now store website user information in a SQL database (previously it was stored in XML...).
    - mojoPortal: 2.3.6.6: See the release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2366-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code. To download the source code, see the Source Code tab. I recommend getting the latest source code using TortoiseHg; you can get the source code corresponding to this release here.
    - Terraria World Creator: Terraria World Creator: Version 1.01. Fixed a bug that would cause the application to crash. Renamed the application.
    - VidCoder: 0.9.0: New startup UI for one-click scanning of discs or opening a file/folder. New seek bar on the preview window to make switching previews easier (you can click anywhere on the bar). Added gradient backgrounds to the main window to visually group the sections. Added Open Video File and Open Video Folder options to the File menu. Moved the preview button to be in line with the other control buttons. Fixed settings getting in a weird state if they were saved without an output folder being chos...
    - General Media Access WebService: 0.2.0.0 Beta: Updated GMA release with sorting/ordering mechanisms. Several bug fixes.
    - Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-05-26: Alternatively, you can install the Sample Browser or the Sample Browser VS extension, and download the code samples from Sample Browser. Improved and newly added examples: for an up-to-date code sample index, please refer to the All-In-One Code Framework Sample Catalog. New samples for Dynamics: CSDynamicsNAVWebServices, a code sample showing syntax for calling Dynamics NAV Web Services (owner: Lars Lohndorf-Larsen). New samples for WPF: CSWPFDataGridCustomS...
    - Terraria World Viewer: Version 1.1: Update May 26th. Added chest filtering, which allows only chests containing certain items to have their symbol drawn (it's under the advanced settings tab). GUI elements (checkboxes, etc.) are persistent between uses of the application. Beta worlds (i.e. Release #38) will work properly. Symbols can be enabled or disabled on a per-symbol basis. Added a Chest Information tab, which is just a dump of the current chest information. Meteorite is now visible as a bright magenta pink. Application defaults to ...
    - MVC Controls Toolkit: Mvc Controls Toolkit 1.1 RC: Added: compatibility with jQuery 1.6.1; rendering of enumerables with images and/or customizable strings; improved the client-side template engine; added new parameters to the template definition binding; all new Knockout bindings helpers have been fully implemented; added a new overload for defining the client-side ViewModel. The SetTheme method has the option to store the theme in a permanent cookie. If no CSS class is provided for the watermark of a TypedTextBox, the watermark class of the current t...
    - patterns & practices: Project Silk: Project Silk - Documentation Only Drop - May 24: To get the latest code, please see the previous drop here. Guidance chapters ready for review: the following chapters (provided in CHM or PDF format) are ready for community review, and our team very much appreciates your feedback and technical review. All documentation feedback should be posted in the Issue Tracker; if required, a document can be attached along with the feedback. Chapters: Architecture; jQuery UI Widgets; Server-Side Implementation; Security; Unit Testing Web Applications; Widget Q...

    New Projects:

    - #liveDB: liveDB is an in-memory database engine for Microsoft .NET providing full ACID support, lightning-fast performance and offering a significant reduction of development and operational costs. liveDB is built on Live Domain Technology(TM).
    - 8 hours: 8hours. Private study.
    - ABox2d: A port of the Box2D game engine, done as an exercise to study how the game engine works.
    - ADempiere.NET: If I have enough time and support, we will translate this into .NET.
    - Almonaster: Almonaster is a turn-based multi-player war game. It is free for all players and comes with absolutely no warranty. The game is fully web-based and requires no downloads, JavaScript, Java or ActiveX controls.
    - ASPone API: The ASPone partner API (application programming interface) is an interface created for and intended for partners of ASPone, s.r.o. Using this API you can automate a number of operations that would be more time-consuming through the web interface or that require human interaction. The API lets you automate many operations related to managing domains, domain contacts, webhosting, databases, servers and much more. To simplify working with the API, two sample...
    - CodeCopy Auto Code Converter: This add-in project converts C# and VB.NET code in Visual Studio.
    - drms: Data Resource Management System.
    - Drop Down CheckBoxList control (DropDownCheckBoxes): DropDownCheckBoxes is an ASP.NET server control directly inheriting from the standard ASP.NET CheckBoxList control, and it fully supports the parent's API (except members responsible for rendering and styling). Thus in most cases a CheckBoxList control can simply be replaced with DropDownCheckBoxes with no need to change any data-binding code or event handlers. In its normal state the control is displayed as a select (DropDownList) control; clicking the expand button shows a list with check boxes. When the se...
    - Extended Registration module for Orchard CMS: This project has a dependency on the Contrib.Profile module. With this module enabled, users must fill out any parts you add to the User content item on the registration page. Ideal if you require additional information from your users.
    - GreenWay: Car navigation software.
    - Host Profiles: Host Profiles is a small tool to control, switch and manage the computer's hosts file. The hosts file is located in "c:\windows\system32\drivers\etc\hosts".
    - HRM System MVVM sample code: This is the sample WPF MVVM application that I've described in my blog posts. I hope to give you a clear view of MVVM and other commonly used patterns.
    - Mi Game Library: Ever wanted to store all the games you own in one place that you can later browse and search, along with your own personal wish list!
    - Micorrhiza: Micorrhiza is a client-server solution written in C# for voice and video communications between users in local and global networks.
    - MPlayer.NET for Windows Forms & WPF: MPlayer.NET is a wrapper around the MPlayer executable. It's developed on the .NET platform and includes visual controls for both Windows Forms and WPF applications.
    - MyGet - NuGet-as-a-Service: This project is the source for http://myget.org. MyGet offers you the possibility to create your own private, filtered NuGet feed for use in the Visual Studio Package Manager. It can contain packages from the official NuGet feed as well as your private packages, hosted on MyGet.
    - MZExtensions: A collection of handy C# extension methods.
    - NCAds: NCads.
    - NetSync: Universal file synchronization agent.
    - OLE 1C7.7: (description in Russian, mis-encoded in the original; it concerns working with 1C 7.7 through OLE)
    - Pear 2.5: Pear 2.5 is a web browser with MetroUI, as known from WP7. Pear 2.5's graphics are made up entirely with MetroUI and look stunning while you browse. This version has 3 builds: 2 alpha builds and 1 gamma delta (beta) build. It's developed in VB.NET, which is the easiest.
    - ProjectOne: ProjectOne is an open community information-sharing website regarding realty as its primary source.
    - russomi: russomi.
    - Sopaco Server Foundation 1.x: The one earlier version of my server infrastructure (SSF, Sopaco Server Foundation 1.x, owned by ??). The network layer is based on MINA; message meta in 1.x is hard-coded to a 6-byte message header like this: struct NetworkMessageHeader { short msgId; int msgLength; } struct NetworkM...
    - Tray Timer: A simple timer/stopwatch which runs from the system tray. I started it as a hobby learning project to understand the Win32 API. Now open-sourcing it to get more input about it, and at the same time it may prove helpful to others.
    - VENSOFT DIPERCAX: Final project for the Projects II course at the Universidad Privada del Norte.
    - Windows Phone Blog Menu: A Silverlight navigation control that looks like a Windows Phone 7. The live tiles are links to websites. Use this control on your blog or website to show your love for WP7. It is a creative way to link to external sites you are interested in.

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume and unstoppable growth of information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running.

    One solution is the use of DB partitions for your content repository, but what are the implications of this? Can it fit your business requirements? Well, yes. It's up to you how you will manage it; you just need a good plan. During your "storage brainstorm plan", keep in mind what you need. Do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so you may want to consider the revision creation date instead. Your plan can also have complex rules, such as per document type, per a custom metadata field like department, or a hybrid of per date, per DocType and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks, different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, separating the "hot" content from the legacy and easily matching your compliance requirements.

    If you think of partitioning by revision, the simple way is to consider dID, the sequential unique ID for every content item created in WebCenter Content, or dLastModified, the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles.

    Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column (you could use dID, or a join with other tables for other metadata such as dDocType, security, etc.). The test scenario below covers:

    - Previously existing data in the JDBC Storage, to be migrated to the new partitioned JDBC Storage
    - Partition by date
    - Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
    - Deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (may include some customizations that do not affect the test scenario)

    For the test case you need some data stored using JDBC Storage to act as the "legacy" data. If you have not done this before, just create a storage rule pointed to the JDBC Storage, enable the StorageRule metadata field in the UI, and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I can't forget to say that this is just a test, and you should take a proper backup of your environment first. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed:

        REM Grant privileges required for online redefinition.
        GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
        GRANT ALTER ANY TABLE TO TESTS_OCS;
        GRANT DROP ANY TABLE TO TESTS_OCS;
        GRANT LOCK ANY TABLE TO TESTS_OCS;
        GRANT CREATE ANY TABLE TO TESTS_OCS;
        GRANT SELECT ANY TABLE TO TESTS_OCS;
        REM Privileges required to perform cloning of dependent objects.
        GRANT CREATE ANY TRIGGER TO TESTS_OCS;
        GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. This last one will be partitioned automatically, using 3 tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any rule that you choose. Tablespaces for the test scenario:

        CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the actual FileStorage table:

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether it is possible to execute the redefinition process:

        EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. Create a partitioned interim FileStorage table; you need a new table with the partition information to act as the interim table:

        CREATE TABLE FILESTORAGE_Part (
          DID NUMBER(*,0) NOT NULL ENABLE,
          DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
          DLASTMODIFIED TIMESTAMP (6),
          DFILESIZE NUMBER(*,0),
          DISDELETED VARCHAR2(1 CHAR),
          BFILEDATA BLOB
        )
        LOB (BFILEDATA) STORE AS SECUREFILE (
          ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
        )
        PARTITION BY RANGE (DLASTMODIFIED)
        INTERVAL (NUMTODSINTERVAL(1,'DAY'))
        STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
        (
          PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_LEGACY
            LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH),
          PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY1
            LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS),
          PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY2
            LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS),
          PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY3
            LOB (BFILEDATA) STORE AS SECUREFILE (TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS)
        );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions have been created yet. Start the redefinition process:

        BEGIN
          DBMS_REDEFINITION.START_REDEF_TABLE(
            uname        => 'TESTS_OCS'
           ,orig_table   => 'FileStorage'
           ,int_table    => 'FileStorage_PART'
           ,col_mapping  => NULL
           ,options_flag => DBMS_REDEFINITION.CONS_USE_PK);
        END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:

        SELECT * FROM v$sesstat WHERE sid = 1;

    Copy dependent objects:

        DECLARE
          redefinition_errors PLS_INTEGER := 0;
        BEGIN
          DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
            uname            => 'TESTS_OCS'
           ,orig_table       => 'FileStorage'
           ,int_table        => 'FileStorage_PART'
           ,copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS
           ,copy_triggers    => TRUE
           ,copy_constraints => TRUE
           ,copy_privileges  => TRUE
           ,ignore_errors    => TRUE
           ,num_errors       => redefinition_errors
           ,copy_statistics  => FALSE
           ,copy_mvlog       => FALSE);
          IF (redefinition_errors > 0) THEN
            DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
          END IF;
        END;

    With the DBA user, verify that there are no errors (note that this will show 2 lines related to the constraints; this is expected):

        SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    Synchronize the interim table FileStorage_PART:

        BEGIN
          DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
            uname      => 'TESTS_OCS',
            orig_table => 'FileStorage',
            int_table  => 'FileStorage_PART');
        END;

    Gather statistics on the new table:

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

        BEGIN
          DBMS_REDEFINITION.FINISH_REDEF_TABLE(
            uname      => 'TESTS_OCS',
            orig_table => 'FileStorage',
            int_table  => 'FileStorage_PART');
        END;

    During the execution, the FileStorage table is locked in exclusive mode until the operation finishes. After the last command, the FileStorage table is partitioned. If you have content outside the fixed ranges, you should see the new partitions created automatically, so no error occurs if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

        DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use:

        SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

        SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to check the partitions (note that you need to run these as a DBA user):

        SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
        SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same ID is there, then click on the web rendition of your test file and check that the file is created and opens; this means everything is working.

    The redefinition process can be repeated many times, which allows you to test what the better layout is, over and over again. Now some interesting maintenance actions related to the partitions:

    - Making a tablespace read-only. There are no issues viewing content, since WebCenter Content does not alter the revisions. But when you try to delete a content item that sits in a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions: if a document lives in a "read only" repository, delete the metadata and mark the document to be deleted on the next DB maintenance, like a new redefinition.
    - Taking a tablespace offline, for archiving purposes or any other reason. When you try to open a document stored in this tablespace, you receive an error that the content could not be retrieved, but the other online tablespaces are not affected; the same happens when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be put online again.
    - Moving some legacy content to an offline repository (table), using the Exchange option to move the content from one partition to an empty non-partitioned table such as FileStorage_LEGACY. Note that this option removes the rows from FileStorage, and the stored content can no longer be opened. You always need to keep the indexes and constraints in mind.
    - A redefinition separating the original content (vault) from the renditions, while also separating by date at the same time. This could be an option for DAM environments that want a special place for the renditions, putting the original files in storage with less performance. The process is the same; you just need to change the script of the interim table to use composite partitioning. It will be something like:

        CREATE TABLE FILESTORAGE_RenditionPart (
          DID NUMBER(*,0) NOT NULL ENABLE,
          DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
          DLASTMODIFIED TIMESTAMP (6),
          DFILESIZE NUMBER(*,0),
          DISDELETED VARCHAR2(1 CHAR),
          BFILEDATA BLOB
        )
        LOB (BFILEDATA) STORE AS SECUREFILE (
          ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
        )
        PARTITION BY LIST (DRENDITIONID)
        SUBPARTITION BY RANGE (DLASTMODIFIED)
        (
          PARTITION Vault VALUES ('primaryFile') (
            SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
          ),
          PARTITION WebLayout VALUES ('webViewableFile') (
            SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
          ),
          PARTITION Special VALUES ('Special') (
            SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
          )
        ) ENABLE ROW MOVEMENT;

    The next post related to partitioned repositories will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or if you have a use case not listed, leave a comment. Cross-posted on the blog.ContentrA.com

    Read the article
