Search Results

Search found 41250 results on 1650 pages for 'oracle database'.

Page 507/1650 | < Previous Page | 503 504 505 506 507 508 509 510 511 512 513 514  | Next Page >

  • Database Optimization techniques for amateurs.

    - by Zombies
    Can we get a list of basic optimization techniques going (anything from modeling to querying, creating indexes, and views to query optimization)? It would be nice to have a list of these, one technique per answer. As a hobbyist I would find this to be very useful, thanks. And for the sake of not being too vague, let's say we are using a mainstream DB such as MySQL or Oracle, and that the DB will contain 500,000-1m or so records across ~10 tables, some with foreign key constraints, all using the most typical storage engines (e.g. InnoDB for MySQL). And of course, the basics such as PKs are defined, as well as FK constraints.
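
    To give the flavour of the "one technique per answer" idea: the single most common win at this scale is indexing the columns you actually filter and join on, then checking the plan. A minimal MySQL sketch — the table and column names here are made up for illustration, not from the question:

      -- Hypothetical orders/customers tables; index the filter column plus the join column,
      -- then compare the execution plan before and after with EXPLAIN.
      CREATE INDEX idx_orders_created_at ON orders (created_at, customer_id);

      EXPLAIN
      SELECT c.name, COUNT(*) AS order_count
      FROM customers c
      JOIN orders o ON o.customer_id = c.id
      WHERE o.created_at >= '2010-01-01'
      GROUP BY c.name;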

    Read the article

  • Oracle SQL Developer Data Modeler: What Tables Aren’t In At Least One SubView?

    - by thatjeffsmith
    Organizing your data model makes the information easier to consume. One of the organizational tools provided by Oracle SQL Developer Data Modeler is the ‘SubView.’ In a nutshell, a SubView is a subset of your model. The Challenge: I’ve just created a model which represents my entire ____________ application. We’ll call it ‘residential lending.’ Instead of having all 100+ tables in a single model diagram, I want to break out the tables by module, e.g. appraisals, credit reports, work histories, customers, etc. I’ve spent several hours breaking out the tables to one or more SubViews, but I think i may have missed a few. Is there an easy way to see what tables aren’t in at least ONE subview? The Answer Yes, mostly. The mostly comes about from the way I’m going to accomplish this task. It involves querying the SQL Developer Data Modeler Reporting Schema. So if you don’t have the Reporting Schema setup, you’ll need to do so. Got it? Good, let’s proceed. Before you start querying your Reporting Schema, you might need a data model for the actual reporting schema…meta-meta data! You could reverse engineer the data modeler reporting schema to a new data model, or you could just reference the PDFs in \datamodeler\reports\Reporting Schema diagrams directory. Here’s a hint, it’s THIS one The Query Well, it’s actually going to be at least 2 queries. We need to get a list of distinct designs stored in your repository. For giggles, I’m going to get a listing including each version of the model. So I can query based on design and version, or in this case, timestamp of when it was added to the repository. We’ll get that from the DMRS_DESIGNS table: SELECT DISTINCT design_name, design_ovid, date_published FROM DMRS_designs Then I’m going to feed the design_ovid, down to a subquery for my child report. select name, count(distinct diagram_id) from DMRS_DIAGRAM_ELEMENTS where design_ovid = :dESIGN_OVID and type = 'Table' group by name having count(distinct diagram_id) < 2 order by count(distinct diagram_id) desc Each diagram element has an entry in this table, so I need to filter on type=’Table.’ Each design has AT LEAST one diagram, the master diagram. So any relational table in this table, only having one listing means it’s not in any SubViews. If you have overloaded object names, which is VERY possible, you’ll want to do the report off of ‘OBJECT_ID’, but then you’ll need to correlate that to the NAME, as I doubt you’re so intimate with your designs that you recognize the GUIDs So I’m going to cheat and just stick with names, but I think you get the gist. My Model Of my almost 90 tables, how many of those have I not added to at least one SubView? Now let’s run my report! Voila! My ‘BEER2′ table isn’t in any SubView! It says ’1′ because the main model diagram counts as a view. So if the count came back as ’2′, that would mean the table was in the main model diagram and in 1 SubView diagram. And I know what you’re thinking, what kind of residential lending program would have a table called ‘BEER2?’ Let’s just say, that my business model has some kinks to work out!
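
    If you prefer a single pass instead of a master/child report, the two queries above can be combined into one statement against the same reporting schema tables. A sketch (the caveat about overloaded object names and OBJECT_ID still applies):

      -- Tables that appear in fewer than two diagrams (i.e. only the master diagram)
      -- for each design/version stored in the reporting schema.
      SELECT d.design_name,
             d.design_ovid,
             e.name AS table_name,
             COUNT(DISTINCT e.diagram_id) AS diagram_count
      FROM   dmrs_designs d
      JOIN   dmrs_diagram_elements e ON e.design_ovid = d.design_ovid
      WHERE  e.type = 'Table'
      GROUP  BY d.design_name, d.design_ovid, e.name
      HAVING COUNT(DISTINCT e.diagram_id) < 2
      ORDER  BY d.design_name, e.name;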

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We're all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn't the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don't know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don't even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in the different databases. Let's see how this works with an example. The code comments say what's going on.

      USE master
      GO
      CREATE DATABASE TestTxMark1
      GO
      USE TestTxMark1
      GO
      CREATE TABLE TestTable1(ID INT, VALUE UNIQUEIDENTIFIER)
      -- insert some data into the table so we can have a starting point
      INSERT INTO TestTable1
      SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
      FROM master..spt_values
      ORDER BY RN
      SELECT * FROM TestTable1
      GO
      -- TAKE A FULL BACKUP of the database
      BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
      GO

      USE master
      GO
      CREATE DATABASE TestTxMark2
      GO
      USE TestTxMark2
      GO
      CREATE TABLE TestTable2(ID INT, VALUE UNIQUEIDENTIFIER)
      -- insert some data into the table so we can have a starting point
      INSERT INTO TestTable2
      SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
      FROM master..spt_values
      ORDER BY RN
      SELECT * FROM TestTable2
      GO
      -- TAKE A FULL BACKUP of our database
      BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
      GO

      -- start a marked transaction that modifies both databases
      BEGIN TRAN TxDb WITH MARK
          -- update values from NULL to random value
          UPDATE TestTable1
          SET VALUE = NEWID();
          -- update first 100 values from random value
          -- to NULL in different DB
          UPDATE TestTxMark2.dbo.TestTable2
          SET VALUE = NULL
          WHERE ID <= 100;
      COMMIT
      GO

      -- some time goes by here
      -- with various database activity...

      -- We see two entries for marks in each database.
      -- This is just informational and has no bearing on the restore itself.
      SELECT * FROM msdb..logmarkhistory

      USE master
      GO
      -- create a log backup to restore to mark point
      BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
      GO
      -- drop the database so we can restore it back
      DROP DATABASE TestTxMark1
      GO

      USE master
      GO
      -- create a log backup to restore to mark point
      BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
      GO
      -- drop the database so we can restore it back
      DROP DATABASE TestTxMark2
      GO

      -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
      -- restore the full backup
      RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
      -- restore the log backup to the transaction mark
      RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn'
      WITH RECOVERY,
           -- recover to state before the transaction
           STOPBEFOREMARK = 'TxDb';
           -- recover to state after the transaction
           -- STOPATMARK = 'TxDb';
      GO

      -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
      -- restore the full backup
      RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
      -- restore the log backup to the transaction mark
      RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn'
      WITH RECOVERY,
           -- recover to state before the transaction
           STOPBEFOREMARK = 'TxDb';
           -- recover to state after the transaction
           -- STOPATMARK = 'TxDb';
      GO

      USE TestTxMark1
      -- we restored to time before the transaction
      -- so we have NULL values in our table
      SELECT * FROM TestTable1

      USE TestTxMark2
      -- we restored to time before the transaction
      -- so we DON'T have NULL values in our table
      SELECT * FROM TestTable2

    Transaction marks can be used like a crude sync mechanism for cross-database operations. With them we can mark our databases with a common "restore to" point so we know we have a valid state between all databases to restore to.
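
    When the databases sit on different instances, each instance's msdb..logmarkhistory can be checked to confirm the mark exists everywhere before restoring. A small sketch, assuming the standard columns of that table:

      -- List marked transactions recorded on this instance, newest first.
      SELECT database_name, mark_name, mark_time, lsn
      FROM   msdb.dbo.logmarkhistory
      WHERE  mark_name = 'TxDb'
      ORDER  BY mark_time DESC;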

    Read the article

  • Linux and Windows Server Setup

    - by Brian
    Hello, I have a Win 2008 R2 machine (a home machine of mine) that I am messing around with to learn the server technologies. I also wanted to try out Oracle, and was wondering if it's possible to set up a Linux machine with Oracle and have the two interoperate. What I mean by that is: if I set up the server and my laptop on a domain, would it be possible to communicate with that Linux machine and thus the Oracle database, and if so, are there any good resources on the setup? I was going to create a Linux Hyper-V virtual... Any tips appreciated. Thanks.

    Read the article

  • After setting ulimit to unlimited, I am not able to login to machine

    - by user419534
    For one requirement, I had to set the ulimit on one of my machines to unlimited. To do this I changed the following in /etc/security/limits.conf as below:

      # End of file
      oracle soft nofile unlimited
      oracle hard nofile unlimited
      oracle soft nproc 131072
      oracle hard nproc 131072
      oracle soft core unlimited
      oracle hard core unlimited
      oracle soft memlock 50000000
      oracle hard memlock 50000000
      * soft nofile unlimited
      * hard nofile unlimited

    and changed /etc/profile:

      if [ $USER = "oracle" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
          ulimit -p unlimited
          ulimit -n unlimited
        else
          ulimit -u unlimited -n unlimited
        fi
      fi

    I logged out. Now I am not able to connect to the machine at all. Could someone please help with this?

    Read the article

  • How can I simulate the production servers in my home for Linux VMs [closed]

    - by user31
    I am thinking of making a small simulation of how the big companies run their systems in my home environment, to get a feel for it. I have a server with 8GB RAM and a quad-core processor. I am thinking of the following setup, if that's possible — I have not worked with bigger companies, so I want to know how I can do that. I am thinking of creating 5 virtual machines:
    VM1 will be the database server and will have all the databases, like MySQL, PostgreSQL, SQLite, MongoDB and Oracle
    VM2 will be the web server and will have Apache and Tomcat installed
    VM3 will be the file server where I will have all the web site files
    VM4 I am thinking of as the main box where I can install Python, PHP and Java/J2EE sites, but I'm not sure
    VM5 will have Windows Server 2008 for C# .NET applications
    My main idea is to be able to host sites in PHP, Python, and Java/J2EE with Spring. Is my setup OK or am I missing a few things? Please guide me with the correct setup so that I can learn.

    Read the article

  • PROJECT HELP NEEDED. SOME BASIC CONCEPTS GREAT CONFUSION BECAUSE OF LACK OF PROPER MATERIAL PLEASE H

    - by user287745
    Task: ATTENDANCE RECORDER AND MANAGEMENT SYSTEM IN A DISTRIBUTED ENVIRONMENT. An example implementation is needed: a main server in each lab where the operator punches in the attendance of the students.
    Scenario: a college with 10 departments; all departments have a computer lab with 60-100 computers. The computers within each lab are interconnected, and all computers in any department have to dial a particular number (the number given by the college internet department) to get connected to the internet, so it is safe to assume that there is a central location to which all the computers in the college are connected. There is a 'students attendance portal' which can be accessed using Internet Explorer; students enter their ID and get their attendance record for the labs only. A description of how it works:
    1) The user selects which department and which year has arrived at the lab.
    2) The selection returns all the students' names and roll numbers belonging to that department, with a check box to "tick if the student is present".
    3) A submit button, when pressed, reads the 'id' of each checkbox to determine the particular count number of the student; from that an ID of the student is constructed and that ID is inserted with a "present" mark. (There is also date and time and much more to normalize the DB, avoid conflicts, keep historic records, etc., but you will have to assume that.)
    Steps taken to date (please note we are not computer students — we had to pick a project from some other line; as you will read in my other posts, I have designed small websites just out of liking and have never done anything official like this):
    * Made the database fully normalized.
    * Made the website which performs the required functions against the database.
    Testing: deployed the DB and site on a free ASPSpider server and it worked; tested from several computers.
    Now the problem — please help. A practical demonstration has to be done within the college network, no internet! We have been assigned a lab of 60 computers to demonstrate in. (Please don't reply that 60 computers is no big deal and one CPU can manage it — I know that; it is a hypothetical situation where we assume that 60 is not 60 but more like 60,000 computers.)
    1a) Make a web server — yes, IIS — put the files in the www folder and configure the server to run .aspx files (although a link to a step-by-step guide would be appreciated). Which version of Windows should I ask for, XP or Win Server 2000-something?
    2a) Make a database server. (Well yes, install SQL Server 2005, okay, but then what? Just put the database file on a PC, share it, and point the connection string at the share?)
    3a) Make the site accessible from the remaining computers — http://localhost/sitename?
    All users (being operators of the particular lab) have the right to edit, write or delete (in dispute), so any "users" who hate our program can make the database inconsistent by accessing the same record, making different edits and then complaining. How do we prevent this? Something like: while the DB table is being written to, others can only read but not write.
    One big confusion: IN A DISTRIBUTED ENVIRONMENT — how to implement this, and where does "distributed environment" come in? Meaning: alright, the labs are in different departments, but there will be one database server and one web server, so what's distributed!?
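
    For the locking worry in 3a), the usual approach is to keep each operator's submit in its own short transaction and let a primary key reject duplicate marks, rather than locking whole tables. A sketch against SQL Server 2005 — the table and column names are invented for illustration, not taken from the actual design:

      -- Hypothetical table: one row per student per lab session.
      CREATE TABLE Attendance (
          StudentId   INT          NOT NULL,
          LabSession  DATETIME     NOT NULL,
          IsPresent   BIT          NOT NULL,
          RecordedBy  VARCHAR(50)  NOT NULL,
          CONSTRAINT PK_Attendance PRIMARY KEY (StudentId, LabSession)
      );

      -- Each operator's submit runs as its own short transaction; the primary key
      -- rejects duplicate marks, and readers are never blocked for long.
      BEGIN TRANSACTION;
          INSERT INTO Attendance (StudentId, LabSession, IsPresent, RecordedBy)
          VALUES (1023, '2010-06-01T09:00:00', 1, 'lab7-operator');
      COMMIT TRANSACTION;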

    Read the article

  • 2 way SSL between SOA and OSB

    - by Johnny Shum
    If you have a need to use 2 way SSL between SOA composite and external partner links, you can follow these steps. Create the identity keystores, trust keystores, and server certificates. Setup keystores and SSL on WebLogic Setup server to use 2 way SSL Configure your SOA composite's partner link to use 2 way SSL Configure SOA engine two ways SSL In this case,  I use SOA and OSB for the test.  I started with a separate OSB and SOA domains.  I deployed two soap based proxies on OSB and two composites on SOA.  In SOA, one composite invokes a OSB proxy service, the other is invoked by the OSB.  Similarly,  in OSB,  one proxy invokes a SOA composite and the other is invoked by SOA. 1. Create the identity keystores, trust keystores and the server certificates Since this is a development environment, I use JDK's keytool to create the stores and use self signing certificate.  For production environment, you should use certificates from a trusted certificate authority like Verisign.    I created a script below to show what is needed in this step.  The only requirement is when creating the SOA identity certificate, you MUST use the alias mykey. STOREPASS=welcome1KEYPASS=welcome1# generate identity keystore for soa and osb.  Note: For SOA, you MUST use alias mykeyecho "creating stores"keytool -genkey -alias mykey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=soa, C=US" -keystore soa-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS keytool -genkey -alias osbkey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=osb, C=US" -keystore osb-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS# listing keystore contentsecho "listing stores contents"keytool -list -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASSkeytool -list -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS# exporting certs from storesecho "export certs from  stores"keytool -exportcert -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS -file soacert.derkeytool -exportcert -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS -file osbcert.der # import certs to trust storesecho "import certs"keytool -importcert -alias osbkey -keystore soa-trust-keystore.jks -storepass $STOREPASS -file osbcert.der -keypass $KEYPASSkeytool -importcert -alias mykey -keystore osb-trust-keystore.jks -storepass $STOREPASS -file soacert.der  -keypass $KEYPASS SOA suite uses the JDK's SSL implementation for outbound traffic instead of the WebLogic's implementation.  You will need to import the partner's public cert into the trusted keystore used by SOA.  The default trusted keystore for SOA is DemoTrust.jks and it is located in $MW_HOME/wlserver_10.3/server/lib.   (This is set in the startup script -Djavax.net.ssl.trustStore).   If you use your own trusted keystore, then you will need to import it into your own trusted keystore. keytool -importcert -alias osbkey -keystore $MW_HOME/wlserver_10.3/server/lib/DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase  -file osbcert.der -keypass $KEYPASS If you do not perform this step, you will encounter this exception in runtime when SOA invokes OSB service using 2 way SSL Message send failed: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target  2.  Setup keystores and SSL on WebLogic First, you will need to login to the WebLogic console, navigate to the server's configuration->Keystore's tab.  
 Change the Keystores type to Custom Identity and Custom Trust and enter the rest of the fields. Then you navigate to the SSL tab, enter the fields in the identity section and expand the Advanced section.  Since I am using self signing cert on my VM enviornment, I disabled Hostname verification.  In real production system, this should not be the case.   I also enabled the option "Use Server Certs", so that the application uses the server cert to initiate https traffic (it is important to enable this in OSB). Last, you enable SSL listening port in the Server's configuration->General tab. 3.  Setup server to use 2 way SSL If you follow the screen shot in previous step, you can see in the Server->Configuration->SSL->Advanced section, there is an option for Two Way Client Cert Behavior,  you should set this to Client Certs Requested and Enforced. Repeat step 2 and 3 done on OSB.  After all these configurations,  you have to restart all the servers. 4.  Configure your SOA composite's partner link to use 2 way SSL You do this by modifying the composite.xml in your project, locate the partner's link reference and add the property oracle.soa.two.way.ssl.enabled.   <reference name="callosb" ui:wsdlLocation="helloword.wsdl">    <interface.wsdl interface="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.interface(Hello_PortType)"/>    <binding.ws port="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.endpoint(Hello_Service/Hello_Port)"                location="helloword.wsdl" soapVersion="1.1">      <property name="weblogic.wsee.wsat.transaction.flowOption"                type="xs:string" many="false">WSDLDriven</property>   <property name="oracle.soa.two.way.ssl.enabled">true</property>    </binding.ws>  </reference> In OSB, you should have checked the HTTPS required flag in the proxy's transport configuration.  After this,  rebuilt the composite jar file and ready to deploy in the EM console later. 5.  Configure SOA engine two ways SSL Oracle SOA Suite uses both Oracle WebLogic Server and Sun Secure Socket Layer (SSL) stacks for two-way SSL configurations. For the inbound web service bindings, Oracle SOA Suite uses the Oracle WebLogic Server infrastructure and, therefore, the Oracle WebLogic Server libraries for SSL.  This is already done by step 2 and 3 in the previous section. For the outbound web service bindings, Oracle SOA Suite uses JRF HttpClient and, therefore, the Sun JDK libraries for SSL.  You do this by configuring the SOA Engine in the Enterprise Manager Console, select soa-infra->SOA Administration->Common Properties Then click at the link at the bottom of the page:  "More SOA Infra Advances Infrastructure Configuration Properties" and then enter the full path of soa identity keystore in the value field of the KeyStoreLocation attribute.  Click Apply and Return then navigate to the domain->security->credential. Here, you provide the password to the keystore.  Note: the alias of the certficate must be mykey as described in step 1, so you only need to provide the password to the identity keystore.   You accomplish this by: Click Create Map In the Map Name field, enter SOA, and click OK Click Create Key Enter the following details where the password is the password for the SOA identity keystore. 6.  Test and Trouble Shooting Once the setup is complete and server restarted, you can deploy the composite in the EM console and test it.  In case of error,  you can read the server log file to determine the cause of the error.  
For example, If you have not setup step 5 and test 2 way SSL, you will see this in the log when invoking OSB from BPEL: java.lang.Exception: oracle.sysman.emSDK.webservices.wsdlapi.SoapTestException: oracle.fabric.common.FabricInvocationException: Unable to access the following endpoint(s): https://localhost.localdomain:7002/default/helloword ####<Sep 22, 2012 2:07:37 PM CDT> <Error> <oracle.soa.bpel.engine.ws> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0AFDAEF20610F8FD89C5> ............ <11d1def534ea1be0:-4034173:139ef56d9f0:-8000-00000000000002ec> <1348340857956> <BEA-000000> <got FabricInvocationException sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target If you have not enable WebLogic SSL to use server certificate in the console and invoke SOA composite from OSB using two ways SSL, you will see this error: ####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e2> <1348340857776> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.> ####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e4> <1348340857786> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.> ####<Sep 22, 2012 2:27:21 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-0000000000000124> <1348342041926> <BEA-090497> <HANDSHAKE_FAILURE alert received from localhost - 127.0.0.1. Check both sides of the SSL configuration for mismatches in supported ciphers, supported protocol versions, trusted CAs, and hostname verification settings.> References http://docs.oracle.com/cd/E23943_01/admin.1111/e10226/soacompapp_secure.htm#CHDCFABB   Section 5.6.4 http://docs.oracle.com/cd/E23943_01/web.1111/e13707/ssl.htm#i1200848

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework.  It's also no secret there are are many details with respect to the configuration of Docupresentment that can elude even the most erudite of of techies.  To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file.  But, where's the adventure in that?   With this inaugural post to That's The Way, I'm going to introduce myself, and what my aim is with this blog.  If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998.  Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology.  I recall running a BBS from my parent's basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud!  Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was kid!  This is the stuff of War Games and King's Quest I and the demise of TI-99 4/A.  Exciting times.  So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing working for the best (IMHO) software company on the planet.  With That's The Way I hope to shed a little light and peek under the covers of some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration to name a few.  I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind) but I hope you'll glean at least a tidbit of usefulness with each post.  Feel free to comment as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part... So, back to our regularly-scheduled post, already in progress.  By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker.  Wait -- you didn't find it?  Understandable -- navigating the voluminous download library within Oracle can be a daunting task.  It's pretty simple once you’ve done it a few times.  Login to the e-delivery site, and accept the license terms and restrictions.  Then, you’ll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you’ll see a list of applicable products, and you’ll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4): Finally, click the Download button next to Docupresentment (again, version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe. At this time, I’d like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms.  One of the reasons Skywire Software was a appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning.  I apologize in advance and will try to point these out along the way.  
Here’s your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment Once you’ve completed the installation, you’ll have a shiny new Docupresentment server and client, and if you installed the default location it will be living in c:\docserv. Unix users, I’m one of you!  You’ll find it by default in  ~/docupresentment/docserv.  Forging onward with the meat of this post is learning about some special configuration options.  By now you’ve read the documentation included with the download (specifically ids_book.pdf) which goes into some detail of the rubric of the configuration file and in fact there’s even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation).  But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand! I shall now proceed with the standard Information Technology Under the Hood Disclaimer: Please remember to back up any files before you make changes.  I am not responsible for any havoc you may wreak! Go to your installation directory, and locate your docserv.xml file.  Open it in your favorite XML editor.  I happen to be fond of Notepad++ with the XML Tools plugin.  Almost immediately you will behold the splendor of the configuration file.  Just take a moment and let that sink in.  Ok – moving on.  If you reviewed the documentation you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings.  Let’s take a look at <section name=”DocumentServer”>: There are a few entries I’d like to discuss.  First, <entry name=”StartCommand”>. This should be pretty self-explanatory; it’s the name of the executable that’s run when you fire up Docupresentment.  Immediately following that is <entry name=”StartArguments”> and as you might imagine these are the arguments passed to the executable.  A few things to point out: The –Dids.configuration=docserv.xml parameter specifies the name of your configuration file. The –Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j so bone up on that before you delve here). The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd party Java libraries can be located for use with Docupresentment.  More on that in another post. The <entry name=”Instances”> allows you to specify the number of instances of Docupresentment that will be started.  By default this is two, and generally two instances per CPU is adequate, however you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions.  You may have many, many more instances than 2. Time for a sidebar on instances.  An instance is nothing more than a separate process of Docupresentment.  The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes.  Each of these act independently from one another, so if one crashes, it does not affect any others.  In the case of a crashed process, the watchdog will start up another instance so the number of configured instances are always running.  Bottom line: instance = Docupresentment process. And now, finally, to the settings which gave me pause on an not-too-long-ago implementation!  
Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes.  You can configure the time that Docupresentment waits to check these files using the setting <entry name=”FileWatchTimeMillis”>.  By default the number is 12000ms, or 12 seconds.  You can save yourself a few CPU cycles by extending this time, or by disabling  the check altogether by setting the value to 0.  This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don’t want to bring down an entire set of processes just to accept a new configuration value, so it’s best to leave this somewhere between 12 seconds to a few minutes.  Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment the Master Resource Library (MRL) files and INI options are cached, and if you need to affect a change, you’ll have to “restart” Docupresentment.  Touching the docserv.xml file is an easy way to do this (other methods including using the RSS request, but that’s another post). The next item up: <entry name=”FilePurgeTimeSeconds”>.  You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system.  What you may not know is how those files are cleaned up.  There are many rules in Docupresentment that cause the creation of temporary files.  When these files are created, Docupresentment writes an entry into a properties file called the file cache.  This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment.  Periodically Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator.  However, unlike my ‘fridge cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time.  You, my friend, have the power to control how often Docupresentment inspects the file cache.  Simply set the value for <entry name=”FilePurgeTimeSeconds”> to the number of seconds appropriate for your requirements and you’re set.  Note that file purging happens on a separate thread from normal request processing, so this shouldn’t interfere with response times unless the CPU happens to be really taxed at the point of cache processing.  Finally, after all of this, we get to the final setting I’m going to address in this post: <entry name=”FilePurgeList”>.  The default is “filecache.properties”.  This establishes the root name for the Docupresentment file cache that I mentioned previously.  Docupresentment creates a separate cache file for each instance based on this setting.  If you have two instances, you’ll see two files created: filecache.properties.1 and filecache.properties.2.  Feel free to open these up and check them out. I hope you’ve enjoyed this first foray into the configuration file of Docupresentment.  If you did enjoy it, feel free to drop a comment, I welcome feedback.  If you have ideas for other posts you’d like to see, please do let me know.  You can reach me at [email protected]. ‘Til next time! ###

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspxMicrosoft just released a bunch of new features for Azure on 22nd and one of them I was interested in most is DocumentDB, a document NoSQL database service on the cloud.   Quick Look at DocumentDB We can try DocumentDB from the new azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In resource group section we can select which region our DocumentDB will be located. Same as other azure services select the same location with your consumers of the DocumentDB, for example the website, web services, etc.. After several minutes the DocumentDB will be ready. Click the KEYS button we can find the URI and primary key, which will be used when connecting. Now let's open Visual Studio and try to use the DocumentDB we had just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelase" in NuGet Package Manager window since this library was not yet released. Next we will create a new database and document collection under our DocumentDB account. The code below created an instance of DocumentClient with the URI and primary key we just copied from azure portal, and create a database and collection. And it also prints the document and collection link string which will be used later to insert and query documents. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7: Run(client).Wait(); 8:  9: Console.WriteLine("done"); 10: Console.ReadKey(); 11: } 12:  13: static async Task Run(DocumentClient client) 14: { 15:  16: var database = new Database() { Id = "testdb" }; 17: database = await client.CreateDatabaseAsync(database); 18: Console.WriteLine("database link = {0}", database.SelfLink); 19:  20: var collection = new DocumentCollection() { Id = "testcol" }; 21: collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection); 22: Console.WriteLine("collection link = {0}", collection.SelfLink); 23: } Below is the result from the console window. We need to copy the collection link string for future usage. Now if we back to the portal we will find a database was listed with the name we specified in the code. Next we will insert a document into the database and collection we had just created. In the code below we pasted the collection link which copied in previous step, create a dynamic object with several properties defined. As you can see we can add some normal properties contains string, integer, we can also add complex property for example an array, a dictionary and an object reference, unless they can be serialized to JSON. 
1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: // collection link pasted from the result in previous demo 9: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 10:  11: // document we are going to insert to database 12: dynamic doc = new ExpandoObject(); 13: doc.firstName = "Shaun"; 14: doc.lastName = "Xu"; 15: doc.roles = new string[] { "developer", "trainer", "presenter", "father" }; 16:  17: // insert the docuemnt 18: InsertADoc(client, collectionLink, doc).Wait(); 19:  20: Console.WriteLine("done"); 21: Console.ReadKey(); 22: } the insert code will be very simple as below, just provide the collection link and the object we are going to insert. 1: static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc) 2: { 3: var document = await client.CreateDocumentAsync(collectionLink, doc); 4: Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented)); 5: } Below is the result after the object had been inserted. Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us to retrieve all documents in it. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 9:  10: SelectDocs(client, collectionLink); 11:  12: Console.WriteLine("done"); 13: Console.ReadKey(); 14: } 15:  16: static void SelectDocs(DocumentClient client, string collectionLink) 17: { 18: var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList(); 19: foreach(var doc in docs) 20: { 21: Console.WriteLine(doc); 22: } 23: } Since there's only one document in my collection below is the result when I executed the code. As you can see all properties, includes the array was retrieve at the same time. DocumentDB also attached some properties we didn't specified such as "_rid", "_ts", "_self" etc., which is controlled by the service.   DocumentDB Benefit DocumentDB is a document NoSQL database service. Different from the traditional database, document database is truly schema-free. In a short nut, you can save anything in the same database and collection if it could be serialized to JSON. We you query the document database, all sub documents will be retrieved at the same time. This means you don't need to join other tables when using a traditional database. Document database is very useful when we build some high performance system with hierarchical data structure. For example, assuming we need to build a blog system, there will be many blog posts and each of them contains the content and comments. The comment can be commented as well. If we were using traditional database, let's say SQL Server, the database schema might be defined as below. When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if were using DocumentDB, what we need to do is to save the post as a document with a list contains all comments. 
Under a comment all sub comments will be a list in it. When we display this post we just need to to query the post document, the content and all comments will be loaded in proper structure. 1: { 2: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 3: "title": "xxxxx", 4: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 5: "postedOn": "08/25/2014 13:55", 6: "comments": 7: [ 8: { 9: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 10: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 11: "commentedOn": "08/25/2014 14:00", 12: "commentedBy": "xxx" 13: }, 14: { 15: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 16: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 17: "commentedOn": "08/25/2014 14:10", 18: "commentedBy": "xxx", 19: "comments": 20: [ 21: { 22: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 23: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 24: "commentedOn": "08/25/2014 14:18", 25: "commentedBy": "xxx", 26: "comments": 27: [ 28: { 29: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 30: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 31: "commentedOn": "08/25/2014 18:22", 32: "commentedBy": "xxx", 33: } 34: ] 35: }, 36: { 37: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 38: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 39: "commentedOn": "08/25/2014 15:02", 40: "commentedBy": "xxx", 41: } 42: ] 43: }, 44: { 45: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 46: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 47: "commentedOn": "08/25/2014 14:30", 48: "commentedBy": "xxx" 49: } 50: ] 51: }   DocumentDB vs. Table Storage DocumentDB and Table Storage are all NoSQL service in Microsoft Azure. One common question is "when we should use DocumentDB rather than Table Storage". Here are some ideas from me and some MVPs. First of all, they are different kind of NoSQL database. DocumentDB is a document database while table storage is a key-value database. Second, table storage is cheaper. DocumentDB supports scale out from one capacity unit to 5 in preview period and each capacity unit provides 10GB local SSD storage. The price is $0.73/day includes 50% discount. For storage service the highest price is $0.061/GB, which is almost 10% of DocumentDB. Third, table storage provides local-replication, geo-replication, read access geo-replication while DocumentDB doesn't support. Fourth, there is local emulator for table storage but none for DocumentDB. We have to connect to the DocumentDB on cloud when developing locally. But, DocumentDB supports some cool features that table storage doesn't have. It supports store procedure, trigger and user-defined-function. It supports rich indexing while table storage only supports indexing against partition key and row key. It supports transaction, table storage supports as well but restricted with Entity Group Transaction scope. And the last, table storage is GA but DocumentDB is still in preview.   Summary In this post I have a quick demonstration and introduction about the new DocumentDB service in Azure. It's very easy to interact through .NET and it also support REST API, Node.js SDK and Python SDK. Then I explained the concept and benefit of  using document database, then compared with table storage.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
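
    For comparison with the document shape above, the relational Posts/Comments schema the post contrasts with DocumentDB (the diagram itself did not survive the excerpt) might look roughly like this — a sketch with invented column names, not the author's actual DDL:

      -- Comments reference their post and, for replies, their parent comment.
      CREATE TABLE Posts (
          PostID    INT IDENTITY PRIMARY KEY,
          Title     NVARCHAR(200) NOT NULL,
          Content   NVARCHAR(MAX) NOT NULL,
          PostedOn  DATETIME      NOT NULL
      );

      CREATE TABLE Comments (
          CommentID       INT IDENTITY PRIMARY KEY,
          PostID          INT NOT NULL REFERENCES Posts (PostID),
          ParentCommentID INT NULL REFERENCES Comments (CommentID),
          Content         NVARCHAR(MAX) NOT NULL,
          CommentedOn     DATETIME      NOT NULL,
          CommentedBy     NVARCHAR(50)  NOT NULL
      );

      -- Rendering one post means a join plus client-side tree building on ParentCommentID,
      -- whereas the document version returns the whole nested structure in one read.
      SELECT p.Title, c.CommentID, c.ParentCommentID, c.Content
      FROM   Posts p
      LEFT JOIN Comments c ON c.PostID = p.PostID
      WHERE  p.PostID = 1;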

    Read the article

  • Export from EndNote into MySQL database

    - by Tomba
    I would like to export some records from an EndNote (reference management software) library and import into a MySQL database. Does anyone have any experience with this? I've tried creating custom EndNote "output styles" containing either comma-delimited values or even SQL code, but have had mixed results, either because EndNote filters out some characters (like `) or because EndNote doesn't (or I can't work out how to make it) escape text, which might include characters like ' and ". I realize this might be a bit off-topic but any help would be appreciated.
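
    One way around the quoting problem is to export tab-delimited text (tabs rarely appear in reference fields) and let MySQL do the parsing. A sketch, assuming a hypothetical endnote_refs table and an export file with one record per line — the table, columns and file path are illustrative only:

      -- Hypothetical target table for the exported fields.
      CREATE TABLE endnote_refs (
          ref_id    INT AUTO_INCREMENT PRIMARY KEY,
          authors   TEXT,
          pub_year  VARCHAR(10),
          title     TEXT,
          journal   TEXT
      );

      -- Load a tab-delimited export; MySQL handles embedded quotes itself,
      -- so the EndNote output style only needs to emit fields separated by tabs.
      LOAD DATA LOCAL INFILE '/path/to/endnote_export.txt'
      INTO TABLE endnote_refs
      FIELDS TERMINATED BY '\t'
      LINES TERMINATED BY '\n'
      (authors, pub_year, title, journal);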

    Read the article

  • Backup and Restore a SQLCE .sdf database

    - by Ruben Trancoso
    My application needs to back up and restore .sdf files. There is a single DataSet for the whole application, and some BindingSources and table adapters on forms use this same dataset. Just for the sake of a test I tried to copy the .sdf at runtime to a backup folder and back again to restore it, and my application could no longer find the file, as if it were not there anymore. How should I manage connections to open and close the database, since the DataSet does it automatically at the beginning and end of my application?

    Read the article

  • [LAZARUS] Access Remote SQL Server Database on WinCE programming

    - by Dels
    I'm programming using Lazarus (the Free Pascal IDE, Delphi-like), and I have a problem when I need to connect to a remote SQL Server database on the network. My questions: Is there any way to connect to a remote SQLdb from Lazarus? What is the required connector type for SQL Server 2005? Is there any ODBC driver available for Windows CE (Windows Mobile 5/6)? (If so, I could use TODBCConnection...) I already searched and asked on the Lazarus community forum but didn't get any response.

    Read the article

  • Data type mismatch when retrieving records from an access database using a DateTimePicker

    - by Daniel
    I get a 'Data type mismatch in criteria expression' error when I try to retrieve records from an Access database between two dates using a DateTimePicker in C#. This is the SELECT statement: else if (dtpDOBFrom.Value < dtpDOBTo.Value) { cmdSearch.CommandText = "SELECT [First Name], [Surname], [Contact Type], [Birthdate] FROM [Contacts] WHERE [Birthdate] >= '" + dtpDOBFrom.Value +"' AND [Birthdate] <= '" + dtpDOBTo.Value +"'"; }
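
    For what it's worth, the usual cause of this error is comparing a Date/Time column to a quoted string: Jet/ACE SQL wants #-delimited date literals (or, better, query parameters) rather than quotes. A sketch of the SQL text the command would need to produce, using the column names from the question and made-up dates:

      -- Jet/ACE date literals use # and an unambiguous format such as yyyy-mm-dd;
      -- the '...' strings built from dtpDOBFrom.Value are what trigger the type mismatch.
      SELECT [First Name], [Surname], [Contact Type], [Birthdate]
      FROM   [Contacts]
      WHERE  [Birthdate] >= #1980-01-01# AND [Birthdate] <= #1990-12-31#;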

    Read the article

  • OperationalError "unable to open database file" processing query results with SQLAlchemy and SQLite3

    - by Peter
    I'm running into this little problem that I hope is just a dumb user error. It looks like some sort of a size limit with a query to a SQLite database. I managed to reproduce the issue with an in-memory DB and a simple script shown below. I can make it work by either reducing the number of records in the DB; or by reducing the size of each record; or by dropping the order_by() call. I am using Python 2.5.5 and SQLAlchemy 0.6.0 in a Cygwin environment. Thanks!

      #!/usr/bin/python
      from sqlalchemy.orm import sessionmaker
      import sqlalchemy
      import sqlalchemy.orm

      class Person(object):
          def __init__(self, name):
              self.name = name

      engine = sqlalchemy.create_engine('sqlite:///:memory:')
      Session = sessionmaker(bind=engine)
      metadata = sqlalchemy.schema.MetaData(bind=engine)
      person_table = sqlalchemy.Table('person', metadata,
          sqlalchemy.Column('id', sqlalchemy.types.Integer, primary_key=True),
          sqlalchemy.Column('name', sqlalchemy.types.String))
      metadata.create_all(engine)
      sqlalchemy.orm.mapper(Person, person_table)

      session = Session()
      session.add_all([Person("012345678901234567890123456789012") for i in range(5000)])
      session.commit()
      persons = session.query(Person).order_by(Person.name).all()
      print "count =", len(persons)
      session.close()

    The all() call to the query result fails with the OperationalError exception:

      Traceback (most recent call last):
        File "./stress.py", line 27, in <module>
          persons = session.query(Person).order_by(Person.name).all()
        File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1343, in all
          return list(self)
        File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1451, in __iter__
          return self._execute_and_instances(context)
        File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1456, in _execute_and_instances
          mapper=self._mapper_zero_or_none())
        File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py", line 737, in execute
          clause, params or {})
        File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1109, in execute
          return Connection.executors[c](self, object, multiparams, params)
        File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1186, in _execute_clauseelement
          return self.__execute_context(context)
        File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1215, in __execute_context
          context.parameters[0], context=context)
        File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1284, in _cursor_execute
          self._handle_dbapi_exception(e, statement, parameters, cursor, context)
        File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1282, in _cursor_execute
          self.dialect.do_execute(cursor, statement, parameters, context=context)
        File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
          cursor.execute(statement, parameters)
      sqlalchemy.exc.OperationalError: (OperationalError) unable to open database file u'SELECT person.id AS person_id, person.name AS person_name \nFROM person ORDER BY person.name' ()

    Read the article

  • Database Mirroring of SQL server

    - by jbp117
    I have two databases that are mirrored to another server using database mirroring. The mirror server has to be down for some reason for a few days. Now the production server has the principal databases in the (PRINCIPAL/DISCONNECTED) state. Clients can access those databases. So what happens when they keep adding data to these databases? Will the data get committed, or will it wait until the mirror comes back up?
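
    While the partner is DISCONNECTED the principal keeps accepting and committing transactions, but it cannot truncate the portion of the log that still has to be sent to the mirror, so the log file grows until the mirror comes back and resynchronizes. A quick sketch for keeping an eye on this from the principal (the database names below are placeholders):

      -- Mirroring state for every mirrored database on this instance.
      SELECT DB_NAME(database_id) AS database_name,
             mirroring_state_desc,
             mirroring_role_desc
      FROM   sys.database_mirroring
      WHERE  mirroring_guid IS NOT NULL;

      -- Log reuse is blocked by DATABASE_MIRRORING while the partner is away,
      -- which is why the log keeps growing until the mirror resynchronizes.
      SELECT name, log_reuse_wait_desc
      FROM   sys.databases
      WHERE  name IN ('DB1', 'DB2');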

    Read the article

  • Automatic incremental SQL Script generation for incremental, nightly builds when using Team Build in

    - by Steve Johnson
    Hi all, hope that everybody here is OK. We are using VS 2008 as our development tool and TFS 2008 for version control as well as build automation. Some of our developers use DBPro for database changes and some use SQL Server Management Studio. I am trying to automate the build for a web application built using C# and VB.Net. Our scenario is such that we have a central database to which our web application connects. Whenever we supply our clients with new functionality or a bug fix, we supply them incremental builds. The SQL script is checked into source control for every incremental build once the developers have made and tested their changes on our central DB server. I want to generate a differential script that can be run at the client as an incremental update script. Now, getting there is a problem. Sometimes our developers tend to forget the database change-sets, and the script in source control is missing an SP or two. Also, sometimes we need to insert default data into some of the tables that have strict required values, not test values. For example, in a table that contains services provided by the panel, we add a new service name, signature, credentials, service address, etc. in the ServiceTable. Besides this, many other tables may have test data that is not needed. If we use DataCompare, it will generate a changeset for the required data (important for the client to enable certain services) as well as for the test data that was added to the database as a result of testing the functionality or bug fix. Currently I am using SQLSchemaCompareTask (from the Visual Studio 2008 Team Database Professional Power Tools API) in the TFSBuild.proj file of the build definition for TFS 2008. Using SQLSchemaCompareTask, the generated script contains schema prefixes like [dbo]., which are not desired, as the script fails when run against SQL Server 2000 databases (some of our clients still use SQL Server 2000 as the back end of the application). Also, default data can't be generated by this process. To overcome this problem, I have to come up with a solution that can compare databases and generate a script automatically that does not have to be manually reviewed before being sent to the client. Please suggest an effective methodology for this kind of SQL script generation, and whether two different databases could be used for it. Is there any toolkit or API that can enable build automation for SQL Server databases? Thank you all. Regards, Steve
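
    Whatever tooling ends up generating the diff, the incremental scripts themselves are easier to hand to clients if every change is guarded so the script can be re-run safely and stays SQL Server 2000-compatible. A sketch of the guard pattern — the ServiceTable column names are simplified from the description above, not the real schema:

      -- Add a column only if it is not already there (syscolumns works on SQL Server 2000 too).
      IF NOT EXISTS (SELECT 1 FROM syscolumns
                     WHERE id = OBJECT_ID('ServiceTable') AND name = 'ServiceAddress')
      BEGIN
          ALTER TABLE ServiceTable ADD ServiceAddress VARCHAR(255) NULL
      END
      GO

      -- Seed required reference data without duplicating it on a re-run.
      IF NOT EXISTS (SELECT 1 FROM ServiceTable WHERE ServiceName = 'CreditReport')
      BEGIN
          INSERT INTO ServiceTable (ServiceName, ServiceAddress)
          VALUES ('CreditReport', 'https://example.local/credit')
      END
      GO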

    Read the article

  • Best Role-Based Access Control (RBAC) database model

    - by jhs
    What is the best database schema to track role-based access controls for a web application? I am using Rails, but the RBAC plugin linked by Google looks unmaintained (only 300 commits to SVN; latest was almost a year ago). The concept is simple enough to implement from scratch, yet complex and important enough that it's worth getting right. So how do others architect and implement their RBAC model?
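
    Schema-wise, the classic starting point is three entities plus two junction tables: roles carry permissions and users carry roles. A minimal sketch — the table and column names are illustrative, not taken from any particular plugin:

      CREATE TABLE users (
          user_id INTEGER PRIMARY KEY,
          login   VARCHAR(100) NOT NULL UNIQUE
      );

      CREATE TABLE roles (
          role_id INTEGER PRIMARY KEY,
          name    VARCHAR(100) NOT NULL UNIQUE       -- e.g. 'admin', 'editor'
      );

      CREATE TABLE permissions (
          permission_id INTEGER PRIMARY KEY,
          name          VARCHAR(100) NOT NULL UNIQUE -- e.g. 'posts.delete'
      );

      CREATE TABLE user_roles (
          user_id INTEGER NOT NULL REFERENCES users (user_id),
          role_id INTEGER NOT NULL REFERENCES roles (role_id),
          PRIMARY KEY (user_id, role_id)
      );

      CREATE TABLE role_permissions (
          role_id       INTEGER NOT NULL REFERENCES roles (role_id),
          permission_id INTEGER NOT NULL REFERENCES permissions (permission_id),
          PRIMARY KEY (role_id, permission_id)
      );

      -- "Can this user delete posts?" becomes a single join chain.
      SELECT 1
      FROM   users u
      JOIN   user_roles ur       ON ur.user_id = u.user_id
      JOIN   role_permissions rp ON rp.role_id = ur.role_id
      JOIN   permissions p       ON p.permission_id = rp.permission_id
      WHERE  u.login = 'jhs' AND p.name = 'posts.delete';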

    Read the article
