Search Results

Search found 30300 results on 1212 pages for 'temporal database'.


  • Synchronizing XML file with MySQL database

    - by Fred K
    My company uses an internal management application for storing products. They want to copy all the products into a MySQL database so they can make their products available on the company website. Note: they will continue to use their own internal software. This software can export all the products in various file formats (including XML). The synchronization does not have to be in real time; synchronizing the MySQL database once a day (late at night) is enough. Also, each product in their software has one or more images, so I also have to make the images available on the website. Here is an example of an XML export: <?xml version="1.0" encoding="UTF-8"?> <export_management userid="78643"> <product id="1234"> <version>100</version> <insert_date>2013-12-12 00:00:00</insert_date> <warrenty>true</warrenty> <price>139,00</price> <model> <code>324234345</code> <model>Notredame</model> <color>red</color> <size>XL</size> </model> <internal> <color>green</color> <size>S</size> </internal> <options> <s_option>some option</s_option> <s_option>some option</s_option> <extra_option>some option</extra_option> <extra_option>some option</extra_option> </options> <images> <image> <small>1234_0.jpg</small> </image> <image> <small>1234_1.jpg</small> </image> </images> </product> </export_management> Any ideas on how I can do this, or better approaches if you have them?
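    A minimal sketch of one possible nightly import, assuming Python with xml.etree.ElementTree and the mysql-connector-python driver; the products and product_images tables, column names and credentials below are illustrative placeholders, not part of the export above:

        # Nightly sync sketch: parse the XML export and upsert each product into MySQL.
        import xml.etree.ElementTree as ET
        import mysql.connector  # pip install mysql-connector-python

        def sync(xml_path):
            tree = ET.parse(xml_path)
            conn = mysql.connector.connect(host="localhost", user="sync",
                                           password="secret", database="shop")
            cur = conn.cursor()
            for product in tree.getroot().findall("product"):
                pid = int(product.get("id"))
                price = product.findtext("price", "0").replace(",", ".")
                model = product.findtext("model/model", "")
                cur.execute(
                    "INSERT INTO products (id, price, model) VALUES (%s, %s, %s) "
                    "ON DUPLICATE KEY UPDATE price = VALUES(price), model = VALUES(model)",
                    (pid, price, model))
                # Replace this product's image list wholesale.
                cur.execute("DELETE FROM product_images WHERE product_id = %s", (pid,))
                for img in product.findall("images/image"):
                    cur.execute("INSERT INTO product_images (product_id, filename) VALUES (%s, %s)",
                                (pid, img.findtext("small")))
            conn.commit()
            conn.close()

        if __name__ == "__main__":
            sync("export.xml")

    Run from cron late at night, this keeps the website copy a day behind the internal software, which matches the stated requirement; the image files themselves can be copied to the web server in the same job.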

    Read the article

  • Putting data from local SQL database to remote SQL database without remote SQL access enabled (PHP)

    - by Shyam
    Hi, I have a local database, and all the tables are defined. Eventually I need to publish my data remotely, which I can do easily with phpMyAdmin. The problem, however, is that my remote host doesn't allow remote SQL connections at all, so writing a script that does a mysqldump and runs it through a client (which would have been ideal) won't help me here. Since the schema won't change, but the data will, I need some kind of PHP client that works in "reverse". My question is whether such a client exists and what would be recommended to use (from experience). I just need a one-way trip here, from my local database (Rails) to the remote database (which supports PHP), preferably as simple and slick as possible. Thank you for your replies, comments and feedback!
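    One common workaround when the host blocks remote MySQL connections is to push the data over HTTP instead: a small receiving script on the remote host (PHP, in this case) accepts rows and inserts them into its local MySQL instance. Below is a rough sketch of the pushing side only, written in Python for brevity; the endpoint URL, token and table name are invented for illustration:

        # Push-side sketch: read rows locally and POST them as JSON to a script on the remote host.
        import json
        import urllib.request
        import mysql.connector  # pip install mysql-connector-python

        ENDPOINT = "https://example.com/import.php"  # hypothetical receiving script
        TOKEN = "shared-secret"                      # protect the endpoint somehow

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret", database="myapp_development")
        cur = conn.cursor(dictionary=True)
        cur.execute("SELECT * FROM products")
        payload = json.dumps({"token": TOKEN, "table": "products", "rows": cur.fetchall()},
                             default=str).encode("utf-8")

        req = urllib.request.Request(ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            print(resp.status, resp.read().decode())

    The receiving PHP script would then decode the JSON, check the token, and run prepared INSERT or REPLACE statements against its own MySQL database.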

    Read the article

  • Entity Framework: "Update Database from Model" instead of "Generate Database from Model"

    - by David
    Hey everyone, I have created an Entity Framework 4 model with Visual Studio 2010 and generated a database from it. Since then I have found myself adding new properties (with default values), changing documentation of columns, changing names of columns, and changing types of columns several times. These are all tasks that, in my humble opinion, should not require much "extra work" to be achievable automatically. Every time, I did "Generate Database from Model" and of course lost the table data. Is there a way to update just the database's architecture, so to speak, leaving the table data untouched? Maybe with some user interaction, especially when changing types etc.? Or would this functionality simply be too difficult to realize in a reliable way? Thanks in advance! Cheers, David

    Read the article

  • Ways to generate the full database structure based on Fluent NHibernate mappings

    - by Mendy
    I'm looking for ways to generate the application's full database structure based on the NHibernate mapping data. The idea is to give the user an option to supply a database connection string and then build a database with the structure that the application needs. The database needs to be independent, meaning it has to work with any database supported by NHibernate. By full structure I mean that I also want to generate the index fields and the relationships between tables. Are there ways to accomplish this with NHibernate? If so, what are they?

    Read the article

  • Using Flex to keep local SQLite database in sync with live server database

    - by DaveC
    I want to create a Flex 3 application running in Adobe AIR that accesses a SQLite database, and I need to keep this database in sync with a SQL Server 2005 database behind a website. Is this something that Flex supports, or is it going to be a custom script? Also, has anybody done anything like this? Edit: The synchronisation can be done on a daily basis rather than in real time. The data will be read-only from a front-end perspective, with a CMS to do updates on the website.

    Read the article

  • Separating weakly linked database schemas

    - by jldugger
    I've been tasked with revisiting a database schema we designed and use internally for various ticketing and reporting systems. Currently there are about 40 tables in one Oracle database schema supporting perhaps six webapps. However, there's one unifying relationship amongst them all: a rooms table describing the rooms. Room name, purpose and other data are thrown into a shared table for each app. My initial idea was to pull each of these applications into a separate database and perform joins between a given database and the rooms database. But I've discovered this solution prevents foreign key constraints in SQL Server 2005. It seems silly to duplicate one table for each app and keep those multiple copies synchronized. Should I just leave everything in one large DB, or is there something else I can do to separate the tables without losing FK constraints?

    Read the article

  • How to securely communicate with a database using a Java applet

    - by WarmWaffles
    I have been writing web applications for quite some time in PHP with MySQL. I always stored my database connection information in a configuration variable and connected to the database that way. A client wants a Java applet for their website that communicates with their database. I'm very hesitant about this because the applet is going to be public, and I am not sure how I would go about storing the database connection information. I'm paranoid that someone would decompile my application or find some way to extract my database connection information and use it maliciously. Any suggestions on how to do this securely?

    Read the article

  • Secure database connection: DAL .NET architecture best practice

    - by Andrew Florko
    We have several applications, installed in several departments, that interact with a database via the intranet. Users tend to use weak passwords or keep their login/password written on sheets of paper where everybody can see them. I'm worried about login/password leakage and want to minimize the consequences. Minimizing the database server's attack surface by hiding it from intranet access would also be a great idea. I'm thinking about an intermediary data-access service with method-based security. It seems more flexible than table-based or connection-based database-server security. This approach also allows the database server to be hidden from the public intranet. What kinds of .NET technologies and best practices would you suggest? Thank you in advance!

    Read the article

  • Map null column as 0 in a legacy database (JPA)

    - by Indrek
    Using the Play! framework and its JPASupport class, I have run into a problem with a legacy database. I have the following class: @Entity @Table(name="product_catalog") public class ProductCatalog extends JPASupport { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) public Integer product_catalog; @OneToOne @JoinColumn(name="upper_catalog") public ProductCatalog upper_catalog; public String name; } Some product catalogs don't have an upper catalog, and this is referenced as 0 in the legacy database. If I supply upper_catalog as NULL, then as expected JPA inserts a NULL value into that database column. How could I force the null values to be 0 when writing to the database, and the other way around when reading from the database?

    Read the article

  • Please suggest the best way to design my database

    - by Raymond Ho
    I have a table named "Pages" and a table named "Categories". Each entry in the "Pages" table is linked to the "Categories" table. The "Categories" table has 5 entries: "Car", "Websites", "Technology", "Mobile Phones", and "Interest". So each time I add an entry to the "Pages" table, I need to map it to the "Categories" table so they are arranged properly. Here are my tables: Pages ______ id [PK] name url Categories ______ id [PK] Categoryname Pages2Categories ______ Pages.id Categories.id So my question is: is this the most efficient way to create this kind of relationship between tables? It seems very amateurish.
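    The Pages2Categories junction table is the standard way to model a many-to-many relationship, so this design is sound rather than amateurish. A small sketch of the same three tables and the typical join, using SQLite through Python purely for illustration:

        # Junction-table sketch: Pages <-> Categories many-to-many via Pages2Categories.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE Pages      (id INTEGER PRIMARY KEY, name TEXT, url TEXT);
            CREATE TABLE Categories (id INTEGER PRIMARY KEY, categoryname TEXT);
            CREATE TABLE Pages2Categories (
                page_id     INTEGER REFERENCES Pages(id),
                category_id INTEGER REFERENCES Categories(id),
                PRIMARY KEY (page_id, category_id)   -- prevents duplicate mappings
            );
            INSERT INTO Categories (categoryname) VALUES ('Car'), ('Websites');
            INSERT INTO Pages (name, url) VALUES ('My page', 'http://example.com');
            INSERT INTO Pages2Categories VALUES (1, 2);  -- page 1 belongs to 'Websites'
        """)

        # All pages in the 'Websites' category.
        rows = db.execute("""
            SELECT p.name, p.url
            FROM Pages p
            JOIN Pages2Categories pc ON pc.page_id = p.id
            JOIN Categories c        ON c.id = pc.category_id
            WHERE c.categoryname = 'Websites'
        """).fetchall()
        print(rows)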

    Read the article

  • Is there a database with git-like qualities?

    - by Mat
    I'm looking for a database where multiple users can contribute and commit new data; other users can then pull that data into their own database repository, all in a git-like manner. A transcriptional database, if you like; does such a thing exist? My current thinking is to dump the database to a single file as SQL, but that could well get unwieldy once it is of any size. Another option is to dump the database and use the filesystem, but again that gets unwieldy once it is of any size.

    Read the article

  • Multiple user database design

    - by dieguitoweb
    I have to develop a basic social network for an academic purpose, but I need some tips for the user management. The users are subdivided into 3 groups with different privileges: admins, analysts and standard users. For every user, the following information should be stored in the database: name, last name, e-mail, age, password. I'm not quite sure how I should design the database; I'm choosing between these two solutions: 1) one table called 'users' with a 'role' attribute that states what a user can and can't do, with the permissions managed via PHP; 2) every application user is a database user created with the 'CREATE ROLE' query (it's a Postgres database) and has permissions on some tables granted with the 'GRANT' statement. You should take into account that the project is for a database exam. Thanks!

    Read the article

  • How to send database data to a MySQL server to update the server database

    - by haisergeant
    Hi everyone, I am developing a smoking counter and I need to send all the smoking records (the times the user smoked) to a database on the server. The server database is a MySQL database. I know that I can send the data to a PHP page/script, and this page/script will then insert the records into the database. I would like to know: is there another way to update the database, because I have no knowledge of PHP? I usually work in Java and C/C++/Objective-C. If you know another way to do this task, please let me know. Any help would be appreciated.
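    The usual answer is that some server-side script has to sit between the app and MySQL, but it does not have to be PHP: any language the host supports will do. As a hedged illustration, here is a minimal receiver written in Python with Flask and mysql-connector-python; the URL, field names and table are invented, and a UTF-8 connection charset usually avoids the Russian-character corruption mentioned above:

        # Minimal receiving endpoint sketch: the app POSTs JSON such as
        # {"user_id": 1, "smoked_at": "2014-01-01 21:30:00"} and we insert it.
        from flask import Flask, request, jsonify
        import mysql.connector

        app = Flask(__name__)

        def get_conn():
            return mysql.connector.connect(host="localhost", user="counter",
                                           password="secret", database="smoking",
                                           charset="utf8mb4")  # handles Cyrillic text

        @app.route("/records", methods=["POST"])
        def add_record():
            data = request.get_json(force=True)
            conn = get_conn()
            cur = conn.cursor()
            cur.execute("INSERT INTO records (user_id, smoked_at) VALUES (%s, %s)",
                        (data["user_id"], data["smoked_at"]))
            conn.commit()
            conn.close()
            return jsonify({"status": "ok"}), 201

        if __name__ == "__main__":
            app.run()

    The app keeps making the same POST request it makes today, just pointed at this endpoint instead of a PHP page.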

    Read the article

  • Can Django flush its database(s) between every unit test

    - by mikem
    Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is the auto_increment counters are reset. Consider a test which pulls data out of the database by primary key: class ChangeLogTest(django.test.TestCase): def test_one(self): do_something_which_creates_two_log_entries() log = LogEntry.objects.get(id=1) assert_log_entry_correct(log) log = LogEntry.objects.get(id=2) assert_log_entry_correct(log) This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2, they might be 2 and 3. Now test_one fails. This is actually a two part question: Is it possible to force ./manage.py test to flush the database between each test case? Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
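    One hedged answer (available in Django releases newer than the 1.2 beta discussed here) is TransactionTestCase with reset_sequences = True, which resets auto_increment counters between tests; an even more robust option is to stop hard-coding primary keys altogether. A sketch of both, reusing the names from the question:

        # LogEntry, do_something_which_creates_two_log_entries and assert_log_entry_correct
        # are the helpers from the question's own test module.
        from django.test import TransactionTestCase

        # Option 1: have Django reset sequences (and thus auto_increment ids) per test.
        class ChangeLogTest(TransactionTestCase):
            reset_sequences = True

            def test_one(self):
                do_something_which_creates_two_log_entries()
                assert_log_entry_correct(LogEntry.objects.get(id=1))  # ids start at 1 again
                assert_log_entry_correct(LogEntry.objects.get(id=2))

        # Option 2: avoid depending on absolute ids at all.
        class ChangeLogTestOrdered(TransactionTestCase):
            def test_one(self):
                do_something_which_creates_two_log_entries()
                first, second = LogEntry.objects.order_by("id")[:2]
                assert_log_entry_correct(first)
                assert_log_entry_correct(second)

    Option 2 works even with the regular django.test.TestCase, since it no longer cares what the counters are.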

    Read the article

  • Designing a Relational Survey Questionnaire Database

    - by user1213055
    I'm trying to build a simple SQL database from the following Access database. Currently there are no relationships, and I just have two tables, male and female, with 6 sections in each form. How can I design it in a better way so that end users can connect to the database and analyze it using Stata or SPSS? I'm really confused about whether I should create one table with all the fields or break it down into different tables. The database is specific to this study only, so I'm not looking for a generic survey database where users can create surveys and capture them. Any feedback or suggestions are much appreciated.

    Read the article

  • Connecting Android app to MySQL database

    - by opuhliyvladyslav
    Can somebody help me with a question related to using a MySQL database? Yesterday I was building an app that collects some text data and sends it to a database located on a remote server. I was making a POST request and sending a few text fields to one table in the database. So MY QUESTION IS: "Can I use MySQL directly (not through the POST method)? And how do I use it?" (I would be glad to see URLs to solutions or examples.) P.S. When I was sending data to the server I got some errors in the database fields when sending Russian characters (what type of encoding should be sent from my app to the server?)

    Read the article

  • Can I use an "IN" filter on an Advantage Database Table (TAdsTable)?

    - by Anthony
    I want to apply a filter to an Advantage table using multiple values for an integer field. The equivalent SQL would be: SELECT * FROM TableName WHERE FieldName IN (1, 2, 3) Is it possible to do the same on an AdsTable without having to repeat the field using an "OR"? I want something like: Filter := 'FieldName IN (1, 2, 3)' Instead of: Filter := 'FieldName = 1 OR FieldName = 2 OR FieldName = 3'

    Read the article

  • Retrieving images when the database is in a remote location

    - by sasidhar
    Hi everyone, I am developing an application using Java. My application will be accessed by a number of different users simultaneously, and the database resides on a central server. Access to the remote database is handled by just giving the appropriate IP of the server in the Hibernate configuration file. My question is: I have to store a picture for each user of the database, and I heard that storing the image in the database and retrieving it from the database is not advised and has a negative impact on performance. Is that so? What are the other possible ways I can implement this? What is the best way to do it? Please help.

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed once one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes for bugs which were found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them: Anu's very important blog post here describes both the problem and solutions to handle it; she describes both the Test Attachment Cleaner tool and some QFE/CU releases which fix underlying bugs that prevented the tool from being fully effective. Brian Harry's blog post here describes the problem too. This forum thread describes the problem with some solution hints. Ravi Shanker's blog post here describes best practices on solving this (TBP). Grant Holiday's blog post here describes strategies for using the Test Attachment Cleaner, both to detect space problems and to rectify them.

    The problem can be divided into the following areas: publishing of test results from builds; publishing of manual test results and their attachments in particular; publishing of deployment binaries for use during a test run; and bugs in SQL Server preventing total cleanup of data. (All the published data above is published into the TFS database as attachments.) The test results include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. The pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder), also contributes a lot to the size of the attached data.

    In order to handle this systematically, I have set up a 3-stage process: find out if you have a database space issue; set up your TFS server to minimize potential database issues; and, if you have the "problem", clean up the database and otherwise keep it clean.

    Analyze the data. Are your database(s) growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries. I often find queries faster to use because I can tweak them the way I want, but be aware that these queries are undocumented and unsupported and may change whenever the product team wants to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session against the SQL Server hosting your TFS databases, and use the query below to find the project collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gives the following results, and it is pretty easy to see which collection to start the work on.

    Find out which tables are possibly too large. Keep a special watch on the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: from Grant's blog we learnt that tbl_Content is in the version-control category, so the only major issue we have here is tbl_AttachmentContent.

    Find out which team projects have possibly too-large attachments. In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case we got this result (some names removed), out of more than 100 team projects accumulated over quite some years. As can be seen, it is pretty obvious that "Byggtjeneste – Projects" is the main team project to take care of, with the ones on lines 2-4 as the next ones.

    Check which attachment types take up the most space. It can be useful to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    We then got this result: from this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space. Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension, sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    This gives a result like this. Now you should have collected enough information to tell you what to do – if you need to do something at all – and some of the information you need to set up your TAC settings file, both for a one-off cleanup and for scheduled maintenance later.
    Binaries will still be uploaded if code coverage is enabled in the test settings, or if you change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE. The hotfix should be installed on the build servers (the build agents), the machine hosting the Test Controller, local development computers (Visual Studio) and local test computers (MTM). It is not required on the TFS Server, the test agents or the build controller; it has no effect on these programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem with hanging "ghost" files. This seems to happen only in certain trigger situations, but to make sure it doesn't bite you, it is better to install this CU. There is no such CU for SQL Server 2008 pre-R2. Workaround: if you suspect hanging ghost files, they can, with some mental effort, be deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count,
               record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. Stop all components that depend on the SQL Server, like the TFS Server and SPS services (that is, all applications that connect to the SQL Server), then restart the SQL Server, and finally start all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it arrives I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases due to this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghost hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrinkdb command.

    Cleaning out the attachments: the TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point at a server URI including the team project collection, and also at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
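    A minimal sketch of such a wrapper, assuming Python is available on the machine running the schedule; the collection URL, project names and file paths are placeholders, and the tcmpt parameters follow the command line shown further down in this post:

        # Wrapper sketch: run the Test Attachment Cleaner once per team project.
        import subprocess

        COLLECTION = "http://tfsserver:8080/tfs/DefaultCollection"
        SETTINGS = r"C:\Tools\TAC\nightly-cleanup.xml"
        PROJECTS = ["ProjectA", "ProjectB", "ProjectC"]

        for project in PROJECTS:
            cmd = [
                "tcmpt", "attachmentcleanup",
                "/collection:" + COLLECTION,
                "/teamproject:" + project,
                "/settingsfile:" + SETTINGS,
                "/outputfile:" + r"C:\Logs\tac-" + project.replace(" ", "_") + ".log",
                "/mode:delete",
            ]
            print("Cleaning", project)
            subprocess.run(cmd, check=False)  # keep going even if one project fails

    Scheduled nightly with Windows Task Scheduler, this lines up with Ravi Shanker's advice above about deleting small chunks of data at regular intervals.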
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute-force technique: it works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This kind of setting can be useful for clearing out items both in a one-off clean-up operation and in general scheduled maintenance. There are two scenarios we need to consider: cleaning up an existing overgrown database, and maintaining a server to avoid an overgrown database using a scheduled TAC.

    1. Cleaning up a database which has grown too big due to these attachments. This job is a "once" job. We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size, and not bother about the smaller stuff; that can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    Or like this: if you want to remove only dlls and pdbs above that size, add an Extensions section. Without that section, all extensions will be deleted.

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC. Run a schedule every night that removes old items, and removes them in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved"/>
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which can work better here is a two-step approach: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could possibly care about. Then have another scheduled TAC task that more specifically takes out the attachments you are not likely to use.
    This task could be much more specific and, based on your analysis, clean out what you know is troublesome data.

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun >
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace"/>
              <Include value="dll"/>
              <Include value="pdb"/>
              <Include value="wmv"/>
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes "internal" extensions, but it actually recognizes any extension. To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database: you could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In this case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command and then follow up with a DBCC INDEXDEFRAG afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to.

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on HP-UX PA-RISC

    - by John Abraham
    As a follow-up to our original announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following HP-UX platforms: Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher): HP-UX PA-RISC (64-bit) (11.31); Release 12 (12.0.4 and higher, 12.1.1 and higher): HP-UX PA-RISC (64-bit) (11.31). This announcement for Oracle E-Business Suite 11i and R12 includes: Real Application Clusters (RAC), Oracle Database Vault, Transparent Data Encryption (Column Encryption), TDE Tablespace Encryption, Advanced Security Option (ASO)/Advanced Networking Option (ANO), Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances, Transportable Database and Transportable Tablespaces, and Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12. References: MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0); MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0); MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2; MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2; MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i; MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12; MOS Document 761570.1 - Database Preparation Guidelines for an Oracle E-Business Suite Release 12.1.1 Upgrade; MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i; MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12; MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i; MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12; MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i; MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12; MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option; MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2; MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2; MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2; MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2; MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g; MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g. Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Read the article

  • Best algorithm/practice when creating a search mechanism for your database?

    - by Alex Hope O'Connor
    I have been designing a database where it is very important to provide users with a good search mechanism. So I was wondering what some of the best practices are for using keywords to search over multiple database tables and return the relevant records. Some other things I am curious about: the user's location, if they provide an address, and the speed of the algorithm. Additional information: I am using C# and LINQ to SQL.

    Read the article

  • What are the advantages of storing XML in a relational database?

    - by Chris
    I was poking around the AdventureWorks database today and I noticed that a number of tables (HumanResources.JobCandidate and Sales.Individual, for example) have a column which stores XML data. What I would like to know is: what is the advantage of storing what is basically a database table row's worth of data in another table's column? Doesn't this make it difficult to query this information? Or is the assumption that the data won't need to be queried and just needs to be stored?

    Read the article
