Search Results

Search found 31891 results on 1276 pages for 'database schema'.


  • Connect to SQL Express database (5 replies)

    I have just joined the "I'm sure I've missed something obvious" club. I have VBExpress 2008 installed with SQL Express 2008 and the SQL Express Management Studio. I started building a prototype database in Management Studio: nothing complex, just a cascade of administrative tables to create a logical context for the real data. Next I created a project which would provide simple linked controls to p...

    Read the article

  • Oracle Database 11g Enterprise Edition and Oracle Data Guard (Japanese-language article)

    - by Yusuke.Yamamoto
    Published 2011/05/25. A Japanese-language article on using Oracle Data Guard with Oracle Database 11g Enterprise Edition, including standby configurations over a WAN. Details: http://oracletech.jp/products/pickup/000298.html

    Read the article

  • Oracle Database I/O over NFS: Direct NFS with NAS storage and SSD (Japanese-language seminar)

    - by user788995
    Published 2012/01/23. Japanese-language OTN seminar material, produced with the Oracle GRID Center, on using Direct NFS with NAS (Unified Storage) and SSD to improve Oracle Database I/O; the agenda covers an introduction to Direct NFS and its configuration. Materials: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/120113_C-11_DirectNFS.wmv http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/120113_C-11_DirectNFS.mp4 http://www.oracle.com/technetwork/jp/ondemand/db-technique/c-11-directnfs-1484611-ja.pdf

    Read the article

  • Upgrading to Oracle Database 11g Release 2 (Japanese-language seminar)

    - by user788995
    Published 2012/01/23. Japanese-language OTN seminar material on upgrading to Oracle Database 11g Release 2. Materials: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/111212_C-5_11gR2Upgrade.wmv http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/111212_C-5_11gR2Upgrade.mp4 http://www.oracle.com/technetwork/jp/ondemand/db-technique/c-5-11gr2upgrade-1448385-ja.pdf

    Read the article

  • Oracle Database performance (Japanese-language seminar)

    - by user788995
    Published 2012/01/23. Japanese-language OTN seminar material on Oracle Database 11g performance, touching on Enterprise Manager, SSD, and both OLTP and DWH workloads. Materials: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/120106_D-6_DB_1.wmv http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/120106_D-6_DB_1.mp4 http://www.oracle.com/technetwork/jp/ondemand/db-technique/d-6-db11g-1484773-ja.pdf

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview

    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

    - Different record types in one list that use different parsing rules
    - Hierarchical lists, for example customers with nested orders
    - Parsing instructions in the file data, such as delimiter types, field lengths, and type identifiers
    - Complex headers, such as multiple header lines or parseable information in the header
    - Skipping of lines
    - Conditional or choice fields

    Similar to the ODI File and XML File technologies, complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting tables.

    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, because the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD File

    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

    <?xml version="1.0"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                elementFormDefault="qualified"
                attributeFormDefault="unqualified"
                nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
      <xsd:element name="Root">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="Header">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
            <xsd:element name="Customer" maxOccurs="unbounded">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and more.

    nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.

    The nXSD schema in this example describes a file with a header row containing data, followed by rows of three comma-delimited string fields, for example:

    Redwood City Downtown Branch, 06/01/2011
    Ebeneezer Scrooge, Sandy Lane, Atherton
    Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

    Table ROOT (1 row):
    - ROOTPK: Primary Key for root element
    - SNPSFILENAME: Name of the file
    - SNPSFILEPATH: Path of the file
    - SNPSLOADDATE: Date of load

    Table HEADER (1 row):
    - ROOTFK: Foreign Key to ROOT record
    - ROWORDER: Order of row in native document
    - BRANCH: Data
    - BRANCHORDER: Order of Branch within row
    - LISTDATE: Data
    - LISTDATEORDER: Order of ListDate within row

    Table ADDRESS (2 rows):
    - ROOTFK: Foreign Key to ROOT record
    - ROWORDER: Order of row in native document
    - NAME: Data
    - NAMEORDER: Order of Name within row
    - STREET: Data
    - STREETORDER: Order of Street within row
    - CITY: Data
    - CITYORDER: Order of City within row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

    How to Create a Complex File Data Server in ODI

    After creating the nXSD file and a test data file, and storing them on a file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, and the external database, can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to set the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse-engineering the model will fail. All properties of the Complex File JDBC driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema

    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) are created for each XSD element of complex type. Use of complex files as sources is straightforward; when using them as targets, you have to make sure that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.

    Debugging and Error Handling

    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test file when testing a connection in the data server.
    If either the nXSD has an error or the data does not conform to the schema, an error will be displayed. Sample error message:

    Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

    Complex File FAQ

    Is the size of the native file limited by available memory?
    No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Should I always use the complex file driver instead of the file driver in ODI now?
    No. Use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled by the file driver.

    Are we generating XML out of flat files before we write it into a database?
    We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed as Java objects that use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Is the nXSD file interchangeable with SOA Suite?
    Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Can I start the Native Format Builder from ODI Studio?
    No, the Native Format Builder has to be started from a JDeveloper instance with BPEL. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can develop nXSD files manually using XSD editors.

    When is the database data written back to the native file?
    Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is used exclusively for reading, so that no unnecessary write-backs occur.

    Is the nXSD metadata part of the ODI Master or Work Repository?
    No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system.

    Where can I find sample nXSD files?
    The Application Server Adapter Users Guide contains nXSD samples for various use cases.
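
    As a quick illustration of what the driver exposes (a hedged sketch, not from the original article), the ROOT/HEADER/ADDRESS tables described above can be queried with ordinary SQL once the connection is open; the join keys and ROWORDER columns are the ones the driver generates:

    -- Read every customer row in original file order, together with the header fields.
    SELECT h.BRANCH,
           h.LISTDATE,
           a.NAME,
           a.STREET,
           a.CITY
    FROM   ROOT r
           JOIN HEADER  h ON h.ROOTFK = r.ROOTPK
           JOIN ADDRESS a ON a.ROOTFK = r.ROOTPK
    ORDER  BY a.ROWORDER;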

    Read the article

  • SQL Alter database failed - being used by checkpoint process

    - by Manjot
    Hi, on my SQL Server 2008 instance I have a SQL Agent job that restores a database nightly. Procedure:

    - Find the latest backup on the other server
    - Kill all connections to the destination database
    - Restore the destination database WITH REPLACE, RECOVERY

    It failed last weekend because the database was being used by a system process (spid 11, checkpoint). Since I couldn't kill the system process, I fixed it by restarting SQL Server. It failed this weekend as well with the same error (a checkpoint process in this database, as shown by sp_who), and when I run:

    SELECT session_id, request_id, command, status, start_time FROM sys.dm_exec_requests WHERE session_id = 11

    it shows:

    11 0 CHECKPOINT background 2010-04-06 10:17:49.103

    I can't restart the server every time it fails. Can anyone please help me fix this? Thanks in advance, Manjot
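
    For reference, a minimal T-SQL sketch of the kill-connections-then-restore step described above (the database name and backup path are hypothetical); note that it only forces off user sessions, so it would not by itself clear a block held by a system checkpoint process:

    -- Hypothetical names; adjust to the real destination database and backup file.
    ALTER DATABASE DestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;   -- disconnect user sessions
    RESTORE DATABASE DestDb
        FROM DISK = N'\\backupserver\share\DestDb_latest.bak'
        WITH REPLACE, RECOVERY;
    ALTER DATABASE DestDb SET MULTI_USER;                            -- reopen the database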

    Read the article

  • Roundcube can't connect to PostgreSQL database

    - by kenny.r
    I'm trying to install Roundcube on a CentOS 5.5 server, with a PostgreSQL 8.1.22 database. The first page of the installer script, which checks for the presence of PHP libraries and such, gives me green OKs across the board. I even went out of my way to install the optional ones. Page two generates two configuration files (main.inc.php and db.inc.php), which I put into place. Page three is where things go wrong:

    Check DB config
    DSN (write): NOT OK (MDB2 Error: connect failed)
    Make sure that the configured database exists and that the user has write privileges
    DSN: pgsql://roundcube:password@localhost/roundcubemail

    The info you see there (user roundcube, password password, server localhost and database roundcubemail) is all correct. The database roundcubemail belongs to the user roundcube and it has write permissions. I have no clue why it can't connect to that database. I'm managing it with phpPgAdmin, which is running on the very same Apache, on the same server!
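
    As a hedged sketch only (not part of the original question), the ownership and connect privilege described above could be double-checked from psql with a query like the following, using the names from the DSN:

    -- Run as a superuser: shows who owns roundcubemail and whether roundcube may connect.
    SELECT d.datname,
           pg_catalog.pg_get_userbyid(d.datdba) AS owner,
           has_database_privilege('roundcube', d.datname, 'CONNECT') AS can_connect
    FROM   pg_catalog.pg_database d
    WHERE  d.datname = 'roundcubemail';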

    Read the article

  • database on SSD: data only or the DBM program too?

    - by simone
    I plan on moving the data I use for statistical analysis (100-ish GB) onto an SSD. The data is either SQLite single-file DBs or PostgreSQL-managed data. The SSD is 240 GB, 550 MB/s read and 520 MB/s write. Should I reserve that space for the data only, or would it be a good idea to install the operating system (Mac OS X) and the application directory (Adobe Suite, Microsoft Office and the like) on the SSD too? And would it make a substantial speed difference whether I also install the PostgreSQL binaries on the SSD? I have plenty of other space (another 300 GB hard drive, and a 1 TB one). I don't know the specs of the non-SSD drives, though they're our standard equipment on all Macs, and they're definitely OK. Thanks.

    Read the article

  • Issue with Exchange 2010 and Removing a Mailbox Database

    - by ThaKidd
    I did a 2003 to 2010 transition and everything is working well. During the 2010 install, a database was copied over with a random number at the end. I found out and moved three system mailboxes out of it into the database that all of the client accounts are in. I used the EMS to move those mailboxes to the other store, then used the EMC to remove the mailbox database. The problem is, I am now getting an error every few hours in Event Viewer complaining about this database. The error is:

    MSExchangeRepl - 4098
    The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'SERVER'. Error: (nothing reported after this)

    Does anyone know how to fix this issue? In advance, I appreciate your help and thanks for your valuable input!

    Read the article

  • Are there any reasons to duplicate a table in the same database? (1 reply)

    - by bob
    Let's say we have several MySQL servers, one master and some slaves, and a member table which contains more than 5,000,000 people. Are there any reasons (performance, atomicity, etc.) to use duplicated tables like member_1, member_2, member_3 and then switch among them randomly when operating on them (especially for SELECT queries)?

    Read the article

  • Moving the Windows Workflow database: safe enough?

    - by Chris
    We have a Windows Workflow service that runs in the IIS context and persists to a database between hydrates. It has the Tracking Service turned on as well. We're looking to move the database to another server, and I wanted to make sure there are no gotchas in doing so. My current plan is just to spin down IIS to stop all activity, back up the database, migrate the database, then flip the connection strings in my application to point to the new one. My main concern is whether existing workflows somehow need to stay on the same database, or whether some activity needs to happen for them to work after the move. I wouldn't think so, but I'm just planning ahead.

    Read the article

  • Troubleshooting a slow database server with no load

    - by user1721724
    I'm getting ready to soft launch my website and I've run into a problem that I think is caused by my MySQL database running on Fedora. All websites run fine, just as I'd expect, but any page that establishes a database connection hangs until the connection is established, and then, bang, the site loads as it should. For example, my landing page (http://www.thrusong.com) doesn't make a database connection and loads quickly. User profile pages (http://www.thrusong.com/john) make a database connection and load slowly, even though most of the data comes from memcached and the database currently has no load on it. This problem just came up yesterday when my router died and I began using my Pace 2Wire modem with its built-in router. Before, my old router was set to handle everything. My ISP says the settings in the modem are correct. Any ideas? Thanks in advance.

    Read the article

  • Why didn't database partitioning work? Extract from thedailywtf.com

    - by questzen
    Original link: http://thedailywtf.com/Articles/The-Certified-DBA.aspx. Article summary: the DBA suggests an approach involving rigorous partitioning, 10 partitions per disk (3 actual disks and 3 RAID). The stats show that the performance is non-optimal. The DBA then suggests an alternative of 1 partition per disk (with more disks added). This also fails. The sysadmin then sets up a single disk with a single partition and saves the day. The size of the disks was not mentioned, but given today's typical disk sizes (on the order of 100 GB) the partitions would be huge, and it surprises me that a single disk holding everything outperformed. Initially I suspected that the data was segregated, hence faster reads. But how come the performance didn't degrade over time with all the inserts and updates happening? I saw this on reddit, but the explanation there was largely spindle/platter centered, and there was no mention of that in the article. Is there any other reason? I can only guess that the tables were using an incorrect hash distribution, causing non-uniform allocation across disks (wrong partitioning); this would increase fetch times. Any thoughts?

    Read the article

  • Access denied for user 'diduser'@'localhost' to database 'diddata' (1044, 42000)

    - by Arlen Beiler
    I am trying to set up a MySQL server, and when I went to create a second user it wouldn't give it permissions for the database. I can connect fine as long as I don't specify a database:

    Access denied for user 'user'@'localhost' to database 'diddata'

    The connection details are:

    { 'host' : 'localhost', 'user' : 'user', 'password' : 'password', 'database': 'diddata' };

    And to create the DB and user I did:

    CREATE DATABASE IF NOT EXISTS diddata;
    CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL ON user.* TO 'user'@'localhost';

    Note that I've changed the username and password in this question. I've already checked the privileges in MySQL Workbench and they are there.
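
    For comparison, a minimal sketch (hypothetical, not from the original post) of a grant scoped to the diddata schema itself rather than to user.*:

    -- Grant the application user privileges on the diddata database.
    GRANT ALL PRIVILEGES ON diddata.* TO 'user'@'localhost';
    FLUSH PRIVILEGES;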

    Read the article

  • What is the easiest way to apply database functionality into my daily life?

    - by Daddy Warbox
    Let me try to explain it by listing some of the things I want to do:

    - Submit random thoughts, notes, facts, and to-do tasks of any sort and at any time.
    - Tag each of these submissions freely.
    - Manage these tags centrally.
    - Associate metadata with submissions and tags.
    - Search, filter, and sort submissions. I want lots of power here.
    - Display views of submissions (including within searches) in a hierarchy.
    - Create said hierarchies easily by ordering the relevant tags.

    I'm thinking of some kind of desktop program that allows me to quickly do all of these things. A web service could work too, but it would need offline capabilities. I don't want to have to pay for this, if that's possible. Also, as I know regex and SQL, I wouldn't mind solutions involving the use of either.

    Read the article

  • LLBLGen Pro v3.0 with Entity Framework v4.0 (12m video)

    - by FransBouma
    Today I recorded a video in which I illustrate some of the database-first functionality available in LLBLGen Pro v3.0. LLBLGen Pro v3.0 also supports model-first functionality, which I hope to illustrate in an upcoming video. LLBLGen Pro v3.0 is currently in beta and is scheduled to RTM some time in May 2010. It supports the following frameworks out of the box, with more scheduled to follow in the coming year: LLBLGen Pro RTL (our own O/R mapper framework), Linq to Sql, NHibernate and Entity Framework (v1 and v4). The video linked below illustrates the creation of an entity model for Entity Framework v4 by reverse engineering the SQL Server 2008 example database 'AdventureWorks'. The following topics (among others) are included in the video:

    - Abbreviation support (example: convert 'Qty' into 'Quantity' during name construction)
    - Flexible, framework-specific settings
    - Attribute definitions for various elements (so no requirement for buddy classes or messing with generated code or templates)
    - Retrieval of relational model data from a database
    - Reverse engineering of tables into entities, automatically placed in groups
    - Auto-creation of inheritance hierarchies
    - Refactoring of entity fields into Value Type Definitions (DDD)
    - Mapping a Typed View onto a stored procedure resultset
    - Creation of a Typed List (definition of a query with a projection) on a set of related entities
    - Validation and correction of found inconsistencies and errors
    - Generating code using one of the pre-defined presets
    - Illustration of the code in VS.NET 2010

    It also gives a good overview of what it takes with LLBLGen Pro v3.0 to start from a new project, point it to a database, get an entity model, perform tweaks and validation, and generate code which is ready to run. I am no video recording expert, so there's no audio and some mouse movements might be a little too quick. If that's the case, please pause the video. It's rather big (52 MB). Click here to open the HTML page with the video (Flash). Opens in a new window. LLBLGen Pro v3.0 is currently in beta (available for v2.x customers) and scheduled to be released somewhere in May 2010.

    Read the article

  • Why is TDD not working here?

    - by TobiMcNamobi
    I want to write a class A that has a method calculate(<params>). That method should calculate a value using database data. So I wrote a class Test_A for unit testing (TDD). The database access is done through another class, which I have mocked with a class we can call Accessor_Mockup. Now, the TDD cycle requires me to add a test that fails and make the simplest changes to A so that the test passes. So I add data to Accessor_Mockup and call A.calculate with appropriate parameters. But why should A use the accessor class at all? It would be simpler (!) if the class just "knew" the values it could retrieve from the database. For every test I write I could introduce such a new value (or an if-branch or whatever). But wait ... TDD is more than that. There is the refactoring part. But that sounds to me like "OK, I can do this all with a big if-elseif construct. I could refactor it using a new class ... but instead I make use of the DB accessor and do this in a totally different way. The code will not necessarily look better afterwards, but I know I WANT to use the database".

    Read the article

  • Hands-on solutions – the Oracle demo platform

    - by A&C Redaktion
    With the new demo platform, Oracle aims to offer quick access to prepared demo environments, because sometimes a short demonstration says more than a thousand attempted explanations. Oracle has therefore set up a demo platform on which new solutions and products are continually shown in a hands-on way. The focus is not on theoretical possibilities but on very practical problem cases and how they are handled. The current topic is database security, using the E-Business Suite as an example, a topic many a partner can put to good use in customer conversations. In the following demo environment you can present database security features directly to your customers, such as transparent encryption of application data (shown here with the E-Business Suite, though it also works with SAP or other applications) and the privilege concept for application users and DBAs. In the demo you can explain the functionality of Oracle Database Vault, Oracle Advanced Security, the Security Option and Oracle Label Security.

    Oracle Advanced Security
    - Address Industry and Privacy Regulations with Encryption
    - Protect Application Data with Transparent Data Encryption
    - Encrypt Data on the Network

    Oracle Database Vault
    - Increase Security for Data Consolidation and Out-Sourced Administration
    - Protect Application Data with Privileged User Controls
    - Enforce Multi-factor Authorization and Separation of Duty

    Oracle Label Security
    - Use Security Groups to control data access
    - Assign OLS attributes to application, not necessarily database, users

    Each demo provides a sample demo guide you can use for orientation. This is the direct route to the demo platform, where you can view the demos for your own learning and also reserve a time slot for customer presentations.
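
    To give a flavour of the transparent data encryption feature listed above, here is a minimal Oracle SQL sketch (a hypothetical table, not taken from the demo platform; it assumes Oracle Advanced Security is licensed and an encryption wallet is already configured and open):

    -- The CARD_NUMBER column is stored encrypted on disk (TDE column encryption),
    -- while applications continue to read and write it transparently.
    CREATE TABLE demo_payments (
        payment_id   NUMBER PRIMARY KEY,
        customer     VARCHAR2(100),
        card_number  VARCHAR2(19) ENCRYPT USING 'AES192'
    );

    -- An existing column can also be encrypted in place:
    ALTER TABLE demo_payments MODIFY (customer ENCRYPT);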

    Read the article

  • Delphi Client-Server Application using Firebird 2.5 error

    - by Japie Bosman
    I have a lengthy question to ask. First of all, I'm still very new when it comes to Delphi programming, and my experience has been mostly developing small single-user database applications using ADO and an Access database. I now need to make the transition to a client-server application, and this is where the problem starts. I decided to use Firebird 2.5 embedded as my database, as it is open source, it can be used with the InterBase components in Delphi, and multiple clients can access the database simultaneously. So I followed the InterBase tutorial in Delphi. I managed to connect the client to the server and see the data in the example (while both are running on my PC), but when I tried to move the client to another PC, keeping the server on mine, and ran it to see if I could connect to the server, it gave me the following error:

    Exception EIdSocketError in module clientDemo.exe at 0029DCAC. Socket Error # 10061 Connection refused.

    I understand that this might be because the host is defined as localhost in the client. But here is my first question: in the TSQLConnection you can set the hostname under Driver-Hostname. The thing I want to know is how you do this at run time, as I cannot get at the property when I try to make an edit box to allow the user to enter the value and then set it via code, for example:

    SQLConnection1.Driver.Hostname := edtHost.Text;

    The thing is, there is no such property to set, so how do you set the hostname at run time? I'm using Delphi XE2. There are still a lot of questions to come, especially when it comes to deployment, but I will take this piece by piece, and I appreciate the advice.

    Read the article

  • Prepared statement alternatives for this middle-man program?

    - by user2813274
    I have a program that uses prepared statements to connect and write to a database, and it works nicely. I now need to create a middle-man program to insert between this program and the database. This middle-man program will actually write to multiple databases and handle any errors and connection issues. I would like advice on how to replicate the prepared statements so as to create minimal impact on the existing program, but I am not sure where to start. I have thought about creating a "SQL statement class" that mimics a prepared statement, only that seems silly. The existing program is in Java, although it's going to be networked anyway, so I would be open to writing it in just about anything that would make sense. The databases are currently MySQL, although I would like to keep open the option of changing the database type in the future. My main question is: what should the interface for this program look like, and does doing this even make sense? A distributed DB would be the ideal solution, but they seem overly complex and expensive for my needs. I am hoping to replicate the main functionality of a distributed DB via this middle-man. I am not too familiar with SQL-based servers distributing data (or databases in general...); perhaps I am fighting an uphill battle by trying to solve it via programming, but I would like to at least make an attempt.

    Read the article

  • How to handle monetary values in PHP and MySql?

    - by Songo
    I've inherited a huge pile of legacy code written in PHP on top of a MySQL database. The thing I noticed is that the application uses doubles for storage and manipulation of data. Now I have come across numerous posts mentioning how doubles are not suited for monetary operations because of rounding errors. However, I have yet to come across a complete solution to how monetary values should be handled in PHP code and stored in a MySQL database. Is there a best practice when it comes to handling money specifically in PHP? Things I'm looking for are:

    - How should the data be stored in the database? Column type? Size?
    - How should the data be handled in normal addition, subtraction, multiplication or division?
    - When should I round the values? How much rounding is acceptable, if any?
    - Is there a difference between handling large monetary values and small ones?

    Note: a VERY simplified sample of how I might encounter money values in everyday life:

    $a = $_POST['price_in_dollars'];   // --> (ex: 25.06) will be read as a string; should it be cast to double?
    $b = $_POST['discount_rate'];      // --> (ex: 0.35) value will always be less than 1
    $valueToBeStored = $a * $b;        // --> any hint here is welcomed
    $valueFromDatabase = $row['price'];         // --> price column in database could be double, decimal, ...etc.
    $priceToPrint = $valueFromDatabase * 0.25;  // again, cast needed or not?

    I hope you use this sample code as a means to bring out more use cases, and not take it literally, of course.

    Bonus question: if I'm to use an ORM such as Doctrine or PROPEL, how different will it be to use money in my code?
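
    On the storage side only, a hedged sketch (the table and column names are made up for illustration) of the fixed-point DECIMAL convention the question is asking about:

    -- DECIMAL stores exact fixed-point values, avoiding binary floating-point
    -- rounding for monetary amounts; (13,4) leaves room for large totals and
    -- sub-cent precision for intermediate calculations.
    CREATE TABLE order_item (
        id        INT UNSIGNED  NOT NULL AUTO_INCREMENT PRIMARY KEY,
        title     VARCHAR(255)  NOT NULL,
        price     DECIMAL(13,4) NOT NULL,
        discount  DECIMAL(5,4)  NOT NULL DEFAULT 0.0000
    );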

    Read the article
