Search Results

Search found 3710 results on 149 pages for 'databases'.


  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance and are ideal for running applications and databases that traditionally run in separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical. You are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties, so an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments just like other critical production servers: in a data center protected by physical controls, network firewalls, intrusion detection and prevention, and so on.

    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:

    1. InfiniBand, which functions as a secure private backplane
    2. IPTables, which performs stateful packet inspection for all nodes; Cellwall implements firewall services on each cell using IPTables
    3. Hardware-accelerated encryption for data at rest on storage cells

    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.

    Database Vault: Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege and other preventive controls to ensure data integrity and data privacy. It proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks, including controlling when a command can be executed, or allowing the DBA to manage the health of the database and its objects without being able to see the data.

    Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection.

    Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification. Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.

    Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data so that production data can be shared safely with IT developers or offshore business partners.

    Audit Vault and Database Firewall: Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks.

    Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and local governments and agencies.
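
    As a rough illustration of the Advanced Security capability described above, transparent data encryption can be applied with ordinary DDL; the table, column and tablespace names here are made up for the example, and a configured encryption wallet is assumed:

        -- Minimal sketch (assumes an encryption wallet is already open on the instance)
        -- Encrypt a sensitive column in an existing, hypothetical table
        ALTER TABLE employees MODIFY (ssn ENCRYPT USING 'AES256');

        -- Or create a tablespace whose contents are encrypted at rest
        CREATE TABLESPACE secure_data
          DATAFILE 'secure_data01.dbf' SIZE 100M
          ENCRYPTION USING 'AES256'
          DEFAULT STORAGE (ENCRYPT);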

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle's enthusiasm in evangelizing its latest innovations may leave some to wonder if we've lost sight of the fact that not all database applications are created equal. After all, many databases don't have the business requirements for high availability and data protection that require all of Oracle's 'stuff'. For many real-world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost. Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain.

    Bronze is appropriate for databases where simple restart or restore from backup is 'HA enough'. Bronze is based upon a single-instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart.

    Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure, as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart.

    Gold raises the game substantially for business-critical applications that can't accept vulnerability to single points of failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real-time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times.

    Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero-downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement, but they deliver substantial value for your most critical applications where downtime and data loss are not an option.

    The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection - the "one size does not fit all" point in the title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another - whether it's your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at:

    - Oracle Maximum Availability Architecture - Webcast
    - Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity. The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics: In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it's probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant. A new database is created and assigned when a tenant is provisioned.
    Pros:
    - Complete isolation between tenants. All the data for a tenant lives in a database only that tenant can access.
    Cons:
    - It's not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1GB. You might be paying for storage that you don't really use.
    - A different connection pool is required per database.
    - Updates must be replicated across all the databases.
    - You need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned to a different schema and database user (a minimal sketch of this approach appears after this list).
    Pros:
    - You only pay for a single database.
    - Data is isolated at the schema level. If the credentials for one tenant are compromised, the rest of the data for the other tenants is not.
    Cons:
    - You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely.
    - Updates must be replicated across all the schemas.
    - The connection pool for the database must maintain a different connection per tenant (or set of credentials).
    - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.

    Centralizing the database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All the data operations are performed through stored procedures that centralize the access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.
    Pros:
    - You only pay for a single database.
    - You only have one set of objects to maintain and back up.
    Cons:
    - There is no real isolation. All the data for the different tenants is shared in the same tables.
    - You cannot use a traditional ORM like EF Code First for consuming the data.
    - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.

    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria.
    Pros:
    - You only have a single database with multiple federations.
    - You can use filtering in the connections to pick the right federation, so any ORM could be used to consume the data.
    Cons:
    - There is no real isolation at the database level. The isolation is enforced programmatically with federations.
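
    By way of illustration only, here is a rough sketch of what the schema-per-tenant model can look like in T-SQL; the tenant, login, schema and table names are invented for the example:

        -- Hypothetical tenant "contoso": its own schema, its own login/user,
        -- and permissions limited to that schema only.
        -- (On SQL Azure, CREATE LOGIN is executed in the master database.)
        CREATE SCHEMA contoso;

        CREATE LOGIN contoso_login WITH PASSWORD = 'use-a-strong-password-here';
        CREATE USER contoso_user FOR LOGIN contoso_login
            WITH DEFAULT_SCHEMA = contoso;

        -- The tenant's copy of the application tables lives in its schema
        CREATE TABLE contoso.Orders (
            OrderId   int IDENTITY PRIMARY KEY,
            Placed    datetime NOT NULL,
            Total     decimal(10, 2) NOT NULL
        );

        -- Restrict the tenant user to its own schema
        GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::contoso TO contoso_user;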

    Read the article

  • Android - How to Use SQLiteDatabase.open?

    - by Edwin Lee
    Hi all, I'm trying to use

        SQLiteDatabase.openDatabase(
            "/data/data/edwin11.myapp/databases/myapp.db",
            null,
            (SQLiteDatabase.CREATE_IF_NECESSARY | SQLiteDatabase.NO_LOCALIZED_COLLATORS));

    to create/open a database instead of making use of the SQLiteOpenHelper (because I want to pass in the flag SQLiteDatabase.NO_LOCALIZED_COLLATORS). However, I am getting this exception for that line of code:

        04-18 09:50:03.585: ERROR/Database(3471): sqlite3_open_v2("/data/data/edwin11.myapp/databases/myapp.db", &handle, 6, NULL) failed
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): java.lang.RuntimeException: An error occured while executing doInBackground()
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at android.os.AsyncTask$3.done(AsyncTask.java:200)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:234)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:258)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.FutureTask.run(FutureTask.java:122)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:648)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:673)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at java.lang.Thread.run(Thread.java:1060)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): Caused by: android.database.sqlite.SQLiteException: unable to open database file
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at android.database.sqlite.SQLiteDatabase.dbopen(Native Method)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at android.database.sqlite.SQLiteDatabase.<init>(SQLiteDatabase.java:1584)
        04-18 09:50:03.665: ERROR/AndroidRuntime(3471): at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:638)
        ...

    Doing some testing just before that line of code (using File.exists) shows that the file /data/data/edwin11.myapp/databases/myapp.db does not exist. Would that be the cause of the error? (Or am I just using SQLiteDatabase.openDatabase the wrong way?) Would it help if I create the file beforehand? (Shouldn't that be taken care of by the SQLiteDatabase.CREATE_IF_NECESSARY flag that I passed in?) If creating the file manually is the way to go, is it just an empty file, or do I have to write something to it? Thanks and Regards.

    Read the article

  • Delphi dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend to work on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration is done internally.

    Update: one problem I found now is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. So we cannot just change the database and the connection code page to Unicode. We also have to modify all datamodules to use the new field type. The modified datamodule, however, will not be backwards compatible.

    Read the article

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend to work on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration is done internally.

    Update: one problem I found now is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. It looks like a lot of work lies ahead. We could try to avoid persistent fields (and add calculated fields at run time), but of course we would prefer a solution which does not require so many changes in existing units and DFM files.

    Read the article

  • SQL Server: Is it possible to prevent SQL Agent from failing a step on error?

    - by Kenneth
    I have a stored procedure that runs custom backups for around 60 SQL servers (a mix of 2000 through 2008 R2). Occasionally, due to issues outside of my control (backup device inaccessible, network error, etc.), an individual backup on one or two databases will fail. This causes the entire step to fail, which means any subsequent backup commands are not executed and half of the databases on a given server may not be backed up. On the 2005+ boxes I am using TRY/CATCH blocks to manage these problems and continue backing up the remaining databases. On a 2000 server, however, I have no way to prevent this error from failing the entire step:

        Msg 3201, Level 16, State 1, Line 1
        Cannot open backup device 'db-diff(\PATH\DB-DIFF-03-16-2010.DIF)'. Operating system error 5(Access is denied.).
        Msg 3013, Level 16, State 1, Line 1
        BACKUP DATABASE is terminating abnormally.

    I am simply asking if anything like TRY/CATCH is possible in SQL 2000? I realize there are no built-in methods for this, so I guess I am looking for some creativity. Even when wrapping each backup (or any failing statement) via sp_executesql, the job fails instantly. Example:

        DECLARE @x INT, @iReturn INT
        PRINT 'Executing statement that will fail with 208.'
        EXEC @iReturn = Sp_executesql N'SELECT * from TABLETHATDOESNTEXIST;'
        PRINT Cast(@iReturn AS NVARCHAR) --In SSMS this return code prints. Executed as a job it fails and aborts before this statement.
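
    One pattern sometimes tried on pre-2005 servers is to loop over the databases with dynamic SQL and check @@ERROR after each backup, logging failures instead of stopping. The sketch below is only an illustration (database and path names are invented), and @@ERROR will not intercept every batch-aborting condition, which may be exactly the behaviour the question describes, so it needs testing against the specific errors involved:

        -- Sketch only: attempt each backup and record failures rather than stopping.
        DECLARE @db sysname, @sql nvarchar(4000), @err int

        DECLARE dbs CURSOR FOR
            SELECT name FROM master.dbo.sysdatabases WHERE name NOT IN ('tempdb')

        OPEN dbs
        FETCH NEXT FROM dbs INTO @db
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'BACKUP DATABASE [' + @db + N'] TO DISK = ''\\backupshare\' + @db + N'.bak'''
            EXEC sp_executesql @sql
            SET @err = @@ERROR
            IF @err <> 0
                PRINT 'Backup of ' + @db + ' failed with error ' + CAST(@err AS varchar(10))
            FETCH NEXT FROM dbs INTO @db
        END

        CLOSE dbs
        DEALLOCATE dbs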

    Read the article

  • Why can't I create a database in an empty ASP MVC 2 project using Project->Add->New Item->SQL Server

    - by Dr Dork
    I'm diving head first into ASP MVC and am playing around with creating and manipulating a database. I did a search and found this tutorial for creating a database; however, when I follow it, I get this error when trying to add a new database to my fresh, empty ASP MVC 2 project...

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    The only requirement the tutorial mentioned was SQL Server Express, but when I went to download it, it said it was already installed. I'm assuming it was part of the VS 2010 RC I installed and am running. So I don't know what else I need if I am missing something. This is all new to me, so I'm sure I'm missing something obvious here, and after I'm done posting this question, I plan to do some more research into the topic of databases and how they work with ASP MVC. In the meantime, I was hoping you could help me answer a couple of high-level questions:

    - What am I missing/forgetting to do that is causing this error?
    - Any suggestions for good resources/tutorials that focus on using databases with ASP MVC? I've done a lot of database programming in the past, so I'm familiar with the concepts of relational databases and the SQL language. I wish I could find a good resource for learning how to work with them in an ASP dev environment, as well as a good breakdown of all the related technologies used for working with them (i.e. LINQ to SQL).

    Thanks so much in advance for all your help! I'm going to start researching these questions right now.

    Read the article

  • PHP PDO SQL Server Select statement not replacing question marks

    - by Metropolis
    A while ago I wrote a database class which uses PDO in order to connect to SQL Server databases and also to MySQL databases. It has always replaced the question marks fine when using it on the MySQL databases, but for the SQL Server database I had to create a workaround which basically replaces the question marks manually. Here is the code for that:

        if($this->getPDODriver() == 'odbc' && !empty($values_a) && substr_count($query_s, "?") > 0) {
            $query_s = preg_replace(array_fill(0, substr_count($query_s, "?"), '/\?/'), $values_a, $query_s, 1);
            $values_a = NULL;
        }

    Now, I understand that this completely defeats the purpose of the question marks and PDO, but it has been working fine for me. What I would like to do now, though, is find out why the question marks are not getting replaced in the first place, and remove this workaround. If I have a select statement like the following:

        SELECT * FROM database WHERE value = ?

    that is what the query looks like when I go to prepare it, but when I display the query results, it is a blank array. Just remember, this class is working fine with MySQL, and it is working fine with the workaround above. So I know it has something to do with the question marks.

    Read the article

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend to work on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration is done internally.

    Read the article

  • Script throwing unexpected operator when using mysqldump

    - by Astron
    A portion of a script I use to backup MySQL databases has stopped working correctly after upgrading a Debian box to 6.0 Squeeze. I have tested the backup code via CLI and it works fine. I believe the problem is in the selection of the databases before the backup occurs, possibly something to do with the $skipdb variable. If there is a better way to perform the function then I'm willing to try something new. Any insight would be greatly appreciated.

        $ sudo ./script.sh
        [: 138: information_schema: unexpected operator
        [: 138: -1: unexpected operator
        [: 138: mysql: unexpected operator
        [: 138: -1: unexpected operator

    Using bash -x script, here is one of the iterations:

        + for db in '$DBS'
        + skipdb=-1
        + '[' test '!=' '' ']'
        + for i in '$IGGY'
        + '[' mysql == test ']'
        + :
        + '[' -1 == -1 ']'
        ++ /bin/date +%F
        + FILE=/backups/hostname.2011-03-20.mysql.mysql.tar.gz
        + '[' no = yes ']'
        + /usr/bin/mysqldump --single-transaction -u root -h localhost '-ppassword' mysql
        + /bin/tar -czvf /backups/hostname.2011-03-20.mysql.mysql.tar.gz mysql.sql mysql.sql
        + rm -f mysql.sql

    Here is the code:

        if [ $MYSQL_UP = "yes" ]; then
            echo "MySQL DUMP" >> /tmp/update.log
            echo "--------------------------------" >> /tmp/update.log
            DBS="$($MYSQL -u $MyUSER -h $MyHOST -p"$MyPASS" -Bse 'show databases')"
            for db in $DBS
            do
                skipdb=-1
                if [ "$IGGY" != "" ] ; then
                    for i in $IGGY
                    do
                        [ "$db" == "$i" ] && skipdb=1 || :
                    done
                fi
                if [ "$skipdb" == "-1" ] ; then
                    FILE="$DEST$HOST.`$DATE +"%F"`.$db.mysql.tar.gz"
                    if [ $ENCRYPT = "yes" ]; then
                        $MYSQLDUMP -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf - $db.sql | $OPENSSL enc -aes-256-cbc -salt -out $FILE.enc -k $ENC_PASS && rm -f $db.sql
                    else
                        $MYSQLDUMP --single-transaction -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf $FILE $db.sql && rm -f $db.sql
                    fi
                fi
            done
        fi

    Read the article

  • How to setup Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:

    - 5000 databases (spread out over 5 servers)
    - 1 database per client (so you can infer there are 1000 clients)
    - 2 to 2000 users per client (let's say the average is 100 users per client)
    - Clients (databases) come and go every day (let's assume most remain for at least one year)
    - Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    The Question: How would you set up Lucene search so that each client can only search within its own database? How would you set up the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their (part of the) index? (This may be trivial--not sure yet.)

    Possible Solutions:

    Make an index for each client (database).
    - Pro: Search is faster (than the one-index-for-all method). Indices are relative to the size of the client's data.
    - Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.

    Have a single, gigantic index with a database_name field. Always include database_name as a filter.
    - Pro: Not sure. Maybe good for tech support or the billing dept to search all databases for info.
    - Con: Search is slower (than the index-per-client method). Flawed security if the query filter is removed.

    For Example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients. And each client gets their own database. His situation is quite similar to mine. Although, he didn't elaborate on the setup (particularly indices); hence, the need for this question. One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.

    Read the article

  • Merging datasets based on 2 variables in SAS.

    - by John
    Hey guys, my question is the following: I'm working with different databases, all of which contain information about 1000+ companies. A company is defined by its ticker code (the short version of the name (Ford as F) usually seen on stock quotation boards). Aside from the ticker code to merge on, I also have to merge on the time; I used month as a count variable throughout my time series. The final purpose is to have a regression of the kind Y(jt) = c + X(jt) + X1(jt) etc., with j = company (ticker) and t = time (month).

    So imagine I have 2 databases, one which is the base database with variables such as tickers, months, betas of a company (risk measure) etc., and a second database which has an extra variable (let's say market capitalisation). What I want to do then is to merge these 2 databases based on the ticker and the month. Example:

    Base database:
        Ticker  Month  Betas
        AA      4      1.2
        BB      8      1.18

    Second database:
        Ticker  Month  MCAP
        AA      4      8542
        BB      6      1245

    Then after the merge I would like to have something like this:

        Ticker  Month  Betas  MCAP
        AA      4      1.2    8542

    So all observations that do not match BOTH date and ticker have to be dropped. I'm sure this is possible, I just can't find the right type of code. Thanks!
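
    For what it's worth, the inner-join semantics being described (keep only rows that match on both ticker and month) look roughly like this in SQL; SAS's PROC SQL accepts essentially the same query, and the dataset and column names here simply mirror the example above:

        -- Keep only ticker/month combinations present in BOTH datasets
        SELECT b.ticker,
               b.month,
               b.betas,
               s.mcap
        FROM   base AS b
               INNER JOIN second AS s
                       ON  b.ticker = s.ticker
                       AND b.month  = s.month;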

    Read the article

  • Migrating MOSS 2007 from SQL 2000 to SQL 2005 - Addendum

    - by lunacrescens
    This is a continuation of an earlier question I had about moving the databases for a MOSS 2007 installation from SQL 2000 to SQL 2005. Here's the URL for the original question: http://stackoverflow.com/questions/254517/migrating-moss-2007-from-sql-2000-to-sql-2005

    In my test environment, I've successfully moved the databases to the SQL 2005 test machine and things appear to be working fine. But on the "Servers in Farm" page of Central Admin | Operations, it still shows the old (i.e. SQL 2000) server as the Configuration Database Server. Also, it shows the old config database as being the Configuration Database. I know that the SQL 2000 server and old config database (that are showing on this page) are NOT being used, because we've deactivated the SQL instance on SQL 2000. I've tried "removing" the server, and I get a message about "Uninstalling SharePoint products and technologies" being the better route. So, I disconnected from the test databases, uninstalled SharePoint from the test WFE server, and reinstalled it. That didn't do anything. Before uninstalling/reinstalling I also tried simply rerunning the SharePoint Configuration Wizard, and that didn't do anything either.

    Does anyone know how to update the Config Server and Config Database on the "Servers in Farm" page after having moved the Config and Content DBs? Is there something I'm missing or overlooking? Thanks.

    Read the article

  • Storing an object to use in multiple classes

    - by Aaron Sanders
    I am wondering about the best way to store an object in memory that is used in a lot of classes throughout an application. Let me set up my problem for you: We have multiple databases, 1 per customer. We also have a master table, and each row holds detailed information about a database, such as the database name, the server IP it's located on, and a few config settings. I have an application that loops through those multiple databases and runs some updates on them. The settings I mentioned above are loaded into memory on each loop iteration. The application then runs through a series of processes that include multiple classes using this data. The data never changes during the processes, only between loop iterations. The variables are related to a customer, so I have them stored in a customer class. I suppose I could make all of the members shared, or should I use a singleton for the customer class? I've never actually used a singleton, I have only read that they are good in this type of situation. Are there better solutions to this type of scenario? Also, I have plans for this application to be multithreaded later. Sorry if this is confusing. If you have questions, let me know and I will answer them. Thanks for your help.

    Read the article

  • Bidirectional replication update record problem

    - by Mirek
    Hi, I would like to present you my problem related to SQL Server 2005 bidirectional replication. What do I need? My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A; changes should replicate to the second database into a copy of table A. When data on the second server is changed, those changes have to be propagated back to the first server. I am trying to achieve bidirectional transactional replication between two databases on one server, which is running SQL Server 2005. I have managed to set this up using scripts, and established 2 publications and 2 read-only subscriptions with loopback detection. The distribution database is created, publishing on both databases is enabled, and the distributor and publisher are up. We are using some rules to control which records will be replicated, so we need to call our custom stored procedures during replication. So, articles are set to use custom update, insert and delete stored procedures. So far so good, but...

    Everything works fine, changes are replicating, until updates are done on both tables simultaneously or before changes are replicated (and that takes about 3-6 seconds). Both records then end up with different values:

        UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
        UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1

    results in:

        db1.dbo.TestTable  COL = 5
        db2.dbo.TestTable  COL = 4

    But we want to have last-change-wins replication. Please, is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication? I can provide the sample replication script which I am using. I am looking forward to your ideas, Mirek

    Read the article

  • Best practises for Magento Deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirements for changing the DB, so the 3 databases will be manually maintained. Specifically:

    - How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?)
    - What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
    - Do you restrict developers to editing specific files/folders?
    - Do you restrict theme designers to editing specific files/folders?
    - How do you manage database changes?

    Theme Designer Files/Folders. Designers can be restricted to editing the following folders:

        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/

    Extension Developer Files/Folders. Extension developers can edit the following folders/files:

        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml

    Database environment management. As the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:

    - Overriding the base url in PHP.
    - Blog article on setting up dev and staging databases.
    - Changing the base url in the database after copying. (Where is this stored? See the sketch below.)
    - Doing a MySQLDump or backup, then doing a replace on the URL in the SQL file.
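
    To the last two points: in Magento 1.x the base URLs live in the core_config_data table. A rough sketch of resetting them after copying a database to another environment might look like the following; the hostname is an assumption for the example, and the cache should be cleared afterwards:

        -- Inspect the current values first
        SELECT config_id, scope, path, value
        FROM   core_config_data
        WHERE  path IN ('web/unsecure/base_url', 'web/secure/base_url');

        -- Point the copied database at the target environment's hostname
        UPDATE core_config_data
        SET    value = 'http://dev.example.com/'
        WHERE  path IN ('web/unsecure/base_url', 'web/secure/base_url');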

    Read the article

  • SQL Server performance issue.

    - by Jit
    Hi friends, I have been trying to analyze a performance issue with SQL Server 2005. We have 30 jobs, one for each database (30 databases, one per client). The jobs run early in the morning at an interval of 5 minutes. When I run a job individually for testing, for most of the databases it finishes in 7 to 9 minutes. But when these jobs run in the early morning, I see a few jobs taking 2 to 3 hours to finish, while the same job takes a few minutes, as mentioned above, if run independently. We don't have any other job scheduled during that time, other than these 30 jobs. If we restart the server, then for 2 or so days all the jobs finish in a few minutes, but over a period of time (suddenly, from the 3rd day), a few jobs start taking hours to finish. What could be the possible reason for performance degradation over a period of time? I verified all the SPs; we use temp tables, and I made sure no temp table is left undropped at the end of any SP. Let me know what the possible reasons for such behavior are. I appreciate your time and help. Thanks.
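
    Not part of the original thread, but a generic starting point for this kind of diagnosis on SQL Server 2005 is to capture what the slow jobs are actually waiting on while they run; the query below is a hedged sketch using the standard DMVs, not something specific to this environment:

        -- While a slow job is running, see whether its sessions are blocked
        -- and what they are waiting on (run from a separate connection).
        SELECT r.session_id,
               r.status,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time,
               r.command,
               DB_NAME(r.database_id) AS database_name
        FROM   sys.dm_exec_requests AS r
        WHERE  r.session_id > 50;        -- ignore most system sessions

        -- Stale statistics after heavy overnight activity can also cause bad
        -- plans; refreshing them per database is one inexpensive experiment.
        EXEC sp_updatestats;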

    Read the article

  • How do you make life easier for yourself when developing a really large database

    - by Hannes de Jager
    I am busy developing 2 web-based systems with MySQL databases, and the number of tables/views/stored routines is really becoming a lot; it is more and more challenging to handle the complexity. Now, in programming languages we have namespacing, e.g. Java packages and C++ namespaces, to partition the software and group it together to make things more understandable. Databases, on the other hand, have more of a flat structure (MySQL at least); e.g. tables and stored procedures are on the same level. So one has to be more creative: creating naming conventions, perhaps using more than one database, or using tools to visualize things. What methods do you use to ease the pain? To be effective while developing your databases? To not get lost in a sea of tables and fields and stored procs? Feel free to mention tools you use also, but try to restrict it to open source and preferably Linux solutions, if that's OK. By the way, how many tables would a database have to have to be considered large in terms of design?

    Read the article

  • Copying contents of a MySQL table to a table in another (local) database

    - by Philip Eve
    I have two MySQL databases for my site - one is for a production environment and the other, much smaller, is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:

    - TransLanguage - non-English languages
    - TransModule - modules (bundles of phrases for translation, that can be loaded individually by PHP scripts)
    - TransPhrase - individual phrases, in English, for potential translation
    - TranslatedPhrase - translations of phrases that are submitted by volunteers
    - ChosenTranslatedPhrase - screened translations of phrases

    The volunteers who do translation are all working on the production site, as they are regular users. I wanted to create a stored procedure that could be used to synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up to date and prevent "unknown phrase" errors from getting in the way while testing. My first effort was to create the following procedure in the test database:

        CREATE PROCEDURE `SynchroniseTranslations` ()
        LANGUAGE SQL
        NOT DETERMINISTIC
        MODIFIES SQL DATA
        SQL SECURITY DEFINER
        BEGIN
            DELETE FROM `TransLanguage`;
            DELETE FROM `TransModule`;
            INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
            INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
            INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
            INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
        END

    When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried to create the procedure the other way around (that is, to have it exist as part of the data dictionary for the production database rather than the test database). If I do it that way, I get an identical message, except it tells me I'm denied the DELETE command rather than SELECT. I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?
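
    A footnote for readers hitting the same error: with SQL SECURITY DEFINER the procedure runs with the privileges of its defining user, so one thing to double-check is that that user has explicit grants on both schemas. A rough sketch of the kind of grants involved (user, host and database names are placeholders):

        -- Run as an administrative account; names are placeholders
        GRANT SELECT ON `PRODUCTION_DB`.* TO 'username'@'localhost';
        GRANT SELECT, INSERT, DELETE ON `TEST_DB`.* TO 'username'@'localhost';
        FLUSH PRIVILEGES;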

    Read the article

  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application which has been performing the imports, due to the missing manageddts DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (5 min import time compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, and preferably with minimal SQL features installed. Options I've considered:

    - Creating an Access application to connect to both databases (SQL Server and Access) and perform the import (Ugh!)
    - Revisiting ADO.NET to see if the original implementation was poorly written.
    - Updated SSIS packages.

    What other technologies should I be considering for this job? (One more server-side possibility is sketched below.)
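
    One further option worth weighing (not mentioned in the original post, and it runs on the server rather than on a separate machine) is pulling the Access data directly with a linked query. A rough sketch, assuming the .mdb file is reachable from the SQL Server machine and ad hoc distributed queries are allowed; the file path and table names are invented, and on a 64-bit instance the ACE provider would be needed instead of Jet:

        -- Allow ad hoc OPENROWSET/OPENDATASOURCE queries (server-level setting)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
        RECONFIGURE;

        -- Pull a table straight from the Access file into a staging table
        SELECT *
        INTO   dbo.Staging_Customers
        FROM   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                          'C:\Imports\nightly.mdb';'Admin';'',
                          Customers);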

    Read the article

  • Can't diagnose my MySQL root user problem

    - by George Crawford
    Hi all, I have a problem with the MySQL root user in my MySQL setup, and I just can't for the life of me work out how to fix it. It seems that I have somehow messed up the root user, and my access to databases is now very erratic. For reference, I'm using MAMP on OS X to provide the MySQL server. I'm not sure how much that matters, though - I'd guess that whatever I've done will require a command-line fix to solve it.

    I can start MySQL using MAMP as usual, and access databases using the 'standard' users I have created for my PHP apps. However, the root user, which I use in my MySQL GUI client and also in phpMyAdmin, can only access the "information_schema" database, as well as two I have created manually and presumably (and mistakenly) left permissions wide open for. My 15 or so other databases cannot be accessed by the root user. When I load up phpMyAdmin, the home screen says: "Create new database: No Privileges".

    I certainly did at some stage change my root user's password using the MAMP dialog, but I don't remember if I did anything else which might have caused this problem. I've tried changing the password again, and there seems to be no change in the issue. I've also tried resetting the root password using the command line, including starting mysql manually with --skip-grant-tables and then flushing privs, but again, nothing seems to fix the issue. I've come to the end of my ideas, and would very much appreciate some step-by-step advice and diagnosis from one of the experts here! Many thanks for your help.
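
    A generic, hedged diagnostic sequence for this class of problem is to inspect what the server actually thinks root is allowed to do and, if the global privileges really are gone, re-grant them while the server is running with --skip-grant-tables; the exact statements vary by MySQL version, so treat this as a sketch:

        -- See which root accounts exist and what they are allowed to do
        SELECT user, host FROM mysql.user WHERE user = 'root';
        SHOW GRANTS FOR 'root'@'localhost';

        -- If the global privileges are missing, restore them
        -- (with the server started using --skip-grant-tables):
        FLUSH PRIVILEGES;
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
            IDENTIFIED BY 'new-password-here' WITH GRANT OPTION;
        FLUSH PRIVILEGES;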

    Read the article

  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through them one at a time is becoming a problem (too slow). From what I understand, SMO does not support concurrent operations, and my tests have borne that out. There seems to be shared state at the Server object level, for things like DataReader context, which keeps throwing exceptions such as "reader is already open." I apologize for not having the exact exceptions I am getting; I will try to get them and update this post.

    I am no expert on SMO and am just feeling my way through, to be honest. I'm not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you guys do something like this? Am I using the wrong technology with SMO? All I am wanting to do is execute SQL scripts against databases in a single SQL Server instance in parallel. Thanks for any help you can give, Daniel

    Read the article

  • Best Method For High Data Availability for SQL Server

    - by omatase
    I have a web service that runs 24/7. Periodically it needs to refresh its database with data from another web service. There is a lot of data - tens of thousands of rows. (No, I don't mean this is a lot of data for SQL Server; I'm just trying to point out that I expect it to take some time to come down the pipe from the other web service.) The data refresh can take between 5 and 10 minutes. The actual data update portion of that is between 1 and 2 minutes. This means the service would be down, for all intents and purposes, while consumers are requesting this type of data.

    I would like to implement a system where the data is always available. The only thing that comes to mind is some type of system where I maintain two separate databases: I populate the inactive one, swapping it to active before populating the other one. I'm not sure I know the best way to do this. My current ideas just revolve around two sets of the schema in a single database (using views to access the active set) or two databases each with the same schema. The application would rotate between the two databases. Any suggestions from someone who has done something like this before?
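
    One way to make the swap look atomic to the application, sketched here as an assumption rather than anything from the original post, is to keep the application pointed at synonyms (or views) and repoint them once the idle copy has been loaded; the object and database names below are invented for the example:

        -- The application always queries dbo.CurrentOrders; the synonym decides
        -- which copy of the data it actually sees.
        CREATE SYNONYM dbo.CurrentOrders FOR StagingA.dbo.Orders;

        -- Nightly refresh: load the idle copy (StagingB), then flip the synonym.
        -- TRUNCATE/INSERT into StagingB.dbo.Orders happens here ...

        BEGIN TRANSACTION;
            DROP SYNONYM dbo.CurrentOrders;
            CREATE SYNONYM dbo.CurrentOrders FOR StagingB.dbo.Orders;
        COMMIT TRANSACTION;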

    Read the article
