Search Results

Search found 30636 results on 1226 pages for 'database versioning'.


  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some of which I regularly make myself. This post is designed to help beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here].

    ADF Business Components - triggers, default table values and instead-of views

    Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects. This is where unexpected problems can sneak in.

    The crime

    Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

    JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

    ...where x is the primary key value of the row at hand. In a production environment with multiple users this error may be legitimate: one of the other users has updated the row since you queried it. Yet in a development environment this error is just plain confusing. If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users. It must be the phantom ADF developer! [insert dramatic music here]

    The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page:

    A false conclusion

    What can possibly cause this issue if it isn't our phantom ADF developer? Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user? How can our phantom ADF developer even take out a lock if this is the case? Maybe ADF has a bug, maybe ADF isn't implementing record locking at all? Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014?

    Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database.

    First we need to verify ADF's locking strategy. It is determined by the Application Module's jbo.locking.mode property. The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic; the other valid value is pessimistic.

    Next we need a mechanism to check that ADF is issuing the LOCK statements to the database. We could ask the DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console. At runtime this option turns on instrumentation within the ADF BC layer which, among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
    Setting our locking mode to pessimistic, and opening the Business Components Browser or a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries:

    [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
    [422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
    [423] Where binding param 1: 1206

    As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later when we commit the record we see:

    [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
    [442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
    [443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
    [444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
    [445] Update binding param 1: N
    [446] Where binding param 2: 1206
    [447] BookingsView1 notify COMMIT ...
    [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
    [449] EntityCache close prepared statement

    ...and as a result the changes are saved to the database, and the lock is released.

    Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record's CHARGEABLE column again. As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our changes by issuing a commit, the following is recorded in the logs:

    [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
    [510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
    [511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
    [512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
    [513] Where binding param 1: 1205
    [514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
    [515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
    [516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
    [517] Update binding param 1: Y
    [518] Where binding param 2: 1205
    [519] BookingsView1 notify COMMIT ...
    [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
    [521] EntityCache close prepared statement

    Again, even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring on line 512, and released as part of the commit issued on line 519. Therefore with either optimistic or pessimistic locking, a lock is indeed issued.

    Our conclusion at this point must be that unless there's the unlikely chance the LOCK statement is never really hitting the database, or the even less likely chance the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error.

    Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.

    Who is the phantom?

    At this point we'll need to concede that the error message "JBO-25014: Another user has changed" is somehow legitimate, even though we don't understand yet what's causing it.
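    As an aside, the FOR UPDATE NOWAIT behaviour visible in the logs is easy to reproduce in plain SQL, independent of ADF. A minimal sketch, reusing the table and key from the log excerpts above and requiring two separate database sessions:

        -- Session 1: take out the same row lock ADF issues
        SELECT chargeable
        FROM   bookings
        WHERE  booking_no = 1206
        FOR UPDATE NOWAIT;

        -- Session 2: attempting the same lock now fails immediately with
        -- "ORA-00054: resource busy and acquire with NOWAIT specified"
        SELECT chargeable
        FROM   bookings
        WHERE  booking_no = 1206
        FOR UPDATE NOWAIT;

        -- Session 1: committing releases the lock and session 2 can proceed
        COMMIT;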
    This leads onto two further questions: how does ADF know another user has changed the row, and what's been changed anyway?

    To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 "How to Protect Against Losing Simultaneously Updated Data", which details the entity object Change-Indicator property, gives us the answer:

    "At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database."

    The guide further suggests how to make this solution more efficient:

    "You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row. ... To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values."

    We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates. Rather, it keeps a copy of the original record fetched, separate to the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out. If the values don't match, be it via the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error.

    This leaves one last question. Now we know the mechanism under which ADF identifies a changed row; what we don't know is what's changed and who changed it.

    The real culprit

    What's changed? We know the record in the mid-tier has been changed by the user, but ADF doesn't use the changed record in the mid-tier to compare to the database record; rather, it uses a copy of the original record before it was changed. This leaves us to conclude the database record has changed, but how and by whom? There are three potential causes:

    Database triggers

    A database trigger, among other uses, can be configured to fire PL/SQL code on a database table insert, update or delete. In particular, on an insert or update the trigger can override the value assigned to a particular column. The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use is that when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database. However the cached record is instantly out of date, as the database triggers have modified the record that was actually written to the database. Thus when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, detects the record has been changed, and gives us JBO-25014. This is probably the most common cause of this problem.
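    To make this concrete, here is a minimal sketch of the kind of trigger that silently rewrites a column behind ADF's back (the trigger and the LAST_UPDATED column are hypothetical, not from the article):

        -- Stamps an audit column on every insert or update. ADF writes the
        -- row, the trigger alters it, and ADF's cached copy of what it wrote
        -- is immediately stale.
        CREATE OR REPLACE TRIGGER bookings_biu
        BEFORE INSERT OR UPDATE ON bookings
        FOR EACH ROW
        BEGIN
          :NEW.last_updated := SYSDATE;  -- overrides whatever the mid-tier sent
        END;
        /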
    Default values

    A second reason this issue can occur is another database feature: default column values. When creating a database table, the schema designer can define default values for specific columns. For example a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N. Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL; the database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can specifically occur only in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.
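    Again as a concrete sketch (the table is hypothetical): a default only kicks in when the inserted column value is NULL, which is exactly what happens when the corresponding ADF attribute is left unpopulated:

        CREATE TABLE bookings_demo (
          booking_no  NUMBER PRIMARY KEY,
          chargeable  VARCHAR2(1) DEFAULT 'N',  -- applied only when NULL arrives
          created     DATE DEFAULT SYSDATE
        );

        -- The mid-tier inserts NULLs, the database substitutes the defaults,
        -- and the stored row now differs from the copy ADF cached at insert time.
        INSERT INTO bookings_demo (booking_no, chargeable, created)
        VALUES (1, NULL, NULL);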
    Instead-of-trigger views

    I must admit I haven't double-checked this scenario, but it seems plausible: that of the Oracle database's instead-of-trigger view (sometimes referred to as an instead-of view). A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updatable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query, but also a set of programmatic PL/SQL triggers in which the developer can define their own logic for inserts, updates and deletes.

    While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record via the instead-of view, the data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.

    Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are in play), JBO-25014 is quite feasible in a production ADF application if two users modify the same record.

    At this point, under project timeline pressures, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables. However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems. Dropping a database trigger or default value that an existing Oracle Forms application assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application. Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise; not ideal.

    Solving the mystery once and for all

    Luckily ADF has built-in functionality to deal with this issue, which is no surprise, as Oracle, the author of ADF, also built the database, and is fully aware of the Oracle database's feature set: the Refresh After Insert and Refresh After Update properties at the Entity Object attribute level. Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue.

    [Post edit - as per the comment from Oracle's Steven Davelaar below, the above solution will not work for instead-of-trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view]

    Alternatively you can set the Change Indicator on one of the attributes. This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated. In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off, the original issue will return.
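    For the curious, the Refresh After Insert/Update behaviour corresponds to the SQL RETURNING clause mentioned in the post edit above. A minimal sketch of the shape of statement involved (the LAST_UPDATED column and bind names are hypothetical; the real statement is generated by ADF BC):

        -- The database-modified values come back in the same round trip, so
        -- the cached mid-tier record stays in sync with what was actually stored.
        UPDATE bookings
        SET    chargeable = 'N'
        WHERE  booking_no = 1206
        RETURNING chargeable, last_updated INTO :chargeable, :last_updated;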

    Read the article

  • SQL SERVER – Find Most Active Database in SQL Server – DMV – dm_io_virtual_file_stats

    A few days ago, I wrote about SQL SERVER – Find Current Location of Data and Log File of All the Database. There was a very interesting conversation in the comments by blog readers. Blog reader and SQL Expert Sreedhar presented a very interesting DMV which lists the most active database in SQL Server. For quick reference he [...]
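    For context, sys.dm_io_virtual_file_stats is the DMV named in the title. A sketch of the kind of query that ranks databases by I/O activity (my own formulation, not necessarily the exact query from the post):

        -- Total bytes read and written per database since instance startup
        SELECT DB_NAME(vfs.database_id) AS database_name,
               SUM(vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS total_io_bytes
        FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        GROUP  BY vfs.database_id
        ORDER  BY total_io_bytes DESC;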

    Read the article

  • Oracle saves with Oracle Database 11g and Advanced Compression

    - by jenny.gelhausen
    Oracle Corporation runs a centralized eBusiness Suite system on Oracle Database 11g for all its employees around the globe. This clustered Global Single Instance (GSI) has scaled seamlessly with many acquisitions over the years, doubling the number of employees since 2001 and supporting around 100,000 employees today, 24 hours a day, 7 days a week around the world. In this podcast, you'll hear from Raji Mani, IT Director for Oracle's PDIT Group, on how Oracle Database 11g and Advanced Compression is helping to save big on storage costs.

    Read the article

  • Deploying Data Mining Models using Model Export and Import

    - by [email protected]
    In this post, we'll take a look at how Oracle Data Mining facilitates model deployment. After building and testing models, a next step is often putting your data mining model into a production system -- referred to as model deployment. The ability to move data mining model(s) easily into a production system can greatly speed model deployment, and reduce the overall cost. Since Oracle Data Mining provides models as first class database objects, models can be manipulated using familiar database techniques and technology. For example, one or more models can be exported to a flat file, similar to a database table dump file (.dmp). This file can be moved to a different instance of Oracle Database EE, and then imported. All methods for exporting and importing models are based on Oracle Data Pump technology and found in the DBMS_DATA_MINING package.

    Before performing the actual export or import, a directory object must be created. A directory object is a logical name in the database for a physical directory on the host computer. Read/write access to a directory object is necessary to access the host computer file system from within Oracle Database. For our example, we'll work in the DMUSER schema. First, DMUSER requires the privilege to create any directory. This is often granted through the sysdba account.

        grant create any directory to dmuser;

    Now, DMUSER can create the directory object specifying the path where the exported model file (.dmp) should be placed. In this case, on a linux machine, we have the directory /scratch/oracle.

        CREATE OR REPLACE DIRECTORY dmdir AS '/scratch/oracle';

    If you aren't sure of the exact name of the model or models to export, you can find the list of models using the following query:

        select model_name from user_mining_models;

    There are several options when exporting models. We can export a single model, multiple models, or all models in a schema using the following procedure calls:

        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODEL.dmp','dmdir','name =''MY_DT_MODEL''');
        END;

        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODELS.dmp','dmdir',
                     'name IN (''MY_DT_MODEL'',''MY_KM_MODEL'')');
        END;

        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('ALL_DMUSER_MODELS.dmp','dmdir');
        END;

    A .dmp file can be imported into another schema or database using the following procedure call, for example:

        BEGIN
          DBMS_DATA_MINING.IMPORT_MODEL('MY_MODELS.dmp', 'dmdir');
        END;

    As with models from any data mining tool, when moving a model from one environment to another, care needs to be taken to ensure the transformations that prepare the data for model building are matched (with appropriate parameters and statistics) in the system where the model is deployed. Oracle Data Mining provides automatic data preparation (ADP) and embedded data preparation (EDP) to reduce, or possibly eliminate, the need to explicitly transport transformations with the model. In the case of ADP, ODM automatically prepares the data and includes the necessary transformations in the model itself. In the case of EDP, users can associate their own transformations with attributes of a model. These transformations are automatically applied when applying the model to data, i.e., scoring. Exporting and importing a model with ADP or EDP results in these transformations being immediately available with the model in the production system.
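    One step the walkthrough leaves implicit (an assumption on my part, since the creator of a directory object is granted access to it automatically): if the export or import runs as a different user from the one who created the directory object, that user needs explicit access to it:

        -- Allow another schema to read and write files via the directory object
        GRANT READ, WRITE ON DIRECTORY dmdir TO some_other_user;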

    Read the article

  • Entity Framework Code First: Get Entities From Local Cache or the Database

    - by Ricardo Peres
    Entity Framework Code First makes it very easy to access the local (first level) cache: you just access the DbSet<T>.Local property. This way, no query is sent to the database; the search is performed only on already-loaded entities. If you want to search the local cache first, and then the database if no entries are found, you can use this extension method:

    using System;
    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;
    using System.Linq.Expressions;

    public static class DbContextExtensions
    {
        public static IQueryable<T> LocalOrDatabase<T>(this DbContext context, Expression<Func<T, Boolean>> expression) where T : class
        {
            // First run the predicate over the entities already tracked in the first-level cache
            IEnumerable<T> localResults = context.Set<T>().Local.Where(expression.Compile());

            if (localResults.Any() == true)
            {
                return (localResults.AsQueryable());
            }

            // Nothing found locally: let the database evaluate the same predicate
            IQueryable<T> databaseResults = context.Set<T>().Where(expression);

            return (databaseResults);
        }
    }

    Note that if any matching entities are found in the local cache, the database is never consulted, so rows that match only in the database will not be returned.

    Read the article

  • Complex knowledge management system with CRM... written internally

    - by JonH
    We've all heard of Salesforce and SugarCRM and the likes of systems like this. Unfortunately at my workplace we have been asked to write a similar system (rather than license or purchase one). Basically the database is fairly large. Think of modules such as: corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one to many customers. A program has one or more projects. A project has one or more sub-projects. And an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple. In any event, the system in its current state has only two resources working on it (basically we have to do it all: CSS, database, jQuery, ASP.NET and C#). We've started off well by defining the UI master and footer pages, that way we can reuse those across all of our pages. Now comes the hard part. The system will have about 4k end users with say 5-10% being concurrent users. We are wondering if it makes sense to cache our database data (for say 5-10 minutes) rather than continuously hit our database. The reason being is that some of these pages may have 5-10 search filters associated with the page. Imagine how many database hits occur every time a selection is made in a search box. Also some of these search fields cascade, so selecting, for instance, an initial drop down may cascade to several drop down boxes under it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember the system is similar to a CRM system where we manage our various customers, projects, sub-projects, issues, etc.

    Read the article

  • What's the entry path towards a database administrator job?

    - by FarmBoy
    I've recently lost my job, and I'm working towards changing vocations. My degrees are in Mathematics, but I'm interested in IT, particularly working as a DBA or a programmer. I don't have IT experience, but I have the resources to be patient with the transition, and I'm currently learning SQL and Java. Obviously, I need some job experience. My question is this: What entry-level jobs might allow me to gain useful experience towards obtaining a DBA job? It seems to me that programmers often start as testers, and system administrators could start at a help-desk position, but it is unclear how one begins to work with a company's database.

    Read the article

  • OVM Templates: Oracle Solaris Container with Oracle Database 11gR2

    - by Roman Ivanov
    I am delighted to inform you that Oracle just made available new Oracle Solaris Virtual Machine (VM) Templates: Oracle Solaris Container with Oracle Database 11gR2. These VM Templates are available for SPARC and x86 platforms. Both Oracle VM Templates are based on encapsulating an Oracle Solaris 10 Container, which can then be attached to a SPARC or x86 system running Oracle Solaris 10 10/09 or later. Make sure you select the correct SPARC or x86 platform. The download includes an Oracle Solaris 10 10/09 Container with Oracle Database 11gR2 pre-installed in the Container.

    Read the article

  • Latest Security Inside Out Newsletter Now Available

    - by Troy Kitch
    The September/October edition of the Security Inside Out Newsletter is now available. Learn about Oracle OpenWorld database security sessions, hands-on labs, and demos you'll want to attend, as well as frequently asked questions about Label-Based Access Controls in Oracle Database 11g. Subscribe here for the bi-monthly newsletter. ...and if you haven't already done so, join Oracle Database on these social networks: Twitter, Facebook, LinkedIn, Google+

    Read the article

  • SQL SERVER – Quick Note of Database Mirroring

    Just a day ago, I was invited to a Round Table meeting at a prestigious organization. They were planning to implement a High Availability solution using Database Mirroring. During the meeting, I made a few notes of what was being discussed there. I just thought it would be interesting for all of you to know about it. Database Mirroring works [...]

    Read the article

  • SQL Server Maintenence Plan error on an offline Database

    - by Sean Earp
    Today is SQL day for me :) I have a maintenance plan that is failing to run with the following error: Failed:(-1073548784) Executing the query "USE [SharedServices1_DB]" failed with the following error: "Database 'SharedServices1_DB' cannot be opened because it is offline.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. where SharedServices1_DB is a database that is set to offline. I would like to exclude this database from the maintenance plan, but when the database is offline, it does not show up at all as a "specific database" in the maintenance plan task, and if I bring it online, it is already unchecked in the maintenance plan task. How can I exclude an offline database from a maintenance plan?
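    Not part of the original question, but a common workaround sketch: instead of a maintenance plan task that targets specific databases, drive the work from a T-SQL script step that enumerates only online databases, so offline ones are skipped automatically (the maintenance body here is a placeholder):

        -- Build and run maintenance commands only for databases currently online
        DECLARE @sql nvarchar(max) = N'';

        SELECT @sql += N'USE ' + QUOTENAME(name) + N'; /* maintenance work here */' + CHAR(10)
        FROM   sys.databases
        WHERE  state_desc = N'ONLINE';

        EXEC sp_executesql @sql;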

    Read the article

  • Impact of Truncate or Drop Table When Flashback Database is Enabled

    - by alejandro.vargas
    Recently I was working on a VLDB on the implementation of a disaster recovery environment configured with Data Guard physical standby and fast-start failover. One of the questions that came up was about the overhead of truncating and dropping tables. There are daily jobs on the database that truncate extremely large partitions, and as note 565535.1 explains, we knew there is an overhead for these operations. But the information in the note was not clear enough, so with the additional information I got from senior Oracle colleagues I compiled this document, "Impact of Truncate or Drop Table When Flashback Database is Enabled", which further explains the case.

    Read the article

  • Should a database server be in a different VM instance than the application?

    - by orokusaki
    I'm setting up a database server as a separate VM in my server so that I can control resources, and make backups of just that instance. I own a server that will reside in a colo soon. Is this the best way to approach my DB regarding scalability? Are there any security concerns? Do I listen on localhost still, even though it's a separate instance? And, is there any benefit to running your DB (PostgreSQL in my case) on the same machine as your application (a web-based SaaS application in my case)?

    Read the article

  • Improved Database Threat Management with Oracle Audit Vault and ArcSight ESM

    - by roxana.bradescu
    Data represents one of the most valuable assets in any organization, making databases the primary target of today's attacks. It is important that organizations adopt a database security defense-in-depth approach that includes data encryption and masking, access control for privileged users and applications, activity monitoring and auditing. With Oracle Audit Vault, organizations can reliably monitor database activity enterprise-wide and alert on any security policy exceptions. The new integration between Oracle Audit Vault and ArcSight Enterprise Security Manager, allows organizations to take advantage of enterprise-wide, real-time event aggregation, correlation and response to attacks against their databases. Join us for this live SANS Tool Talk event to learn more about this new joint solution and real-world attack scenarios that can now be quickly detected and thwarted.

    Read the article

  • SharePoint Content Database Sizing

    - by Sahil Malik
    SharePoint stores the majority of its content in SQL Server databases. Many of these databases are concerned with the overall configuration of the system, or managed services support. The majority, however, are those that accept uploaded or collaborative content. These databases need to be sized with various factors in mind, such as: the ability to backup/restore the content quickly, thereby allowing for quicker SLAs and isolation in the event of database failure; and the fact that SharePoint as a system avoids SQL transactions in many instances. It does so to avoid locks, but at the cost of resultant orphan data or possible data corruption. Larger databases are known to have more orphan items than smaller ones, and smaller databases keep the problems isolated. As a result, it is very important for any project to perform content database sizing estimation upfront. This is especially important in collaborative, document-centric projects. Not doing this upfront planning can ...

    Read full article ....

    Read the article

  • SQL SERVER – Script to Update a Specific Column in Entire Database

    - by Pinal Dave
    Last week, I received a very interesting question in email, and I really liked it, as I had to play around with a SQL script for a while to come up with the answer the reader was looking for. Please read the question; I believe all of us face this kind of situation.

    "Pinal, in our database we have recently introduced a ModifiedDate column in all of the tables. From now on, whenever an update happens to a row, we update that field with the current date and time. Now here is the issue: when we added that field we did not update it with a default value, because we were not sure when we would go live with the system, so we let it be NULL. The modification to the application went live yesterday and we are now updating this field. Here is where I need your help. We need to update all the tables in our database where we have created the ModifiedDate column, and update it with the current datetime. As our system has been live since yesterday, there are several thousands of rows which are already updated with real-world values, so we do not want to update those. Essentially, wherever there is a ModifiedDate column in our entire database, if it is NULL, we want to update it with the current date time. Do you have a script for it?"

    Honestly I did not have such a script. This is a very specific requirement, but I was able to come up with two different methods he could use.

    Method 1: Using INFORMATION_SCHEMA

        SELECT 'UPDATE ' + T.TABLE_SCHEMA + '.' + T.TABLE_NAME +
               ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;'
        FROM INFORMATION_SCHEMA.TABLES T
        INNER JOIN INFORMATION_SCHEMA.COLUMNS C
            ON T.TABLE_NAME = C.TABLE_NAME AND C.COLUMN_NAME = 'ModifiedDate'
        WHERE T.TABLE_TYPE = 'BASE TABLE'
        ORDER BY T.TABLE_SCHEMA, T.TABLE_NAME;

    Method 2: Using DMV

        SELECT 'UPDATE ' + SCHEMA_NAME(t.schema_id) + '.' + t.name +
               ' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;'
        FROM sys.tables AS t
        INNER JOIN sys.columns c
            ON t.OBJECT_ID = c.OBJECT_ID
        WHERE c.name = 'ModifiedDate'
        ORDER BY SCHEMA_NAME(t.schema_id), t.name;

    The above scripts will generate the UPDATE statements that do the task which was asked. We can adapt the same approach to generate pretty much any other statement as well.

    Click to Download Scripts

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
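    If you'd rather execute the generated statements in one go instead of copying them out, a hedged sketch of the same idea wrapped with sp_executesql (my own wrapper, not from the post):

        DECLARE @sql nvarchar(max) = N'';

        -- Concatenate one UPDATE per table that has a ModifiedDate column
        SELECT @sql += N'UPDATE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name) +
                       N' SET ModifiedDate = GETDATE() WHERE ModifiedDate IS NULL;' + CHAR(10)
        FROM sys.tables AS t
        INNER JOIN sys.columns AS c ON t.object_id = c.object_id
        WHERE c.name = N'ModifiedDate';

        EXEC sp_executesql @sql;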

    Read the article

  • Does tempdb Get Recreated From model at Startup?

    - by Jonathan Kehayias
    In my last post, Does the tempdb Log file get Zero Initialized at Startup?, I questioned whether or not tempdb is actually created from the model database at startup. There is actually an easy way to prove that this statement, at least internally to the tempdb database, is in fact TRUE. Many thanks go out to Bob Ward (Blog | Twitter) for pointing this out after trading emails with him. To validate that tempdb is actually copied at startup from the model database, all that is necessary...(read more)

    Read the article

  • How to create an Access database by using ADOX and Visual C# .NET

    - by SAMIR BHOGAYTA
    Build an Access Database
    1. Open a new Visual C# .NET console application.
    2. In Solution Explorer, right-click the References node and select Add Reference.
    3. On the COM tab, select Microsoft ADO Ext. 2.7 for DDL and Security, click Select to add it to the Selected Components, and then click OK.
    4. Delete all of the code from the code window for Class1.cs.
    5. Paste the following code into the code window:

    using System;
    using ADOX; // COM interop: Microsoft ADO Ext. 2.7 for DDL and Security

    class Class1
    {
        static void Main()
        {
            // Create a new Jet (Access) database at the given path
            ADOX.CatalogClass cat = new ADOX.CatalogClass();
            cat.Create("Provider=Microsoft.Jet.OLEDB.4.0;" +
                       "Data Source=D:\\NewMDB.mdb;" +
                       "Jet OLEDB:Engine Type=5");
            Console.WriteLine("Database Created Successfully");
            cat = null;
        }
    }

    Read the article

  • Is there a "rigorous" method for choosing a database?

    - by Andrew Martin
    I'm not experienced with NoSQL, but one person on my team is calling for its use. I believe our data and its usage aren't optimal for a NoSQL implementation. However, my understanding is based on reading various threads on various websites. I'd like to get some stronger evidence as to who's correct. My question is therefore: "Is there a technique for estimating the performance and requirements of a certain database, that I could use to confirm or modify my intuitions?" Is there, for example, a good book for calculating the performance of equivalent MongoDB/MySQL schemas? Is the only really reliable option to build the whole thing and take metrics?

    Read the article

  • Database checksum features - redundant? useful?

    - by Eloff
    Just about every mainstream DB has a feature to calculate checksums per page, per sector, or per record. Now for a DB that does a full recovery after any crash, like PostgreSQL, is a checksum even useful? There will be no data loss as long as the xlog is OK, no matter what kind of corruption happened to the data itself: as the redo log is replayed, every committed transaction will be restored. So checksums are useless on restore. Doesn't the filesystem or disk keep checksums anyway to detect corruption? So unless the checksum is per record, all it does is tell you there is corruption, which the OS should be yelling at you about the minute you try to read it, so it's useless in operation? I can't imagine how a checksum can be helpful in any sane database, but since they all use them, I'd say that's just failure of imagination on my part. So how is it useful?
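    For anyone wanting to check their own cluster while thinking this through: in PostgreSQL (9.3 and later; this detail is from general PostgreSQL knowledge, not the question above), page checksums are off by default and are chosen when the cluster is initialized:

        -- Show whether the cluster was initialized with page checksums
        SHOW data_checksums;

        -- Checksums are enabled cluster-wide at initialization time:
        --   initdb --data-checksums -D /path/to/datadir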

    Read the article

  • Oracle Database In-Memory: Launch in Frankfurt

    - by Carsten Czarski
    This time there is something old ... and something new. First, the new: On June 11, Larry Ellison will unveil the new, groundbreaking Oracle Database In-Memory functionality in Redwood Shores. With this new technology, customers benefit from accelerated database performance for analytics, data warehousing, reporting and online transaction processing (OLTP). Only 6 days later, on June 17, the only European launch event takes place in Frankfurt. Alongside technical presentations, a panel session and demos, a talk by Andy Mendelsohn, Head of Database Product Development, is planned. Register today. And here is the old: Who still remembers HTML DB ...? In the archives of the APEX Community site we found a video showing how pages in HTML DB could be locked against other developers. That still exists today, by the way; it just looks a little different. Enjoy watching.

    Read the article
