Search Results

Search found 955 results on 39 pages for 'inserts'.


  • Logging in with sha1() encryption.

    - by Samir Ghobril
    Hey guys, I added this to my sign-up code:

        $password = mysql_real_escape_string(sha1($_POST['password']));

    and now it inserts the hashed password into the database. But signing in doesn't seem to work anymore. Here is the login code:

        function checklogin($username, $password){
            global $mysqli;
            $password = sha1($password);
            $result = $mysqli->prepare("SELECT * FROM users WHERE username = ? and password = ?");
            $result->bind_param("ss", $username, $password);
            $result->execute();
            if($result != false){
                $dbArray = $result->fetch();
                if(!$dbArray){
                    echo '<p class="statusmsg">The username or password you entered is incorrect, or you haven\'t yet activated your account. Please try again.</p><br/><input class="submitButton" type="button" value="Retry" onClick="location.href='."'login.php'\">";
                    return;
                }
                $_SESSION['username'] = $username;
                if(isset($_POST['remember'])){
                    setcookie("jmuser", $username, time()+60*60*24*356);
                    setcookie("jmpass", $password, time()+60*60*24*356);
                }
                redirect();
            }
        }
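
    One thing worth checking with a setup like this is whether the password column is wide enough to store the full 40-character hex digest that sha1() produces; a hash truncated at sign-up will never match the value computed at login. A minimal diagnostic sketch, assuming the users table and password column named in the question:

        -- SHA-1 in hex form is always 40 characters; anything shorter was truncated on INSERT
        SELECT username, CHAR_LENGTH(password) AS stored_length
        FROM users
        WHERE CHAR_LENGTH(password) <> 40;

        -- and confirm the column definition itself
        SHOW COLUMNS FROM users LIKE 'password';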

    Read the article

  • MS SQL - High performance data inserting with stored procedures

    - by Marks
    Hi. I'm searching for a very high-performance way to insert data into an MS SQL database. The data is a (relatively big) construct of objects with relations. For security reasons I want to use stored procedures instead of direct table access. Let's say I have a structure like this:

        Document
            MetaData
                User
                Device
            Content
                ContentItem[0]
                    SubItem[0]
                    SubItem[1]
                    SubItem[2]
                ContentItem[1]
                    ...
                ContentItem[2]
                    ...

    Right now I am thinking of creating one big query, doing something like this (just pseudo-code):

        EXEC @DeviceID = CreateDevice ...;
        EXEC @UserID = CreateUser ...;
        EXEC @DocID = CreateDocument @DeviceID, @UserID, ...;
        EXEC @ItemID = CreateItem @DocID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        ...

    But is this the best solution for performance? If not, what would be better? Split it into more queries? Pass all the data to one big stored procedure to reduce the size of the query? Any other performance tips? I also thought of passing multiple items to one stored procedure, but I don't think it's possible to pass a non-static number of items to a stored procedure. Since INSERT INTO A VALUES (B,C),(C,D),(E,F) performs better than three single inserts, I thought I could gain some performance there. Thanks for any hints, Marks
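
    For reference, SQL Server 2008 and later can accept a variable number of rows in a single stored procedure call through table-valued parameters, which avoids both string-built batches and per-row EXEC calls. A minimal sketch, with hypothetical type, table and column names since the question's schema is only pseudo-code:

        -- a table type describing one sub-item row (hypothetical columns)
        CREATE TYPE dbo.SubItemList AS TABLE
        (
            ItemID  int           NOT NULL,
            Payload nvarchar(400) NULL
        );
        GO

        -- one call inserts any number of rows, set-based
        CREATE PROCEDURE dbo.CreateSubItems
            @SubItems dbo.SubItemList READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.SubItem (ItemID, Payload)
            SELECT ItemID, Payload
            FROM @SubItems;
        END
        GO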

    Read the article

  • How do I introspect on a SQL Server?

    - by MetaHyperBolic
    I have a server with a vendor application which is heavily database-reliant. I need to make some minor changes to the data in a few tables in the database in an automated fashion. Just INSERTs and UPDATEs, nothing fancy. Vendors being vendors, I can never be quite sure when they change the schema of a database during upgrade. To that end, how do I ask the SQL server, in some scriptable fashion, "Hey, does this table still exist? Yeah, cool, okay, but does it have this column? What's the data type and size on that? Is it nullable? Could you give me a list of tables? In this table, could you give me a list of columns? Any primary keys there?" I do not need to do this for the whole schema, only part of it, just a quick check of the database before I launch into things. We have Microsoft SQL Server 2005 on it currently, but it might easily move to Microsoft SQL Server 2008. I am probably not using the correct terminology when searching. I do know that ORM is not only too much overhead for this sort of thing, but also that I have no chance of pitching it to my coworkers.
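
    For reference, this kind of check can be scripted against the standard catalog views, which behave the same on SQL Server 2005 and 2008; a short sketch, with a placeholder table name:

        -- does the table still exist?
        SELECT 1
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_NAME = 'SomeVendorTable';

        -- columns, data types, sizes and nullability
        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'SomeVendorTable';

        -- primary key columns
        SELECT kcu.COLUMN_NAME
        FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
          ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
        WHERE tc.TABLE_NAME = 'SomeVendorTable'
          AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY';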

    Read the article

  • Suitable web framework for the following scenario

    - by Paralife
    I have the following scenario: I have a view on an Oracle server, and all I want is to show that view in a web browser, along with an input field or two for basic filtering. No users, no authentication, just this view, maybe with a column or two linking to a second page for master-detail viewing. The children are just string descriptions of the columns of the master that contain IDs. No inserts or updates. The question is: which Java-based web framework of choice can accomplish the above with the minimum amount of

        1. code lines
        2. code time (subjective, but also somewhat objective if someone has experience with more than one or two frameworks)
        3. configuration effort
        4. deployment effort and requirements
        5. dependencies and memory footprint

    Also: 6. Oracle APEX is not an option. Points 3, 4 and 5 are maybe the same, in the sense that they are everything except the functionality coding. I want something that I can compile, deploy by just FTPing to the database host, run and forget. (For the deployment aspect, the Hudson way comes to mind: java -jar hudson.war and that's all.) Also: 3 and 4 have priority over 1 and 2. (Explanation with a rant: I don't mind coding a lot as long as it is application code and not "why the fuck do we still use javascript over http for everything" code.) Thanks.

    Read the article

  • Strange behaviour of DataTable with DataGridView

    - by Paul
    Please explain what is happening. I have created a WinForms .NET application which has a DataGridView on a form and should update the database when DataGridView inline editing is used. The form has an SqlDataAdapter _da with four SqlCommands bound to it. The DataGridView is bound directly to DataTable _names. This CellValueChanged handler:

        private void dataGridView1_CellValueChanged(object sender, DataGridViewCellEventArgs e)
        {
            _da.Update(_names);
        }

    does not update the database state, although the _names DataTable is updated. All the rows of _names have RowState == DataRowState.Unchanged. OK, I modified the handler:

        private void dataGridView1_CellValueChanged(object sender, DataGridViewCellEventArgs e)
        {
            DataRow row = _names.Rows[e.RowIndex];
            row.BeginEdit();
            row.EndEdit();
            _da.Update(_names);
        }

    This variant really writes the modified cell to the database, but when I attempt to insert a new row into the grid, I get an error about the absence of a row with index e.RowIndex. So, I decided to improve the handler further:

        private void dataGridView1_CellValueChanged(object sender, DataGridViewCellEventArgs e)
        {
            if (_names.Rows.Count < e.RowIndex)
            {
                DataRow row = _names.Rows[e.RowIndex];
                row.BeginEdit();
                row.EndEdit();
            }
            else
            {
                DataRow row = _names.NewRow();
                row["NameText"] = dataGridView1["NameText", e.RowIndex].Value;
                _names.Rows.Add(row);
            }
            _da.Update(_names);
        }

    Now the really strange things happen when I insert a new row into the grid: the grid remains what it was until _names.Rows.Add(row); After this line THREE rows are inserted into the table - two rows with the same value and one with a Null value. The slightly modified code:

        DataRow row = _names.NewRow();
        row["NameText"] = "--------------";
        _names.Rows.Add(row);

    inserts three rows with three different values: one as entered into the grid, the second with the "--------------" value and the third with a Null value. I really got stuck guessing what is happening.

    Read the article

  • javascript/html/php: Is it possible to insert a value into a form with Ajax?

    - by user1260310
    I have a form where users enter some text manually. Then I'd like to let the users choose a tag from a database through AJAX, not unlike how tag suggestions appear in the SO question form. While the div where the AJAX call places the tag is inside the form, it does not seem to register, and the tag is not picked up by the form. Am I missing something in my code, is this impossible, or, if it is impossible, is there a better way to do this? Thanks for any suggestions. Here is the code.

    HTML:

        <form method="post" action="enterdata.php">
          <input type="text" name="text">Enter text here.
          <div id="inserttags"></div>
          <a href="javascript:void(0);" onclick="getTags();">Get tags</a>
          <form type="button" name="submit" value="Enter Text and Tag">
        </form>

    JavaScript:

        getTags() {
          // various Ajax goes here, then
          // the following line inserts the value into the div of the html document
          document.getElementById("inserttags").innerHTML = xmlhttp.responseText;
          // a bit more ajax, then the following pulls the tag from the db
          xmlhttp.open("GET", "findtags.php", true);
          xmlhttp.send();
        } // end function

    PHP (gettags.php):

        // first pull the tag from the db. Then:
        echo 'input type="text" name="tag" value= "html">Enter tag';
        // the above output gets inserted in the div and is visible on the page.

    Though the above output is visible on the page, the form does not seem to pick it up when you click "Enter Text and Tag" to submit the form.

    Read the article

  • Data Structure / Hash Function to link Sets of Ints to Value

    - by Gaminic
    Given n integer IDs, I wish to link all possible sets of up to k IDs to a constant value. What I'm looking for is a way to translate sets (e.g. {1, 5}, {1, 3, 5} and {1, 2, 3, 4, 5, 6, 7}) to unique values. Guarantees: n < 100 and k < 10 (again: set sizes will range in [1, k]). The order of IDs doesn't matter: {1, 5} == {5, 1}. All combinations are possible, but some may be excluded. All sets and values are constant and made only once. No deletes or inserts, no value updates. Once generated, the only operations taking place will be look-ups. Look-ups will be frequent and one-directional (given a set, look up its value). There is no need to sort (or otherwise organize) the values. Additionally, it would be nice (but not obligatory) if "neighboring" sets (drop one ID, add one ID, swap one ID, etc.) were easy to reach, as well as "all sets that include at least this set". Any ideas?
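
    One classical encoding that fits these bounds is the combinatorial number system, which maps each sorted set to a unique integer that can then index a flat lookup table; a sketch in LaTeX, under the assumption that each set is kept sorted as c_1 < c_2 < ... < c_m with m <= k:

        N(\{c_1 < c_2 < \dots < c_m\}) \;=\; \sum_{i=1}^{m} \binom{c_i}{i},
        \qquad \text{lookup key} = (m,\, N)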

    Read the article

  • How to lock a transaction for reading a row and then inserting in Hibernate?

    - by at
    I have a table with a name and a name_count. When I insert a new record, I first check what the maximum name_count is for that name, and I then insert the record with that maximum + 1. Works great... except that with MySQL 5.1 and Hibernate 3.5, by default the reads don't respect transaction boundaries. Two of these inserts for the same name could happen at the same time and end up with the same name_count, which completely screws my application! Unfortunately, there are some specific situations where the above is actually fairly common. So what do I do? I assume I can do a pessimistic lock, where any row I read is locked for further reading until I commit or roll back my transaction. Or I can do an optimistic lock with a version column that automatically keeps retrying until there are no conflicts? What's the best approach for my situation, and how do I specify it in Hibernate 3.5 and MySQL 5.1? The above table is massive and accessed frequently.
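
    For context, the pessimistic option described here corresponds, at the MySQL/InnoDB level, to a locking read inside the inserting transaction; a minimal sketch using the question's name/name_count columns and a hypothetical table name:

        START TRANSACTION;

        -- locking read: a second transaction issuing the same statement for the same
        -- name blocks here until this one commits or rolls back
        -- (note: if no row exists for the name yet, there is nothing to lock)
        SELECT name_count
        FROM names
        WHERE name = 'example'
        ORDER BY name_count DESC
        LIMIT 1
        FOR UPDATE;

        -- the application computes max + 1 and inserts it
        INSERT INTO names (name, name_count) VALUES ('example', 4);

        COMMIT;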

    Read the article

  • Is there an Easier way to Get a 3 deep Panel Control from a Form in order to add a new Control to it programmatically?

    - by Mark Sweetman
    I have a VB Windows program created by someone else. It was written so that anyone could add to the functionality of the program through class libraries (DLL files), which the program calls plugins. The plugin I am creating is a C# class library (.dll). This specific plugin adds a simple date/time clock, in the form of a Label, and inserts it into a Panel that is three deep. The code I have I have tested, and it works. My question is this: is there a better way to do it? For instance, I use Controls.Find three different times; each time I know which Panel I am looking for, and there will only be a single Panel in the returned Control[] array. So I am doing a foreach over an array that only holds a single element, three different times. Now, like I said, the code works and does as I expected it to. It just seems overly redundant, and I'm wondering if there could be a performance issue. Here is the code:

        foreach (Control p0 in mDesigner.Controls)
            if (p0.Name == "Panel1")
            {
                Control panel1 = (Control)p0;
                Control[] controls = panel1.Controls.Find("Panel2", true);
                foreach (Control p1 in controls)
                    if (p1.Name == "Panel2")
                    {
                        Control panel2 = (Control)p1;
                        Control[] controls1 = panel2.Controls.Find("Panel3", true);
                        foreach (Control p2 in controls1)
                            if (p2.Name == "Panel3")
                            {
                                Control panel3 = (Control)p2;
                                panel3.Controls.Add(clock);
                            }
                    }
            }

    Read the article

  • Inline horizontal spacer in HTML

    - by LeafStorm
    I am making a Web page with a slideshow, using the same technique used on http://zine.pocoo.org/. The person I'm making the site for wants the slideshow to be centered. However, some of the photos are portrait layout and some are landscape. (This was not my choice.) I need a position: absolute to get the li's containing the items in the right place, so centering them does not work. (At least, not by normal methods.) So, I thought that it might work to insert a 124-pixel "spacer" before the image on the portrait pictures. I tried it with a <span style="width: 124px;">&nbsp;</span>, but it only inserts a single space, not the full 124 pixels. The slideshow fades in and out OK, though, so I think that it would work if I could get the proper spacing. My question is this: does anyone know a way to have 124px of space inline in HTML (preferably without using images), or another way to center the pictures in the li items?

    Read the article

  • Runtime Error: "Out of Memory" From Excel Macro

    - by user356180
    I have one macro, which is called when a cell change occurs. This macro selects images, deletes them, and inserts another image depending on a cell value, using the following code. I have the same code for two sheets.

        Private Sub Worksheet_SelectionChange(ByVal Target As Range)
            ActiveSheet.Shapes.SelectAll
            Selection.Delete
            'insert image code here.
        End Sub

    In one sheet it works perfectly fine and deletes all images, while in the other sheet it gives me the runtime error "Out of Memory" and highlights the following line:

        ActiveSheet.Shapes.SelectAll

    Can anyone tell me why this is happening? It works perfectly fine in one sheet and not in the other. One other thing I want to mention: it was working fine when I gave this Excel macro to my client; both sheets were working fine. Suddenly, after two days, he started getting the error on the one sheet he was working in a lot. Can anyone tell me what the reason for this is and how I can solve it?

    Read the article

  • syntax for MySQL INSERT with an array of columns

    - by Mike_Laird
    I'm new to PHP and MySQL query construction. I have a processor for a large form. A few fields are required; most fields are optional for the user. In my case, the HTML ids and the MySQL column names are identical. I've found tutorials about using arrays to convert $_POST into the fields and values for INSERT INTO, but I can't get them working - after many hours. I've stepped back to make a very simple INSERT using arrays and variables, but I'm still stumped. The following line works and INSERTs 5 items into a database with over 100 columns. The first 4 items are strings; the 5th item, monthlyRental, is an integer.

        $query = "INSERT INTO `$table` (country, stateProvince, city3, city3Geocode, monthlyRental)
                  VALUES ( '$country', '$stateProvince', '$city3', '$city3Geocode', '$monthlyRental')";

    When I make an array for the fields and use it, as follows:

        $colsx = array('country,', 'stateProvince,', 'city3,', 'city3Geocode,', 'monthlyRental');
        $query = "INSERT INTO `$table` ('$colsx')
                  VALUES ( '$country', '$stateProvince', '$city3', '$city3Geocode', '$monthlyRental')";

    I get a MySQL error - check the manual that corresponds to your MySQL server version for the right syntax to use near ''Array') VALUES ( 'US', 'New York', 'Fairport, Monroe County, New York', '(43.09)' at line 1. I get this error whether the array items have commas inside the single quotes or not. I've done a lot of reading and tried many combinations and I can't get it. I want to see the proper syntax on a small scale before I go back to foreach expressions to process $_POST, where both the fields and values are arrays. And yes, I know I should use mysql_real_escape_string, but that is an easy later step in the foreach. Lastly, some clues about the syntax for an array of values would be helpful, particularly if it is different from the fields array. I know I need to add a null as the first array item to trigger the MySQL autoincrement id. What else? I'm pretty new, so please be specific.
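
    For reference, the statement the PHP ultimately needs to produce is just a comma-separated column list and a matching value list; a sketch with an illustrative table name and values, showing that the auto-increment id can either be omitted or passed as NULL:

        INSERT INTO listings (country, stateProvince, city3, city3Geocode, monthlyRental)
        VALUES ('US', 'New York', 'Fairport, Monroe County, New York', '43.09,-77.44', 750);

        -- equivalent, with the auto-increment id written explicitly
        INSERT INTO listings (id, country, stateProvince, city3, city3Geocode, monthlyRental)
        VALUES (NULL, 'US', 'New York', 'Fairport, Monroe County, New York', '43.09,-77.44', 750);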

    Read the article

  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows:

        Metadata
            ResultID        (PK, int, not null)
            MappedFieldname (char(50), not null)
            Fieldname       (PK, char(50), not null)
            Fieldvalue      (text, null)

    There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each when data is being "processed". This results in many non-sequential inserts. Later after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%. In the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As far as my background, I'm a college student working part time at this company. My question is this, is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this type and similar ad-hoc DBA tasks?
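
    As an aside on the maintenance angle, heavy non-sequential inserts into a clustered index cause page splits, and a rebuild with a lower fill factor is one common way to leave room for them; a hedged T-SQL sketch against the table described above (the 80% figure is only an example):

        -- rebuild all indexes on the table, leaving 20% free space per page
        -- so that out-of-order inserts cause fewer page splits
        ALTER INDEX ALL ON dbo.Metadata
        REBUILD WITH (FILLFACTOR = 80);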

    Read the article

  • Static variable definition order in c++

    - by rafeeq
    Hi, I have a class Tools which has a static variable std::vector<std::string> m_tools. Can I insert values into the static variable from the global scope of other classes defined in other files? Example:

    tools.h:

        class Tools
        {
        public:
            static std::vector<std::string> m_tools;

            void print()
            {
                for (int i = 0; i < m_tools.size(); i++)
                    std::cout << "Tools initialized :" << m_tools[i];
            }
        };

    tools.cpp:

        std::vector<std::string> Tools::m_tools; // Definition

    Using a Register class constructor to insert a new string into the static variable:

        class Register
        {
        public:
            Register(std::string str)
            {
                Tools::m_tools.push_back(str);
            }
        };

    Different classes in different files insert their string into the static variable at global scope.

    first_tool.cpp:

        // Global scope: declare a global Register variable
        Register a("first_tool");

    second_tool.cpp:

        // Global scope: declare a global Register variable
        Register a("second_tool");

    Main.cpp:

        int main()
        {
            Tools abc;
            abc.print();
        }

    Will this work? In the above example only one string is getting inserted into the static list. The problem looks like "at global scope it tries to insert the element before the definition is done". Please let me know: is there any way to set the static definition priority? Or is there an alternative way of doing the same?

    Read the article

  • cakephp find('list') - problem using

    - by MOFlint
    I'm trying to get an array to populate a Counties select. If I use find('all') I can get the data, but this array needs flattening to use in the view with $form->input($counties). If I use find('list') I can't seem to get the right array, which is a simple array of county names. What I have tried is this:

        $ops = array(
            'conditions' => array(
                'display' => '!=0',
                'TO_DAYS(event_finish) >= TO_DAYS(NOW())'
            ),
            'fields' => 'DISTINCT venue_county',
            'order' => 'venue_county DESC'
        );
        $this->set('counties', $this->Event->find('list', $ops));

    but the SQL this generates is:

        SELECT Event.id, DISTINCT Event.venue_county
        FROM events AS Event
        WHERE display = 1 AND TO_DAYS(event_finish) = TO_DAYS(NOW())
        ORDER BY venue_county DESC

    which generates an error, because CakePHP first inserts the Event.id field into the query - which is not wanted and causes the error. In my database I have a single table for Events which includes the venue address, and I don't really want to create another table for the address. What options should I be using for the find('list') call?

    Read the article

  • Why could "insert (...) values (...)" not insert a new row?

    - by nang
    Hi, I have a simple SQL insert statement of the form:

        insert into MyTable (...) values (...)

    It is used repeatedly to insert rows and usually works as expected. It inserts exactly 1 row into MyTable, which is also the value returned by the Delphi statement AffectedRows := myInsertADOQuery.ExecSQL. After some time there was a temporary network connectivity problem. As a result, other threads of the same application perceived EOleExceptions (Connection failure, -2147467259 = unspecified error). Later, the network connection was reestablished, and these threads reconnected and were fine. The thread responsible for executing the insert statement described above, however, did not perceive the connectivity problems (no exceptions) - probably it was simply not executed while the network was down. But after the network connectivity problems, myInsertADOQuery.ExecSQL always returned 0 and no rows were inserted into MyTable anymore. After a restart of the application the insert statement worked again as expected. For SQL Server, is there any defined case where an insert statement like the one above would not insert a row and return 0 as the number of affected rows? The primary key is an autogenerated GUID. There are no unique or check constraints (which should result in an exception anyway rather than not inserting a row). Are there any known ADO bugs (Provider=SQLOLEDB.1)? Any other explanations for this behaviour? Thanks, Nang.
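
    One server-side thing sometimes worth ruling out in a "0 rows affected" mystery is a trigger on the target table, since a trigger can alter the rowcount the client sees; a small check, assuming SQL Server 2005+ and the table name from the question:

        -- list any triggers defined on the table
        SELECT name, is_disabled, is_instead_of_trigger
        FROM sys.triggers
        WHERE parent_id = OBJECT_ID('dbo.MyTable');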

    Read the article

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table with data maintained for reporting and historical purposes, where a very small subset of that data is used in daily operations.

    Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our Visitor and Visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. PS: We do not currently have the budget or expertise for a data warehouse.

    The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance a Visitor_Archive and a Visitor_Small table), where the larger Visitor and Visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table would exist for managing leads (sending a lead an email, getting a lead's phone number, listing my leads, etc.). The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?
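
    Not the replication route asked about, but for completeness: on SQL Server an indexed view over just the lead rows gives a small, automatically maintained copy of the subset with no separate sync job; a hedged sketch, where the column names (IsLead, etc.) are hypothetical stand-ins for however leads are flagged:

        CREATE VIEW dbo.Visitor_Leads
        WITH SCHEMABINDING
        AS
        SELECT VisitorID, Email, Phone, AssignedTo
        FROM dbo.Visitor
        WHERE IsLead = 1;
        GO

        -- materializes the view; SQL Server keeps it in sync with dbo.Visitor automatically
        CREATE UNIQUE CLUSTERED INDEX IX_Visitor_Leads ON dbo.Visitor_Leads (VisitorID);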

    Read the article

  • Rebuilding indexes does not change the fragmentation % for nonclustered indexes.

    - by Noddy
    For starters, I am no DBA, and I am working on rebuilding the indexes. I made use of the TSQL script from MSDN to alter indexes based on the fragmentation percent returned by dm_db_index_physical_stats: if the fragmentation percent is more than 30, do a REBUILD or a REORGANIZE. What I found was that in the first iteration there were 87 records which needed defragmenting. I ran the script, and all 87 indexes (clustered & nonclustered) were rebuilt or reindexed. When I got the stats from dm_db_index_physical_stats again, there were still 27 records which needed defragmenting, and all of these were NONCLUSTERED indexes. All the clustered indexes were fixed. No matter how many times I run the script to defragment these records, I still have the same indexes to be defragmented, and most of them with the same fragmentation %. Nothing seems to change after this. Note: I did not perform any inserts/updates/deletes on the tables during these iterations. Still, the rebuild/reorganize did not result in any change. More information: Using SQL 2008, with the script available on MSDN: http://msdn.microsoft.com/en-us/library/ms188917.aspx Could you please explain why these 27 nonclustered indexes are not being changed/modified? Any help on this would be highly appreciated. Nod
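
    One variable that often explains this pattern is index size: very small nonclustered indexes (a handful of pages, allocated from mixed extents) routinely report fragmentation that a rebuild cannot remove, which is why maintenance scripts commonly skip indexes under roughly 1000 pages. A query to see page counts next to the fragmentation figures, as a sketch:

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.index_type_desc,
               ips.page_count,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 30
        ORDER BY ips.page_count;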

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch, following this guide. The details are as below:

        mkdir /usr/local/mysqlc732/
        cd /usr/local/src/mysql-cluster-gpl-7.3.2
        cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON
        make
        make install

    Everything works fine until this step:

        # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n
        FLEXASYNCH - Starting normal mode
        Perform benchmark of insert, update and delete transactions
        1 number of concurrent threads
        80 number of parallel operation per thread
        100 transaction(s) per round
        2 iterations
        Load Factor is 80%
        25 attributes per table
        1 is the number of 32 bit words per attribute
        Tables are with logging
        Transactions are executed with hint provided
        No force send is used, adaptive algorithm used
        Key Errors are disallowed
        Temporary Resource Errors are allowed
        Insufficient Space Errors are disallowed
        Node Recovery Errors are allowed
        Overload Errors are allowed
        Timeout Errors are allowed
        Internal NDB Errors are allowed
        User logic reported Errors are allowed
        Application Errors are disallowed
        Using table name TAB0
        NDBT_ProgramExit: 1 - Failed

    ndb_cluster.log:

        WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned eror: 'No free node id found for mysqld(API).'

    I also have recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in debug mode?

        # /usr/local/mysqlc732/bin/flexAsynch -h
        FLEXASYNCH
        Perform benchmark of insert, update and delete transactions
        Arguments:
            -t Number of threads to start, default 1
            -p Number of parallel transactions per thread, default 32
            -o Number of transactions per loop, default 500
            -l Number of loops to run, default 1, 0=infinite
            -load_factor Number Load factor in index in percent (40 -> 99)
            -a Number of attributes, default 25
            -c Number of operations per transaction
            -s Size of each attribute, default 1 (PK is always of size 1, independent of this value)
            -simple Use simple read to read from database
            -dirty Use dirty read to read from database
            -write Use writeTuple in insert and update
            -n Use standard table names
            -no_table_create Don't create tables in db
            -temp Create table(s) without logging
            -no_hint Don't give hint on where to execute transaction coordinator
            -adaptive Use adaptive send algorithm (default)
            -force Force send when communicating
            -non_adaptive Send at a 10 millisecond interval
            -local 1 = each thread its own node, 2 = round robin on node per parallel trans, 3 = random node per parallel trans
            -ndbrecord Use NDB Record
            -r Number of extra loops
            -insert Only run inserts on standard table
            -read Only run reads on standard table
            -update Only run updates on standard table
            -delete Only run deletes on standard table
            -create_table Only run Create Table of standard table
            -drop_table Only run Drop Table on standard table
            -warmup_time Warmup Time before measurement starts
            -execution_time Execution Time where measurement is done
            -cooldown_time Cooldown time after measurement completed
            -table Number of standard table, default 0

    Read the article

  • Postmaster uses excessive CPU and Disk Writes

    - by wolfcastle
    Using PostgreSQL 9.1.2, I'm seeing excessive CPU usage and large amounts of writes to disk from postmaster tasks. This happens even while my application is doing almost nothing (10s of inserts per MINUTE). There are a reasonable number of connections open, however. I've been trying to determine what in my application is causing this. I'm pretty new to PostgreSQL and haven't gotten anywhere so far. I've turned on some logging options in my config file and looked at connections in the pg_stat_activity table, but they are all idle. Yet each connection consumes ~50% CPU and is writing ~15M/s to disk (reading nothing). I'm basically using the stock postgresql.conf with very few tweaks. I'd appreciate any advice or pointers on what I can do to track this down. Here is a sample of what top/iotop is showing me:

        Cpu(s): 18.9%us, 14.4%sy, 0.0%ni, 53.4%id, 11.8%wa, 0.0%hi, 1.5%si, 0.0%st
        Mem:  32865916k total, 7263720k used, 25602196k free, 575608k buffers
        Swap: 16777208k total, 0k used, 16777208k free, 4464212k cached

          PID USER     PR NI VIRT RES SHR  S %CPU %MEM TIME+    COMMAND
        17057 postgres 20  0 236m 33m 13m  R 45.0  0.1 73:48.78 postmaster
        17188 postgres 20  0 219m 15m 11m  R 42.3  0.0 61:45.57 postmaster
        17963 postgres 20  0 219m 16m 11m  R 42.3  0.1 27:15.01 postmaster
        17084 postgres 20  0 219m 15m 11m  S 41.7  0.0 63:13.64 postmaster
        17964 postgres 20  0 219m 17m 12m  R 41.7  0.1 27:23.28 postmaster
        18688 postgres 20  0 219m 15m 11m  R 41.3  0.0 63:46.81 postmaster
        17088 postgres 20  0 226m 24m 12m  R 41.0  0.1 64:39.63 postmaster
        24767 postgres 20  0 219m 17m 12m  R 41.0  0.1 24:39.24 postmaster
        18660 postgres 20  0 219m 14m 9.9m S 40.7  0.0 60:51.52 postmaster
        18664 postgres 20  0 218m 15m 11m  S 40.7  0.0 61:39.61 postmaster
        17962 postgres 20  0 222m 19m 11m  S 40.3  0.1 11:48.79 postmaster
        18671 postgres 20  0 219m 14m 9m   S 39.4  0.0 60:53.21 postmaster
        26168 postgres 20  0 219m 15m 10m  S 38.4  0.0 59:04.55 postmaster

        Total DISK READ: 0.00 B/s | Total DISK WRITE: 195.97 M/s
          TID PRIO USER     DISK READ DISK WRITE SWAPIN IO>    COMMAND
        17962 be/4 postgres 0.00 B/s  14.83 M/s  0.00 % 0.25 % postgres: aggw aggw [local] idle
        17084 be/4 postgres 0.00 B/s  15.53 M/s  0.00 % 0.24 % postgres: aggw aggw [local] idle
        17963 be/4 postgres 0.00 B/s  15.00 M/s  0.00 % 0.24 % postgres: aggw aggw [local] idle
        17188 be/4 postgres 0.00 B/s  14.80 M/s  0.00 % 0.24 % postgres: aggw aggw [local] idle
        17964 be/4 postgres 0.00 B/s  15.50 M/s  0.00 % 0.24 % postgres: aggw aggw [local] idle
        18664 be/4 postgres 0.00 B/s  15.13 M/s  0.00 % 0.23 % postgres: aggw aggw [local] idle
        17088 be/4 postgres 0.00 B/s  14.71 M/s  0.00 % 0.13 % postgres: aggw aggw [local] idle
        18688 be/4 postgres 0.00 B/s  14.72 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        24767 be/4 postgres 0.00 B/s  14.93 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        18671 be/4 postgres 0.00 B/s  16.14 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        17057 be/4 postgres 0.00 B/s  13.58 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        26168 be/4 postgres 0.00 B/s  15.50 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
        18660 be/4 postgres 0.00 B/s  15.85 M/s  0.00 % 0.00 % postgres: aggw aggw [local] idle
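
    For reference, a quick way to see what those backends are doing at the moment they spike is to sample pg_stat_activity; a sketch using the pre-9.2 column names (procpid / current_query), since the question is on 9.1:

        -- one row per backend; '<IDLE>' in current_query means the connection is idle
        SELECT procpid, usename, waiting, xact_start, query_start, current_query
        FROM pg_stat_activity
        ORDER BY query_start;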

    Read the article

  • ODI SDK: Retrieving Information From the Logs

    - by Christophe Dupupet
    It is fairly common to want to retrieve data from the ODI logs: statistics, execution status, even the generated code can be retrieved from the logs. The ODI SDK provides a robust set of APIs to parse the repository and retrieve such information. To locate the information you are looking for, you have to keep in mind the structure of the logs: sessions contain steps; steps contain tasks. The session is the execution unit: basically, each time you execute something (interface, package, procedure, scenario) you create a new session. The steps are the individual entries found in a session: these will be the icons in your package, for instance. Or if you are running an interface, you will have one single step: the interface itself. The tasks represent the more atomic elements of the steps: the individual DDL, DML, scripts and so forth that are generated by ODI, along with all the detailed statistics for that task. All these details can be retrieved with the SDK. Because I had a question recently on the API ODIStepReport, I focus explicitly in this code on Scenario logs, but a lot more can be done with these APIs. Here is the code sample (you can just cut and paste that code in your ODI 11.1.1.6 Groovy console). Just save, adapt the code to your environment (in particular to connect to your repository) and hit "run".

        //Created by ODI Studio
        import oracle.odi.core.OdiInstance
        import oracle.odi.core.config.OdiInstanceConfig
        import oracle.odi.core.config.MasterRepositoryDbInfo
        import oracle.odi.core.config.WorkRepositoryDbInfo
        import oracle.odi.core.security.Authentication
        import oracle.odi.core.config.PoolingAttributes
        import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder
        import oracle.odi.domain.runtime.scenario.OdiScenario
        import java.util.Collection
        import java.io.*

        /* -----------------------------------------------------------------------------------------
           Simple sample code to list all executions of the last version of a scenario,
           along with detailed step information
           ----------------------------------------------------------------------------------------- */

        /* update the following parameters to match your environment => */
        def url = "jdbc:oracle:thin:@myserver:1521:orcl"
        def driver = "oracle.jdbc.OracleDriver"
        def schema = "ODIM1116"
        def schemapwd = "ODIM1116PWD"
        def workrep = "WORKREP1116"
        def odiuser = "SUPERVISOR"
        def odiuserpwd = "SUNOPSIS"

        // Rather than hardcoding the project code and folder name,
        // a great improvement here would be to parse the entire repository
        def scenario_name = "LOAD_DWH" /* Scenario Name */
        /* <= End of the update section */

        //--------------------------------------
        // Connection to the repository
        // Note for ODI 11.1.1.6: you could use the predefined odiInstance variable if you are
        // running the script from a Studio that is already connected to the repository
        def masterInfo = new MasterRepositoryDbInfo(url, driver, schema, schemapwd.toCharArray(), new PoolingAttributes())
        def workInfo = new WorkRepositoryDbInfo(workrep, new PoolingAttributes())
        def odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo))

        //--------------------------------------
        // In all cases, we need to make sure we have authorized access to the repository
        def auth = odiInstance.getSecurityManager().createAuthentication(odiuser, odiuserpwd.toCharArray())
        odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth)

        //--------------------------------------
        // Retrieve the scenario we are looking for
        def odiScenario = ((IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class)).findLatestByName(scenario_name)

        if (odiScenario == null){
            println("Error: cannot find scenario "+scenario_name);
            return
        }

        //--------------------------------------
        // Retrieve all reports for the scenario
        def OdiScenarioReportsList = odiScenario.getScenarioReports()

        println("*** Listing all reports for Scenario \""+scenario_name+"\" ")

        //--------------------------------------
        // For each report, print the following:
        // - start time
        // - duration
        // - status
        // - step reports: selection of details
        for (s in OdiScenarioReportsList){
            println("\tStart time: " + s.getSessionStartTime())
            println("\tDuration: " + s.getSessionDuration())
            println("\tStatus: " + s.getSessionStatus())

            def OdiScenarioStepReportsList = s.getStepReports()
            for (st in OdiScenarioStepReportsList){
                println("\t\tStep Name: " + st.getStepName())
                println("\t\tStep Resource Name: " + st.getStepResourceName())
                println("\t\tStep Start time: " + st.getStepStartTime())
                println("\t\tStep Duration: " + st.getStepDuration())
                println("\t\tStep Status: " + st.getStepStatus())
                println("\t\tStep # of inserts: " + st.getStepInsertCount())
                println("\t\tStep # of updates: " + st.getStepUpdateCount()+'\n')
            }
            println("\t")
        }

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

    The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.

    Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster.

    Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

    Memcached Implementation for InnoDB

    The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.

    Figure 1: Memcached API Implementation for InnoDB

    With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for Foreign Keys, XA transactions and complex JOIN operations.

    Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second.

    Figure 2: Over 9x Faster INSERT Operations

    The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts, with the added assurance of transactional guarantees.

    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here.

    Memcached Implementation for MySQL Cluster

    Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster.

    Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache.

    Figure 3: Memcached API Implementation with MySQL Cluster

    Implementation is simple:
    1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
    2. This invokes the Memcached Driver for NDB (which is part of the same process).
    3. The NDB API is called, providing very quick access to the data held in MySQL Cluster's data nodes.

    The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.

    Using Memcached for Schema-less Data

    By default, every Key / Value is written to the same table, with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.

    Conclusion

    Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high-scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.

    ClusterJ and ClusterJPA

    ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:
    - Persistent classes
    - Relationships
    - Joins in queries
    - Lazy loading
    - Table and index creation from object model

    By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified, with faster development cycles resulting in accelerated time to market for new services.

    MySQL Cluster offers multiple NoSQL APIs alongside Java:
    - Memcached for a persistent, high performance, write-scalable Key/Value store
    - HTTP/REST via an Apache module
    - C++ via the NDB API for the lowest absolute latency

    Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster's distributed, shared-nothing architecture with auto-sharding and real-time performance makes it a great fit for workloads requiring high-volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight.

    OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.

    Q & A

    Q. Why would I use Connector/J vs. ClusterJ?
    A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better, as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability.

    Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types?
    A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and the new data is instantly visible to all of the others.

    Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
    A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.

    Q. Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
    A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.

    Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
    A. ClusterJ will send identical PK lookups to the same data node.

    Q. How is distributed query processing handled by MySQL Cluster?
    A. If the data is split between data nodes, then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query.

    Q. Can I use Foreign Keys with MySQL Cluster?
    A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.

    Summary

    The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today!

    Key Resources
    - MySQL Cluster on-line demo
    - MySQL ClusterJ and JPA on-demand webinar
    - MySQL ClusterJ and JPA documentation
    - MySQL ClusterJ and JPA whitepaper and tutorial

    Read the article

  • Common usecases and techniques when integrating a 3rd party application with Oracle Sales Cloud

    - by asantaga
    Over the last year or so I've seen a lot of partners migrating and integrating their applications with Oracle Sales Cloud. Interestingly, I'd say 60% of the partners use the same set of design patterns over and over again. Most of the time I see that they want to embed their application into Oracle Sales Cloud, within a tab usually, perhaps click on a link to their application (passing some piece of data + credentials) and then, within their application, update Sales Cloud again using webservices. Here are some examples of the different use-cases I've seen, and how partners are embedding their applications into Sales Cloud. NB: The following examples use the "Desktop" User Interface rather than the newer "Simplified User Interface"; I'll update the sample application soon, but the integration patterns are precisely the same.

    Use Case 1: Navigator "Link out" to third party application

    This is an example of where the developer has added a link to the global navigator and this links out to the 3rd party application. Typically one doesn't pass any contextual data, with the exception of perhaps user credentials, or better still a JWT Token.

    Techniques Used
    - Adding Link to Menu Item
    - Using JWT Token in Sales Cloud

    Use Case 2: Application Embedded within the Sales Cloud Dashboard

    Within the Oracle Sales Cloud application there is a tab called "Sales"; within this tab it's possible to embed a SubTab and embed an iFrame pointing to your application. To do this the developer simply needs to edit the page in customization mode, add the tab and then add the iFrame, simples! The developer can pass credentials/JWT Token and some other pieces of data, but not object data (i.e. the current OpportunityID etc.).

    Techniques Used
    - Adding a page to the dashboard
    - Using JWT Token in Sales Cloud

    Use Case 3: Embedding a Tab and Context Linking out from a Sales Cloud object to the 3rd party application

    In this use case the developer embeds two components into Oracle Sales Cloud. The first is a SubTab showing summary data to the user (a quote in our case), and the second is a hyperlink (although it could be a button) which, when clicked, navigates the user to the 3rd party application. In this case the developer almost always passes context-specific data (i.e. the opportunityId) and a security token (username/password combo or JWT Token). The third party application usually takes the data, perhaps queries more data using the Sales Cloud SOAP/WebService interface, and then displays the resulting mashup to the user for further processing. When the user has finished their work in the 3rd party application they normally navigate back to Oracle Sales Cloud using what's called a "DeepLink", i.e. taking them back to the object [opportunity in our case] they came from. This image visually shows a "Happy Path" a user may follow, and combines linking out to an application, webservice calls and deep linking back to Sales Cloud.

    Techniques Used
    - Extending a Sales Cloud application with a custom button
    - Using JWT Token in Sales Cloud
    - Extending Oracle Sales Cloud [Opportunity] with a custom tab exposing External Content
    - Retrieving Data from Oracle Sales Cloud using WebServices
    - Coding some groovy script to generate the URLs required (Doc 1571200.1 on My Oracle Support)
    - DeepLinking to specific Oracle Sales Cloud Pages (Doc 1516151.1 on My Oracle Support)

    Use Case 4: Server Side processing/synchronization

    This use case focuses on the server-side processing of data, in this case synchronizing data. Here the 3rd party application is running on a "timer", e.g. cron or similar, and when triggered it queries data from Oracle Sales Cloud, then it queries data from the 3rd party application, determines the deltas and then inserts the data where required. Specifically, here we are calling Oracle Sales Cloud using SOAP/WebServices and the 3rd party application is being communicated with using the REST API; for Oracle Sales Cloud one would use standard JAX-WS WebService calls, and for REST one would use the JAX-RS API and perhaps the Jackson API for managing JSON objects. This is a very common use case and one which specifically lends itself to using the Oracle Java Cloud Service as the ideal application server on which to host the mediator between the two applications.

    Techniques Used
    - Using JWT Token in Sales Cloud
    - Integrating with the Oracle Java Cloud Service
    - Retrieving Data from Oracle Sales Cloud using WebServices

    General Resources

    The above is just a small set of techniques and use-cases which are used today. There are plenty of other sources of documentation and resources available on the internet, but to get you started here are a few of my favourite places:
    - Sales Cloud General Documentation
    - Sales Cloud Customize Tab (useful for general customization of Sales Cloud)
    - Sales Cloud Integration Tab (focuses on the 3rd party integration techniques)
    - Official Oracle Fusion Developer Relations Blog
    - Official Oracle Fusion Developer Relations YouTube Channel

    Enjoy integrating!

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To re-iterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as for where people see problems that this might lead to.

    The Problem

    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits, or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model

    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution

    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );

    I would then add "table methods" to extract debits and credits for reporting purposes:

        CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$;

        CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$;

    Then the journal entry table (simplified for this example):

        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );

    Then a table method and check constraints:

        CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS
        $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$;

        ALTER TABLE journal_entry ADD CONSTRAINT CHECK (((journal_entry.running_total) = 0));

        ALTER TABLE journal_line ADD FOREIGN KEY journal_entry_id REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

        CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE plpgsql AS
        $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.id, account_id, amount FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;

    And finally:

        CREATE TRIGGER je_breakout_trigger
        AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?

    In getting to this point I have to say I have looked at four different current ERP solutions to this problem:
    1. Represent every line item as a debit and a credit against different accounts.
    2. Use of foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use of constraint triggers in PostgreSQL.
    4. Forcing all validation here solely through the app logic.

    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as being very complex and requiring a series of constraints and foreign keys against self to make work, and therefore it strikes me as complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution....

    Read the article
