Search Results

Search found 22078 results on 884 pages for 'composite primary key'.


  • sql: Group by x,y,z; return grouped by x,y with lowest f(z)

    - by Sai Emrys
    This is for http://cssfingerprint.com I collect timing stats about how fast the different methods I use perform on different browsers, etc., so that I can optimize the scraping speed. Separately, I have a report about what each method returns for a handful of URLs with known-correct values, so that I can tell which methods are bogus on which browsers. (Each is different, alas.) The related tables look like this: CREATE TABLE `browser_tests` ( `id` int(11) NOT NULL AUTO_INCREMENT, `bogus` tinyint(1) DEFAULT NULL, `result` tinyint(1) DEFAULT NULL, `method` varchar(255) DEFAULT NULL, `url` varchar(255) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=33784 DEFAULT CHARSET=latin1 CREATE TABLE `method_timings` ( `id` int(11) NOT NULL AUTO_INCREMENT, `method` varchar(255) DEFAULT NULL, `batch_size` int(11) DEFAULT NULL, `timing` int(11) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=28849 DEFAULT CHARSET=latin1 (user_agent is broken down pre-insert into browser, version, and os from a small list of recognized values using regex; I keep the original user-agent string just in case.) I have a query like this that tells me the average timing for every non-bogus browser / version / method tuple: select c, avg(bogus) as bog, timing, method, browser, version from browser_tests as b inner join ( select count(*) as c, round(avg(timing)) as timing, method, browser, version from method_timings group by browser, version, method having c > 10 order by browser, version, timing ) as t using (browser, version, method) group by browser, version, method having bog < 1 order by browser, version, timing; Which returns something like: c bog tim method browser version 88 0.8333 184 reuse_insert Chrome 4.0.249.89 18 0.0000 238 mass_insert_width Chrome 4.0.249.89 70 0.0400 246 mass_insert Chrome 4.0.249.89 70 0.0400 327 mass_noinsert Chrome 4.0.249.89 88 0.0556 367 reuse_reinsert Chrome 4.0.249.89 88 0.0556 383 jquery Chrome 4.0.249.89 88 0.0556 863 full_reinsert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 187 0.8806 109 reuse_insert Chrome 5.0.307.11 123 0.0000 110 mass_insert_width Chrome 5.0.307.11 176 0.0000 231 mass_noinsert Chrome 5.0.307.11 176 0.0000 237 mass_insert Chrome 5.0.307.11 187 0.0000 314 reuse_reinsert Chrome 5.0.307.11 187 0.0000 372 full_reinsert Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 12 0.2500 102 jquery Chrome 5.0.335.0 [...] I want to modify this query to return only the browser/version/method with the lowest timing - i.e. something like: 88 0.8333 184 reuse_insert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 [...] How can I do this, while still returning the method that goes with that lowest timing? I could filter it app-side, but I'd rather do this in mysql since it'd work better with my caching.
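
    One way to do this in MySQL, sketched against the method_timings table from the question: compute the per-method averages as before, then join them to the per-group minimum so only the fastest method for each browser/version survives. The bogus filter against browser_tests would still be folded in exactly as in the original query, and ties on the minimum timing would return more than one row per group:
        SELECT t.c, t.timing, t.method, t.browser, t.version
        FROM (
            -- per-method average timing, as in the original inner query
            SELECT COUNT(*) AS c, ROUND(AVG(timing)) AS timing,
                   method, browser, version
            FROM method_timings
            GROUP BY browser, version, method
            HAVING c > 10
        ) AS t
        INNER JOIN (
            -- lowest average timing per browser/version
            SELECT browser, version, MIN(timing) AS min_timing
            FROM (
                SELECT ROUND(AVG(timing)) AS timing, method, browser, version
                FROM method_timings
                GROUP BY browser, version, method
                HAVING COUNT(*) > 10
            ) AS x
            GROUP BY browser, version
        ) AS m
               ON m.browser = t.browser
              AND m.version = t.version
              AND m.min_timing = t.timing
        ORDER BY t.browser, t.version;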

    Read the article

  • Get HDD (and NOT Volume) Serial Number on Vista Ultimate 64 bit

    - by TheAgent
    Hi all. I was once looking for getting the HDD serial number without using WMI, and I found it. The code I found and posted on StackOverFlow.com works very well on 32 bit Windows, both XP and Vista. The trouble only begins when I try to get the serail number on 64 bit OSs (Vista Ultimate 64, specifically). The code returns String.Empty, or a Space all the time. Anyone got an idea how to fix this? EDIT: I used the tools Dave Cluderay suggested, with interesting results: Here is the output from DiskId32, on Windows XP SP2 32-bit: To get all details use "diskid32 /d" Trying to read the drive IDs using physical access with admin rights Drive 0 - Primary Controller - - Master drive Drive Model Number________________: [MAXTOR STM3160215AS] Drive Serial Number_______________: [ 6RA26XK3] Drive Controller Revision Number__: [3.AAD] Controller Buffer Size on Drive___: 2097152 bytes Drive Type________________________: Fixed Drive Size________________________: 160041885696 bytes Trying to read the drive IDs using the SCSI back door Drive 4 - Tertiary Controller - - Master drive Drive Model Number________________: [MAXTOR STM3160215AS] Drive Serial Number_______________: [ 6RA26XK3] Drive Controller Revision Number__: [3.AAD] Controller Buffer Size on Drive___: 2097152 bytes Drive Type________________________: Fixed Drive Size________________________: 160041885696 bytes Trying to read the drive IDs using physical access with zero rights **** STORAGE_DEVICE_DESCRIPTOR for drive 0 **** Vendor Id = [] Product Id = [MAXTOR STM3160215AS] Product Revision = [3.AAD] Serial Number = [] **** DISK_GEOMETRY_EX for drive 0 **** Disk is fixed DiskSize = 160041885696 Trying to read the drive IDs using Smart Drive 0 - Primary Controller - - Master drive Drive Model Number________________: [MAXTOR STM3160215AS] Drive Serial Number_______________: [ 6RA26XK3] Drive Controller Revision Number__: [3.AAD] Controller Buffer Size on Drive___: 2097152 bytes Drive Type________________________: Fixed Drive Size________________________: 160041885696 bytes Hard Drive Serial Number__________: 6RA26XK3 Hard Drive Model Number___________: MAXTOR STM3160215AS And DiskId32 run on Windows Vista Ultimate 64-bit: To get all details use "diskid32 /d" Trying to read the drive IDs using physical access with admin rights Trying to read the drive IDs using the SCSI back door Trying to read the drive IDs using physical access with zero rights **** STORAGE_DEVICE_DESCRIPTOR for drive 0 **** Vendor Id = [MAXTOR S] Product Id = [TM3160215AS] Product Revision = [3.AA] Serial Number = [] **** DISK_GEOMETRY_EX for drive 0 **** Disk is fixed DiskSize = 160041885696 Trying to read the drive IDs using Smart Hard Drive Serial Number__________: Hard Drive Model Number___________: Notice how much lesser the information is on Vista, and how the Serial Number is not returned. Also the other tool, EnumDisk, refers to my hard disks on Vista as "SCSI" as opposed to "ATA" on Windows XP. Any ideas? EDIT 2: I'm posting the results from EnumDisks: On Windows XP SP2 32-bit: Properties for Device 1 Device ID: IDE\DiskMAXTOR_STM3160215AS_____________________3.AAD___ Adapter Properties ------------------ Bus Type : ATA Max. Tr. Length: 0x20000 Max. Phy. 
Pages: 0xffffffff Alignment Mask : 0x1 Device Properties ----------------- Device Type : Direct Access Device (0x0) Removable Media : No Product ID : MAXTOR STM3160215AS Product Revision: 3.AAD Inquiry Data from Pass Through ------------------------------ Device Type: Direct Access Device (0x0) Vendor ID : MAXTOR S Product ID : TM3160215AS Product Rev: 3.AA Vendor Str : *** End of Device List

    Read the article

  • Cannot add Silverlight Maps Control to Windows Mobile 7 application

    - by Jacob
    I know the bits just came out today, but one of the first things I want to do with the newly released Windows Mobile 7 SDK is put a map up on the screen and mess around. I've downloaded the latest version of the Silverlight Maps Control and added the references to my application. As a matter of fact, the VS 2010 design view of the MainPage.xaml shows the map control after adding the namespace and placing the control. I'm using the provided VS 2010 Express version that comes with the Win Mobile 7 SDK and have just used the New Project - Windows Phone Application template. When I try to build I get two warnings related to the Microsoft.Maps.MapControl dll's. Warning 1 The primary reference "Microsoft.Maps.MapControl, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the framework assembly "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" which could not be resolved in the currently targeted framework. "Silverlight,Version=v4.0,Profile=WindowsPhone". To resolve this problem, either remove the reference "Microsoft.Maps.MapControl, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" or retarget your application to a framework version which contains "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e". Warning 2 The primary reference "Microsoft.Maps.MapControl.Common, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the framework assembly "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" which could not be resolved in the currently targeted framework. "Silverlight,Version=v4.0,Profile=WindowsPhone". To resolve this problem, either remove the reference "Microsoft.Maps.MapControl.Common, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" or retarget your application to a framework version which contains "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e". I'm leaning towards some way of adding the System.Windows.Browser to the targeted framework version. But I'm not even sure if that is possible. To be more specific; I'm looking for a way to get the Silverlight Maps Control up on a Windows Phone 7 series application. If possible. Thanks.

    Read the article

  • Idea for a user friendly/non technical RAD tool for performing queries and reports on Database

    - by pierocampanelli
    I am looking for a tool that allows a user to run queries against a database in a user-friendly way, extracting data and creating reports. The primary requirement is that we cannot know in advance which queries users are going to run, so we need to design a flexible UI that lets them specify queries in a non-technical way. My question is: do you know of any tool that does something similar? Do you have any inspiring user interface examples?

    Read the article

  • HtAccess Rewrite Needed

    - by pws5068
    My host will not allow me to change the default folder of my primary domain. I have managed to Rewrite http://www.mysite.com to the real folder public_html/mysite.com/www/ with the following code: RewriteEngine On RewriteRule ^$ /mysite.com/www/ [R=301,L] This does successfully load my domain from the subfolder, but the url becomes: http://mysite.com/mysite.com/www/ How can I continue loading requests from http://mysite.com/index.html in the correct folder shown above, without showing it in the client-side url?

    Read the article

  • Amazon SimpleDB Identity Seed equivalent

    - by Zaff
    Is there an equivalent to an identity seed in SimpleDB? If the answer is no, how do you handle creating something like a customer number or order number in a way that prevents duplicate numbers? My experience is mainly with SQL Server, in which I would either create a primary key with an identity seed or use transactions in a stored procedure to increment the number. Thanks for your help!
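
    For comparison, the SQL Server counter pattern the question alludes to looks roughly like the sketch below (the table and column names are made up); SimpleDB has no built-in equivalent, so the same idea has to be emulated on the client side, for example with conditional writes against a counter item:
        -- Hypothetical counter table: one row per number series.
        CREATE TABLE dbo.Counters (
            CounterName varchar(50) NOT NULL PRIMARY KEY,
            NextValue   int         NOT NULL
        );

        -- Claim the next order number atomically in a single statement;
        -- the variable assignment inside UPDATE avoids a separate SELECT.
        DECLARE @next int;
        UPDATE dbo.Counters
           SET @next = NextValue = NextValue + 1
         WHERE CounterName = 'OrderNumber';
        SELECT @next AS OrderNumber;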

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgreSQL table that has a continuous rate of updates, deletes and inserts and that, over time (a few days), sees significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings. The table in our test starts out empty and grows to half a million rows. It has a fairly large row (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client. I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code. Additional information: table and index:
        CREATE TABLE icl_contacts (
            id bigint NOT NULL,
            campaignfqname character varying(255) NOT NULL,
            currentstate character(16) NOT NULL,
            xmlscheduledtime character(23) NOT NULL,
            ... 25 or so other fields, most of them fixed or varying character fields ...
            CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        ) WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;
        CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname);
    Analyze:
        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
             Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))
    And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is on understanding how PostgreSQL is managing the index and the query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.
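
    For what it's worth, the usual suspects for this pattern (heavy churn with default settings) are table and index bloat that autovacuum isn't keeping up with. A few maintenance statements one could experiment with, reusing the table and index from the question; note that REINDEX locks the table for writes while it runs, so it may not fit the 3-5 second windows:
        -- Reclaims dead row versions and refreshes planner statistics without
        -- the exclusive lock a rebuild needs.
        VACUUM ANALYZE icl_contacts;

        -- Rebuilding just the one index restores its density, at the cost of
        -- blocking writes to the table for the duration.
        REINDEX INDEX icl_contacts_idx;

        -- On versions that support per-table storage parameters, autovacuum
        -- can be made much more aggressive for this one hot table.
        ALTER TABLE icl_contacts SET (
            autovacuum_vacuum_scale_factor = 0.05,
            autovacuum_analyze_scale_factor = 0.05
        );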

    Read the article

  • PHP & MySQL - Problem deleting table rows

    - by oReiLLy
    Okay my script is supposed to delete a specific users case which is stored in 2 MySQL tables but for some reason when the user deletes the specific case it deletes all the users cases I only want it to delete the case the user selects. I was wondering how can I fix this problem? Thanks in advance for helping. Here is the PHP & MySQL code. if(isset($_POST['delete_case'])) { $cases_ids = array(); $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $dbc = mysqli_query($mysqli,"SELECT cases.*, users_cases.* FROM cases INNER JOIN users_cases ON users_cases.cases_id = cases.id WHERE users_cases.user_id='$user_id'"); if (!$dbc) { print mysqli_error($mysqli); } else { while($row = mysqli_fetch_array($dbc)){ $cases_ids[] = $row["cases_id"]; } } foreach($_POST['delete_id'] as $di) { if(in_array($di, $cases_ids)) { $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $dbc = mysqli_query($mysqli,"DELETE FROM users_cases WHERE cases_id = '$di'"); $dbc2 = mysqli_query($mysqli,"DELETE FROM cases WHERE id = '$di'"); } } } Here is the XHTML code. <li> <input type="text" name="file[]" size="25" /> <input type="text" name="case[]" size="25" /> <input type="text" name="name[]" size="25" /> <input type="submit" name="delete_case" id="delete_case" value="Delete Case" /> <input type="hidden" name="delete_id[]" value="' . $row['cases_id'] . '" /> </li> <li> <input type="text" name="file[]" size="25" /> <input type="text" name="case[]" size="25" /> <input type="text" name="name[]" size="25" /> <input type="submit" name="delete_case" id="delete_case" value="Delete Case" /> <input type="hidden" name="delete_id[]" value="' . $row['cases_id'] . '" /> </li> <li> <input type="text" name="file[]" size="25" /> <input type="text" name="case[]" size="25" /> <input type="text" name="name[]" size="25" /> <input type="submit" name="delete_case" id="delete_case" value="Delete Case" /> <input type="hidden" name="delete_id[]" value="' . $row['cases_id'] . '" /> </li> Here is the MySQL tables. CREATE TABLE cases ( id INT UNSIGNED NOT NULL AUTO_INCREMENT, file VARCHAR(255) NOT NULL, case VARCHAR(255) NOT NULL, name VARCHAR(255) NOT NULL, PRIMARY KEY (id) ); CREATE TABLE users_cases ( id INT UNSIGNED NOT NULL AUTO_INCREMENT, cases_id INT UNSIGNED NOT NULL, user_id INT UNSIGNED NOT NULL, PRIMARY KEY (id) );
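
    A likely cause, judging from the markup shown: if all three <li> rows sit inside one form, clicking any Delete Case button posts every hidden delete_id[] value, so the loop deletes every case the user owns. Each row needs to submit only its own id (for example, one form per row). On the SQL side, the two deletes can also be collapsed into a single MySQL multi-table DELETE that removes exactly one case and checks ownership at the same time; the placeholders below stand in for bound (or properly escaped) values:
        -- Delete one case and its users_cases link in a single statement,
        -- and only if that case actually belongs to the current user.
        DELETE c, uc
        FROM cases AS c
        INNER JOIN users_cases AS uc
                ON uc.cases_id = c.id
        WHERE c.id = ?          -- the single case the user clicked
          AND uc.user_id = ?;   -- ownership check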

    Read the article

  • DataRelation Insert and ForeignKey

    - by Steve
    Guys, I have a winforms application with two DataGridViews displaying a master-detail relationship from my Person and Address tables. Person table has a PersonID field that is auto-incrementing primary key. Address has a PersonID field that is the FK. I fill my DataTables with DataAdapter and set Person.PersonID column's AutoIncrement=true and AutoIncrementStep=-1. I can insert records in the Person DataTable from the DataGridView. The PersonID column displays unique negative values for PersonID. I update the database by calling DataAdapter.Update(PersonTable) and the negative PersonIDs are converted to positive unique values automatically by SQL Server. Here's the rub. The Address DataGridView show the address table which has a DataRelation to Person by PersonID. Inserted Person records have the temporary negative PersonID. I can now insert records into Address via DataGridView and Address.PersonID is set to the negative value from the DataRelation mapping. I call Adapter.Update(AddressTable) and the negative PersonIDs go into the Address table breaking the relationship. How do you guys handle primary/foreign keys using DataTables and master-detail DataGridViews? Thanks! Steve EDIT: After more googling, I found that SqlDataAdapter.RowUpdated event gives me what I need. I create a new command to query the last id inserted by using @@IDENTITY. It works pretty well. The DataRelation updates the Address.PersonID field for me so it's required to Update the Person table first then update the Address table. All the new records insert properly with correct ids in place! Adapter = new SqlDataAdapter(cmd); Adapter.RowUpdated += (s, e) => { if (e.StatementType != StatementType.Insert) return; //set the id for the inserted record SqlCommand c = e.Command.Connection.CreateCommand(); c.CommandText = "select @@IDENTITY id"; e.Row[0] = Convert.ToInt32( c.ExecuteScalar() ); }; Adapter.Fill(this); SqlCommandBuilder sb = new SqlCommandBuilder(Adapter); sb.GetDeleteCommand(); sb.GetUpdateCommand(); sb.GetInsertCommand(); this.Columns[0].AutoIncrement = true; this.Columns[0].AutoIncrementSeed = -1; this.Columns[0].AutoIncrementStep = -1;
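
    One caveat on the RowUpdated approach above: @@IDENTITY returns the last identity value generated on the connection in any scope, so an insert performed by a trigger on Person would skew it. SCOPE_IDENTITY() avoids that, but it has to run in the same batch as the INSERT (issued as a separate command it returns NULL). A sketch of the kind of combined batch an insert command could use; the table and column names here are illustrative, not taken from the question:
        -- Run as one batch so SCOPE_IDENTITY() still sees the insert's scope.
        INSERT INTO dbo.Person (FirstName, LastName) VALUES (@FirstName, @LastName);
        SELECT CAST(SCOPE_IDENTITY() AS int) AS PersonID;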

    Read the article

  • Eclipse Debug Mode disrupting SQL Server 2005 Stored Procedure access

    - by Sathish
    We have a strange problem in our team. When a developer is using Eclipse in debug mode, SQL Server 2005 blocks other developers from accessing a stored procedure. A debug session typically involves opening a Hibernate session to persist an entity, which may call a stored procedure used for primary key generation. Debugging is done in business-logic code and rarely in the JDBC stored procedure call. Is there any way to configure SQL Server or the stored procedure so that other developers are not blocked?
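
    It may help to confirm exactly which session is doing the blocking while the debugger is paused; typically it is the suspended session's open transaction still holding locks on the key-generation table. A diagnostic query against the SQL Server 2005 dynamic management views (requires VIEW SERVER STATE permission):
        -- List requests that are currently waiting on another session,
        -- together with the statement they are trying to run.
        SELECT r.session_id,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time,
               t.text AS running_sql
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0;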

    Read the article

  • FILESTREAM in SQL Server 2008 Express

    - by Xaitec
    I tried to get it to work but I never seem to have any luck; I got a code snippet from a blog and still no dice. This is the code:
        EXEC sp_configure filestream_access_level, 1
        GO
        RECONFIGURE
        GO
        CREATE DATABASE NorthPole
        ON PRIMARY
            ( NAME = NorthPoleDB, FILENAME = 'C:\Temp\NP\NorthPoleDB.mdf' ),
        FILEGROUP NorthPoleFS CONTAINS FILESTREAM
            ( NAME = NorthPoleFS, FILENAME = 'C:\Temp\NP\NorthPoleFS' )
        LOG ON
            ( NAME = NorthPoleLOG, FILENAME = 'C:\Temp\NP\NorthPoleLOG.ldf' )
        GO
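
    Note that sp_configure only sets the T-SQL access level; FILESTREAM also has to be enabled for the instance in SQL Server Configuration Manager, the C:\Temp\NP folder must already exist, and the NorthPoleFS data container must not exist yet. A quick check of what the instance actually reports:
        -- 0 = disabled; higher values enable T-SQL and Win32 streaming access.
        SELECT SERVERPROPERTY('FilestreamConfiguredLevel') AS ConfiguredLevel,
               SERVERPROPERTY('FilestreamEffectiveLevel')  AS EffectiveLevel,
               SERVERPROPERTY('FilestreamShareName')       AS ShareName;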

    Read the article

  • Fast retrieval from a MySQL DB

    - by trojanwarrior3000
    I have a table of users - it contains millions of rows (user-id is the primary key). I just want to retrieve the user-id and joining date of each user. Using "select user-id, joining date from table user" takes a lot of time. Is there a faster way to query/retrieve the same data from this table?
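
    One thing to try, assuming InnoDB (the table and column names below are normalized guesses at the ones in the question): a secondary index on the joining date acts as a covering index for this query, because InnoDB secondary indexes implicitly carry the primary key, so the server can answer from the much narrower index instead of reading the full rows:
        -- A secondary index on joining_date implicitly contains user_id (the PK),
        -- so it covers SELECT user_id, joining_date.
        CREATE INDEX idx_users_joining_date ON users (joining_date);

        -- EXPLAIN should then show "Using index" (an index-only scan).
        EXPLAIN SELECT user_id, joining_date FROM users;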

    Read the article

  • CDNs and domains

    - by Martind
    Hi all! A lot of big websites (Facebook etc.) are setting up CDNs for their content. Now I notice that these CDNs are not always on the original domain. Example: Facebook pictures are on "photos-a.ak.fbcdn.net". Why is that? Is there a performance gain in not having lots of subdomains on the "primary" domain (facebook.com)?

    Read the article

  • AUTOINCREMENT in Access SQL is not working

    - by Thunder
    How can I create a table with an autoincrement column in Access? Here is what I have been doing, but it is not working:
        CREATE TABLE People_User_Master(
            Id INTEGER primary key AUTOINCREMENT,
            Name varchar(50),
            LastName varchar(50),
            Userid varchar(50) unique,
            Email varchar(50),
            Phone varchar(50),
            Pw varchar(50),
            fk_Group int,
            Address varchar(150)
        )
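
    In Jet/Access DDL the autonumber is expressed as the column's type (COUNTER, or its synonym AUTOINCREMENT), not as a modifier after INTEGER, and the keyword is generally only accepted when the statement is executed through ADO/OLE DB (ANSI-92 mode) rather than the classic query designer. A sketch of the statement rewritten that way:
        CREATE TABLE People_User_Master (
            Id       COUNTER PRIMARY KEY,   -- COUNTER/AUTOINCREMENT is the column type itself
            Name     VARCHAR(50),
            LastName VARCHAR(50),
            Userid   VARCHAR(50) UNIQUE,
            Email    VARCHAR(50),
            Phone    VARCHAR(50),
            Pw       VARCHAR(50),
            fk_Group INT,
            Address  VARCHAR(150)
        )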

    Read the article

  • Default Value or Binding in "Transfer SQL Server Object Task"

    - by Kronass
    Hi, I want to move 500 tables, with their data and constraints, from one database to another. All of the tables have a column with a default value. I used SSIS with the "Transfer SQL Server Object Task" and chose to copy all tables, copy data and copy primary keys; it copies the tables, but not the default bindings. I tried the CopyAllDRIObjects property in SQL Server 2008, but got the same result. How can I copy all tables from one database to another with their data while maintaining their constraints?
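
    As a workaround, the default constraints can be scripted out of the source database and replayed on the destination after the transfer. A sketch against the SQL Server 2005/2008 catalog views: run it in the source database, then execute the generated statements on the target:
        -- Generate one ALTER TABLE ... ADD CONSTRAINT ... DEFAULT statement
        -- per default constraint in the source database.
        SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(dc.parent_object_id))
               + '.' + QUOTENAME(OBJECT_NAME(dc.parent_object_id))
               + ' ADD CONSTRAINT ' + QUOTENAME(dc.name)
               + ' DEFAULT ' + dc.definition
               + ' FOR ' + QUOTENAME(c.name) + ';'
        FROM sys.default_constraints AS dc
        JOIN sys.columns AS c
          ON c.object_id = dc.parent_object_id
         AND c.column_id = dc.parent_column_id;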

    Read the article

  • Delete all but minimal values, based on two columns, in a SQL Server table

    - by sqlill
    How do I write a statement to accomplish the following? Let's say a table has 2 columns (both are nvarchar) with the following data:
        col1   col2
        10000  10
        10000  20
        10001  10
        10002  30
        10002  40
        10002  50
    I'd like to keep only the following data:
        col1   col2
        10000  10
        10001  10
        10002  30
    thus removing the duplicates based on the second column's values (neither of the columns is a primary key), keeping only those records with the minimal value in the second column. How do I accomplish this?
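
    Assuming the nvarchar values are all numeric and the table is called dbo.MyTable (a guess, since the question does not name it), one sketch in T-SQL: cast before comparing so '9' does not sort above '10', and delete every row whose col2 exceeds the per-col1 minimum. Rows that tie exactly on the minimum value would still survive, since neither column is a key:
        DELETE t
        FROM dbo.MyTable AS t
        WHERE CAST(t.col2 AS int) > (
              SELECT MIN(CAST(t2.col2 AS int))
              FROM dbo.MyTable AS t2
              WHERE t2.col1 = t.col1
        );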

    Read the article

  • Email to be sent out from a dedicated server with different IP

    - by ToughPal
    We have three domains hosted on one dedicated server, each with its own dedicated IP:
    Domain A - has the server's primary IP address (the default server IP)
    Domain B - has its own IP address
    Domain C - has its own IP address
    If an email goes out from Domain B, it uses the Domain A IP address for the outgoing connection, and this makes emails sent from Domain B via PHP go straight to the Gmail spam box, etc. Is there any way to change the source IP depending on which domain the email originates from in PHP? What should we change to fix this?

    Read the article

  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so: CREATE TABLE AddressLookup ( ID INTEGER PRIMARY KEY AUTOINCREMENT, Address TEXT UNIQUE ); CREATE TABLE EmailInfo ( MessageID INTEGER PRIMARY KEY AUTOINCREMENT, ToAddrRef INTEGER REFERENCES AddressLookup(ID), FromAddrRef INTEGER REFERENCES AddressLookup(ID) /* Additional columns omitted for brevity. */ ); And for convenience, a view to join these tables: CREATE VIEW EmailView AS SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr FROM EmailInfo LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID) LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID); In order to be able to use this view as if it were a regular table, I've made some triggers: CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView BEGIN DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID; END; CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView BEGIN INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr); INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr); INSERT INTO EmailInfo SELECT NEW.MessageID, A1.ID, A2.ID FROM AddressLookup A1, AddressLookup A2 WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr; END; CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView BEGIN UPDATE EmailInfo SET MessageID = NEW.MessageID WHERE MessageID = OLD.MessageID; REPLACE INTO EmailView SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr; END; The problem After: INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]'); INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]'); The updated rows contain: MessageID ToAddr FromAddr --------- ------ -------- 1 NULL [email protected] 2 [email protected] [email protected] There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented: An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then conflict handling policy of the outer statement is used instead. So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?
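
    Since the conflict policy of the outer INSERT OR REPLACE overrides the trigger's INSERT OR IGNORE, one workaround (a sketch against the same schema) is to write the address inserts so that no conflict can arise in the first place; when the address already exists the INSERT simply adds zero rows, so the outer REPLACE policy never gets a chance to delete and re-insert the AddressLookup row:
        CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView
        BEGIN
            -- Insert each address only if it is not already in the pool.
            INSERT INTO AddressLookup (Address)
            SELECT NEW.ToAddr
            WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.ToAddr);

            INSERT INTO AddressLookup (Address)
            SELECT NEW.FromAddr
            WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.FromAddr);

            INSERT INTO EmailInfo
            SELECT NEW.MessageID, A1.ID, A2.ID
            FROM AddressLookup A1, AddressLookup A2
            WHERE A1.Address = NEW.ToAddr
              AND A2.Address = NEW.FromAddr;
        END;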

    Read the article

  • Host a project on Github and Google Code

    - by Abhi Beckert
    Is it possible to have a project hosted on both GitHub and Google Code? I've been using Google Code for years, and recently started playing with GitHub. I like GitHub a lot, but there's also a long list of Google Code features I really miss. Is it possible/feasible to host a single project on both? Can I use GitHub as the primary repository for my source, but have all revisions automatically sent over to a Git repository on Google Code?

    Read the article

  • How to create a nonclustered index in CREATE TABLE

    - by isthatacode
        CREATE TABLE FavoriteDish (
            FavID int identity(1,1) primary key not null,
            DishID int references Dishes(DishID) not null,
            CelebrityName nvarchar(100) nonclustered not null
        )
    This results in: Incorrect syntax near the keyword 'nonclustered'. I referred to the MSDN help for the CREATE TABLE syntax, and I am not sure what's wrong here. Thanks for reading.
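
    In SQL Server (at least through 2008 R2), NONCLUSTERED at the column level is only valid as part of a PRIMARY KEY or UNIQUE constraint; a plain nonclustered index on CelebrityName has to be a separate CREATE INDEX statement (inline INDEX definitions inside CREATE TABLE only appeared in much later versions). A sketch:
        CREATE TABLE FavoriteDish (
            FavID         int IDENTITY(1,1) NOT NULL PRIMARY KEY,
            DishID        int NOT NULL REFERENCES Dishes (DishID),
            CelebrityName nvarchar(100) NOT NULL
        );

        -- The nonclustered index is created separately, not as part of the
        -- column definition.
        CREATE NONCLUSTERED INDEX IX_FavoriteDish_CelebrityName
            ON FavoriteDish (CelebrityName);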

    Read the article
