Search Results

Search found 955 results on 39 pages for 'inserts'.

  • .NET TDD with a Database and ADO.NET Entity Framework - Integration Tests

    - by Brian
    Hello, I'm using the ADO.NET Entity Framework, with an AdventureWorks database attached to my local database server. For unit testing, what approaches have people taken to work with a database? Obviously, the database has to be in a pre-defined state so that the tests have some isolation from each other... so I need to be able to run through the inserts and updates, then roll back either between tests or after the batch of tests is done. Any advice? Thanks.
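
    One common approach (a sketch, not necessarily what the poster settled on): open a TransactionScope before each test and dispose it without calling Complete(), so every insert/update the test performs is rolled back automatically. MSTest attributes shown; the idea is the same under NUnit.

        using System.Transactions;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class AdventureWorksIntegrationTests
        {
            private TransactionScope _scope;

            [TestInitialize]
            public void Setup()
            {
                // every test runs inside its own ambient transaction
                _scope = new TransactionScope();
            }

            [TestCleanup]
            public void Teardown()
            {
                // disposing without Complete() rolls the test's work back
                _scope.Dispose();
            }
        }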

  • In VisualStudio 2008, emacs mode - how can you enable "overwrite"?

    - by Abby Fichtner
    Using Visual Studio 2008, I have the emacs keyboard mapping scheme enabled. If I select text and try to paste over it, it INSERTS the new text rather than replacing it. Also, if I select text and hit DELETE, it deletes the first character AFTER the selected text (just as if I didn't have any text selected). Does anyone know how to fix this so that I get the standard Windows behavior? That is: if I select text and paste over it, it replaces the selected text with what I pasted in; if I select text and hit the DELETE key, it actually deletes the text I have selected. Thanks! Abby

  • Identity column SQL Server 2005 inserting same value twice

    - by DannykPowell
    I have a stored procedure that inserts into a table (where there is an identity column that is not the primary key -- the PK is inserted initially using the date/time to generate a unique value). We then use SCOPE_IDENTITY() to get the value inserted, then there is some logic to generate the primary key field value based on this value, which is then updated back to the table. In some situations the stored procedure is called simultaneously by more than one process, resulting in "Violation of PRIMARY KEY constraint..." errors. This would seem to indicate that the identity column is allowing the same number to be inserted for more than one record. First question: how is this possible? Second question: how to stop it... there's no error handling currently, so I'm going to add some try/catch logic -- but I would like to understand the problem fully to deal with it properly.
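
    On the first question: an identity column won't hand the same number to two sessions, so the duplicate more plausibly comes from two simultaneous calls generating the same date/time-based PK value before either has rewritten it. A sketch of one way to remove that collision (table and column names hypothetical): seed the PK with a guaranteed-unique placeholder instead of the date/time, and do the insert and the key rewrite in one transaction, reading the identity through the OUTPUT clause rather than a separate SCOPE_IDENTITY() step.

        BEGIN TRY
            BEGIN TRANSACTION;

            DECLARE @new TABLE (RowNum INT);

            -- NEWID() placeholder cannot collide the way GETDATE() can
            INSERT INTO dbo.Orders (OrderKey, CreatedAt)
            OUTPUT INSERTED.RowNum INTO @new
            VALUES (CAST(NEWID() AS VARCHAR(36)), GETDATE());

            -- rewrite the PK from the identity value, atomically with the insert
            UPDATE o
            SET    o.OrderKey = 'ORD-' + CAST(n.RowNum AS VARCHAR(10))
            FROM   dbo.Orders o
            JOIN   @new n ON o.RowNum = n.RowNum;

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            DECLARE @msg NVARCHAR(2048);
            SELECT @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);  -- THROW is not available on SQL Server 2005
        END CATCH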

  • getting mysql_insert_id() while using ON DUPLICATE KEY UPDATE with PHP

    - by julio
    Hi -- I've found a few answers for this using MySQL alone, but I was hoping someone could show me a way to get the ID of the last inserted or updated row of a MySQL DB when using PHP to handle the inserts/updates. Currently I have something like this, where column3 is a unique key, and there's also an id column that's an auto-incremented primary key:

        $query = "INSERT INTO TABLE (column1, column2, column3)
                  VALUES (value1, value2, value3)
                  ON DUPLICATE KEY UPDATE column1=value1, column2=value2, column3=value3";
        mysql_query($query);
        $my_id = mysql_insert_id();

    $my_id is correct on INSERT, but incorrect when it's updating a row (ON DUPLICATE KEY UPDATE). I have seen several posts advising that you use something like INSERT INTO table (a) VALUES (0) ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id) to get a valid ID value when ON DUPLICATE KEY is invoked -- but will this return that valid ID to the PHP mysql_insert_id() function? Thanks for any advice.
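
    It should: LAST_INSERT_ID(expr) seeds the per-connection value that PHP's mysql_insert_id() reads back. A sketch with the trick folded into the query above (column names from the question, literal values illustrative):

        $query = "INSERT INTO mytable (column1, column2, column3)
                  VALUES ('value1', 'value2', 'value3')
                  ON DUPLICATE KEY UPDATE
                      column1 = VALUES(column1),
                      column2 = VALUES(column2),
                      id      = LAST_INSERT_ID(id)";
        mysql_query($query);
        $my_id = mysql_insert_id(); // valid on both the INSERT and UPDATE paths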

  • Drupal Views how to filter items overlapping a date range

    - by Marcos Buarque
    Hi, in Drupal I have used CCK to add a datetime field to my custom data type. It inserts start date and end date fields. That is what I want. Now, I want Views to filter and show only the items whose date range (start date and end date) overlaps today's date -- that is, start date <= today AND end date >= today. Any ideas on how to set this up in Views? What I find strange is that the date fields of my custom content type don't seem to appear in the Views list when I am trying to add a filter. Thanks.

  • SQL Server CE 3.5 SP1 Stored Procedures

    - by Robert
    I have been tasked with taking an existing WinForms application and modifying it to work in an "occasionally-connected" mode. This is to be achieved with SQL Server CE 3.5 on a user's laptop, syncing the server and client via either SQL Server Merge Replication or Microsoft's Sync Framework. Currently, the application connects to our SQL Server and retrieves, inserts, and updates data using stored procedures. I have read that SQL Server CE does not support stored procedures. Does this mean that all my stored procedures will need to be converted to straight SQL statements, either in my code or as a query inside a TableAdapter? If this is true, what are my alternatives?
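
    Correct -- SQL Server CE executes only inline SQL, so each procedure body becomes a parameterized statement. A sketch of the shape that conversion takes (names hypothetical; conn is an open SqlCeConnection):

        // was: EXEC dbo.InsertCustomer @name, @city
        using (SqlCeCommand cmd = new SqlCeCommand(
            "INSERT INTO Customers (Name, City) VALUES (@name, @city)", conn))
        {
            cmd.Parameters.AddWithValue("@name", "Contoso");
            cmd.Parameters.AddWithValue("@city", "Seattle");
            cmd.ExecuteNonQuery();
        }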

  • Clustered index dilemma - ID or sort?

    - by richardtallent
    I have a table with two very important fields:

        id INT IDENTITY(1,1) PRIMARY KEY
        identifiersortcode VARCHAR(900)

    My app always sorts and pages search results in the UI based on identifiersortcode, but all table joins (and they are legion) are on the id field. (Aside: yes, the sort code really is that long. There's a strong BL reason.) Also, due to O/RM use, most SELECT statements are going to pull almost every column. Currently, the clustered index is on id, but I'm wondering if the TOP / ORDER BY portion of most queries would make identifiersortcode a more attractive option as the clustered key, even considering all of the table joins going on. Inserts on the table and changes to the identifiersortcode are limited enough that changing my clustered index shouldn't be a problem for insert/update operations. Trying to make the sort code's non-clustered index a covering index (using INCLUDE) is not a good option. There are a number of large columns, and some of them have a lot of update activity.
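
    For reference, the two layouts under discussion (a DDL sketch; the table name is hypothetical). One caution worth weighing: with Option B, every nonclustered index entry -- and every join lookup -- carries the up-to-900-byte clustering key instead of the 4-byte id, and 900 bytes is also SQL Server's maximum index key size, so the column only just fits either way.

        -- Option A (current): cluster on the join key
        CREATE TABLE dbo.Items (
            id                 INT IDENTITY(1,1) NOT NULL,
            identifiersortcode VARCHAR(900)      NOT NULL,
            CONSTRAINT PK_Items PRIMARY KEY CLUSTERED (id)
        );
        CREATE NONCLUSTERED INDEX IX_Items_Sort ON dbo.Items (identifiersortcode);

        -- Option B (considered): cluster on the sort key, keep id for joins
        --   CONSTRAINT PK_Items PRIMARY KEY NONCLUSTERED (id)
        --   CREATE CLUSTERED INDEX CX_Items_Sort ON dbo.Items (identifiersortcode);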

  • not returning anything from postgresql function?

    - by netllama
    Is it possible for a PostgreSQL plpgsql function to not return anything? I've created a function, and I don't need it to return anything at all, as it performs a complex SQL query and inserts the results of that query into another table (SELECT INTO ....). Thus, I have no need or interest in having the function return any output or value. Unfortunately, when I try to omit the RETURNS clause of the function declaration, I can't create the function.
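
    The usual answer is to keep the clause but declare RETURNS void -- the function then returns nothing to its caller. A minimal sketch (table and function names hypothetical):

        CREATE OR REPLACE FUNCTION archive_results() RETURNS void AS $$
        BEGIN
            INSERT INTO results_archive (col_a, col_b)
            SELECT col_a, col_b
            FROM   raw_results;
            -- a bare RETURN; is allowed here but not required
        END;
        $$ LANGUAGE plpgsql;

        SELECT archive_results();  -- invoked like any other function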

  • filter that uses elements from two arrays at the same time

    - by Gacek
    Let's assume we have two arrays of the same size, A and B. Now, we need a filter that, for a given mask size, selects elements from A but removes the central element of the mask and inserts the corresponding element from B there. So the 3x3 "pseudo mask" will look similar to this:

        A A A
        A B A
        A A A

    Doing something like this for an averaging filter is quite simple. We can compute the mean value of the elements from A without the central element, and then combine it in the proper proportion with the elements from B:

        h = ones(3,3);
        h(2,2) = 0;
        h = h / sum(h(:));
        A_ave = filter2(h, A);
        C = (8/9) * A_ave + (1/9) * B;

    But how to do something similar for a median filter (medfilt2, or even better, ordfilt2)?
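
    One way (a sketch; padarray needs the Image Processing Toolbox, otherwise pad manually): build the nine samples of the pseudo mask explicitly -- the eight A-neighbours plus the B centre -- and reduce along the third dimension. Replacing the median with a sort and picking the k-th plane gives the ordfilt2-style order statistic.

        [m, n] = size(A);
        P = padarray(A, [1 1], 'replicate');   % pad so shifted windows stay in range
        offsets = {[-1 -1], [-1 0], [-1 1], [0 -1], [0 1], [1 -1], [1 0], [1 1]};
        S = zeros(m, n, 9);
        for k = 1:8                            % the eight A-neighbours
            d = offsets{k};
            S(:,:,k) = P((2:m+1) + d(1), (2:n+1) + d(2));
        end
        S(:,:,9) = B;                          % the centre comes from B
        C = median(S, 3);                      % median over the mixed 3x3 mask
        % order-statistic variant: T = sort(S, 3); C = T(:,:,k0); for some k0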

  • Linux Kernel - Slab Allocator Question

    - by Drex
    I am playing around with the kernel and am looking at the kmem_cache files_cachep belonging to fork.c. It is created with the sizeof(files_struct). My question is this: I have altered files_struct and added an rb_root (red/black tree root) using the built-in functionality in linux/rbtree.h. I can properly insert values into this tree. However, at some point a segfault occurs, and GDB backtraces the following information:

        (gdb) backtrace
        #0  0x08066ad7 in page_ok (page=) at arch/um/os-Linux/sys-i386/task_size.c:31
        #1  0x08066bdf in os_get_top_address () at arch/um/os-Linux/sys-i386/task_size.c:100
        #2  0x0804a216 in linux_main (argc=1, argv=0xbfb05f14) at arch/um/kernel/um_arch.c:277
        #3  0x0804acdc in main (argc=1, argv=0xbfb05f14, envp=0xbfb05f1c) at arch/um/os-Linux/main.c:150

    I have spent many hours trying to figure out why there is a segfault given that the red/black tree inserts properly. I'm thinking it's a memory allocation issue with new processes made by fork() of a parent process. Could this be the case, and could it have something to do with kmem_cache files_cachep?

  • Can a user-chosen image be inserted directly into a JEditorPane?

    - by JIM
    What I am trying to do is open up a JFileChooser that filters jpeg, gif and png images, then get the user's selection and insert it into the JEditorPane. Can this be done, or am I attempting something impossible? Here is a sample of my program (insert is a JMenuItem and mainText is a JEditorPane):

        insert.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                JFileChooser imageChooser = new JFileChooser();
                imageChooser.setFileFilter(new FileNameExtensionFilter("Image Format", "jpg", "jpeg", "gif", "png"));
                int choice = imageChooser.showOpenDialog(mainText);
                if (choice == JFileChooser.APPROVE_OPTION) {
                    mainText.add(imageChooser.getSelectedFile());
                }
            }
        });

    What I tried to do is use the add method -- I know it's wrong, but just to give you an idea of what I'm trying to do. I'm sorry about the code formatting; I don't really know the conventions of what is considered good or bad style. Thank you very much.
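
    It can be done. One possible approach (a sketch, assuming mainText was created with setContentType("text/html") so its editor kit is an HTMLEditorKit): insert an <img> tag whose src points at the chosen file.

        import java.io.File;
        import javax.swing.JFileChooser;
        import javax.swing.filechooser.FileNameExtensionFilter;
        import javax.swing.text.html.HTML;
        import javax.swing.text.html.HTMLDocument;
        import javax.swing.text.html.HTMLEditorKit;

        // inside the ActionListener:
        JFileChooser imageChooser = new JFileChooser();
        imageChooser.setFileFilter(new FileNameExtensionFilter("Image Format", "jpg", "jpeg", "gif", "png"));
        if (imageChooser.showOpenDialog(mainText) == JFileChooser.APPROVE_OPTION) {
            File file = imageChooser.getSelectedFile();
            HTMLEditorKit kit = (HTMLEditorKit) mainText.getEditorKit();
            HTMLDocument doc = (HTMLDocument) mainText.getDocument();
            try {
                // splice an <img> element in at the caret position
                kit.insertHTML(doc, mainText.getCaretPosition(),
                               "<img src=\"" + file.toURI() + "\">", 0, 0, HTML.Tag.IMG);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }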

  • How to remove one instance of one string in PHP?

    - by Jane
    I have an open source editor on the CMS that I am making that automatically inserts a <br /> tag at the beginning of the post it submits to the database. This makes validation a pain, since even though there is no real text being submitted, the form still accepts the break tag as input and prevents the "Please enter some text" error from showing. So I tried to remove the opening break tag by filtering my input like this: substr($_POST['content'], 6); This works as long as the user doesn't press backspace a couple of times, which removes the break tag -- in which case the first 6 characters of the post get removed even if they are not a break tag. So how can I remove the first 6 characters of the input ONLY if those first 6 characters are the break tag? Also, I don't want to remove all break tags, only the one at the very beginning of the post.
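
    A sketch of a conditional strip: anchor the pattern to the start of the string and cap the replacement count at one, so only a leading break tag is touched. (Tolerating the <br> / <br/> spellings is an assumption about what the editor might emit.)

        $content = preg_replace('/^\s*<br\s*\/?>/i', '', $_POST['content'], 1);

    Or, sticking with plain string functions:

        $tag = '<br />';
        $content = $_POST['content'];
        if (strncmp($content, $tag, strlen($tag)) === 0) {
            $content = substr($content, strlen($tag)); // strip only a leading match
        }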

  • Force 'Replace Into' to use a certain index

    - by Bobby
    I have a MySQL (5.0) table with 3 columns which together are meant to act as a combined unique index:

        CREATE TABLE `test`.`table_a` (
          `Id` int(11) NOT NULL AUTO_INCREMENT,
          `field1` varchar(5) COLLATE latin1_swedish_ci NOT NULL DEFAULT '',
          `field2` varchar(5) COLLATE latin1_swedish_ci NOT NULL DEFAULT '',
          `field3` varchar(5) COLLATE latin1_swedish_ci NOT NULL DEFAULT '',
          PRIMARY KEY (`Id`),
          INDEX `IdxUnique` (`field1`(5),`field2`(5),`field3`(5))
        ) ENGINE=MyISAM;

    This table should be filled with a REPLACE INTO query:

        REPLACE INTO table_a (Field1, Field2, Field3) VALUES ("Test1", "Test2", "Test3")

    The behavior I'd like to see is that this query always overrides the previously inserted row, because IdxUnique is... ahm, triggered. But unfortunately, there's still the primary index, which seems to kick in and always inserts a new row. What I get (query executed 3 times):

        +---Id---+---Field1---+---Field2---+---Field3---+
        |      1 | Test1      | Test2      | Test3      |
        |      2 | Test1      | Test2      | Test3      |
        |      3 | Test1      | Test2      | Test3      |
        +--------+------------+------------+------------+

    What I want (query executed 3 times):

        +---Id---+---Field1---+---Field2---+---Field3---+
        |      3 | Test1      | Test2      | Test3      |
        +--------+------------+------------+------------+

    So, can I tell REPLACE INTO to use just a certain index, or to consider one 'more important' than another?
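
    You can't pick the index, but you shouldn't need to: REPLACE INTO (like INSERT ... ON DUPLICATE KEY UPDATE) only reacts to PRIMARY KEY and UNIQUE constraints, and the DDL above declares a plain INDEX, which enforces nothing. A sketch of the likely fix:

        ALTER TABLE table_a DROP INDEX IdxUnique;
        ALTER TABLE table_a ADD UNIQUE INDEX IdxUnique (field1, field2, field3);

    With the constraint in place, each REPLACE deletes the clashing row and inserts the new one (so Id keeps growing, as in the desired output).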

  • Major performance difference between two Oracle database instances

    - by jrdioko
    I am working with two instances of an Oracle database, call them one and two. two is running on better hardware (hard disk, memory, CPU) than one, and two is one minor version behind one in terms of Oracle version (both are 11g). Both have the exact same table table_name with exactly the same indexes defined. I load 500,000 identical rows into table_name on both instances. I then run, on both instances: delete from table_name; This command takes 30 seconds to complete on one and 40 minutes to complete on two. Doing INSERTs and UPDATEs on the two tables has similar performance differences. Does anyone have any suggestions on what could have such a drastic impact on performance between the two databases?

  • msysGit: Why does git log output blank lines?

    - by Sam
    It appears to insert fewer blank lines the closer I type the command to the bottom of the terminal window. If I type it at the top of the terminal window, it inserts nearly a full window height of blank lines; if I type it at the very bottom, no blank lines are inserted. It seems like the pager program is pushing output to the bottom of the terminal window, but I want the output to be right below my command or at the top, like in Linux git. I can get the expected behavior by using git --no-pager log, but what if I want to use a pager?
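
    A possible fix, assuming the pager is the bundled less: the -X flag stops less from sending the termcap init/deinit sequences that reposition and clear the screen, and -F makes it exit immediately when the output fits on one screen.

        git config --global core.pager "less -FRX"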

  • Fastest way to compress a database or .bak file and transfer it

    - by Nai
    As per the question title. I wonder if there are special programmes or commands that make zipping up a .bak file and transferring it super quick. I read about xp_cmdshell here, but I'm not sure about the speed. My .bak file is about 12 gigs at the moment. Related to this is the possibility of using Red Gate's SQL Data Compare to just transfer the differential data across the network pipeline, but I have never used SQL Data Compare before and I'm not sure how it goes about doing INSERTs on tables with primary keys and such. Also, not sure about the speed. Does anyone have any experience with this programme or similar programmes? Cheers!
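
    One low-tech option (a sketch, assuming 7-Zip is installed; paths and file names illustrative): at this size, a fast compression level usually beats a high ratio, since what matters is total compress-plus-transfer time.

        rem -mx=1 selects the fastest compression level
        "C:\Program Files\7-Zip\7z.exe" a -mx=1 db_backup.7z db_backup.bak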

  • ibatis domain modelling

    - by josh
    Hi team; I am working on the domain model for a project. I have a class named User that has a class named UserType as one of its properties. I know that when I want to select all users, I will use joins to pick up all the corresponding user types. How do I do inserts? Do I have to write a type handler for UserType, or can I do something like:

        INSERT INTO users(... usertype_id ...) VALUES(... #{usertype.usertype_id}...)

    Please help; I have spent the whole day trying to figure this out. I am using iBATIS 3.0 and am new to iBATIS. Regards, Josh
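
    The #{} syntax can navigate nested properties, so no separate type handler is needed just to write the foreign key. A sketch of an iBATIS 3 XML mapper entry (statement id, columns, and property names are hypothetical):

        <insert id="insertUser" parameterType="User">
            INSERT INTO users (name, usertype_id)
            VALUES (#{name}, #{usertype.usertype_id})
        </insert>

    This assumes User exposes a usertype property whose object carries usertype_id.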

  • Posting comment with ajax and jquery

    - by Steve
    I want to display the posted comment once the user comments, adding it under all the others just as Facebook does. I have this:

        // intercept the submit event
        $('#CommentAddForm').submit(function() {
            alert("entro");
            alert($(this).attr('action'));
            // send the form using AJAX
            $.ajax({
                type: 'POST',
                url: $(this).attr('action'),
                data: $(this).serialize(),
                // show a message with the response from PHP
                success: function(data) {
                    $('#result').html(//????????????);
                }
            });
            return false;
        });

    But I don't know very well how it works, and I don't know what I should write in the line $('#result').html(//????????????); The url option contains the route which inserts the comment in the DB, and it works well. Any idea? Thanks. By the way, I have been reading this answer: Ajax/jQuery Comment System. But I still don't get it.
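
    One simple contract (a sketch; the #comments container and the server behavior are assumptions): have the PHP route echo back the rendered HTML of the comment it just saved, and append that response to the existing comments.

        success: function(data) {
            // data is the new comment's HTML as echoed by PHP
            $('#comments').append(data);
        }

    If the route returns JSON instead, add dataType: 'json' to the $.ajax options and build the markup client-side from the fields.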

  • MySQL INSERT data does not get stored in proper db, only temporary?

    - by greye
    I'm having trouble with MySQL or Python and can't seem to isolate the problem. INSERTs only seem to last the run of the script and are not stored in the database. I have this script:

        import MySQLdb

        db = MySQLdb.connect(host="localhost", user="user", passwd="password", db="example")
        dbcursor = db.cursor()

        dbcursor.execute("select * from tablename")
        temp = dbcursor.fetchall()
        print 'before: ' + str(temp)

        dbcursor.execute('INSERT INTO tablename (data1, data2, data3) VALUES ("1", "a", "b")')

        dbcursor.execute("select * from tablename")
        temp = dbcursor.fetchall()
        print 'after: ' + str(temp)

    The first time I run it I get the expected output:

        before: ()
        after: ((1L, 'a', 'b'),)

    The problem is that if I run it again, the "before" comes out empty when it should already contain the entry, and the "after" doesn't break (data1 is the primary key):

        before: ()
        after: ((1L, 'a', 'b'),)

    If I try running the insert command twice in the same script, it will break ("Duplicate entry for PRIMARY KEY"). Any idea what might be happening here?
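
    The likely explanation: per the Python DB-API, MySQLdb opens a transaction implicitly with autocommit off, so on a transactional table (InnoDB) the INSERT is rolled back when the script exits without committing -- visible within the run, gone afterwards. A sketch of the fix:

        dbcursor.execute('INSERT INTO tablename (data1, data2, data3) VALUES ("1", "a", "b")')
        db.commit()  # make the INSERT permanent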

  • Return database messages on successful SQL execution when using ADO

    - by peacedog
    I'm working on a legacy VB6 app here at work, and it has been a long time since I've looked at VB6 or ADO. One thing the app does is execute SQL tasks and then output the success/failure into an XML file. If there is an error, it inserts the text into the task node. What I have been asked to do is try to do the same with the other mundane messages that result from successfully executed tasks, like "(323 row(s) affected)". There is no Command object being used; it's just an ADODB.Connection object. Here is the gist of the code:

        Dim sqlStatement As String
        sqlStatement = ... ' SQL for task

        Dim sqlConn As ADODB.Connection
        Set sqlConn = ... ' connection magic

        sqlConn.Execute sqlStatement, , adExecuteNoRecords

    What is the best way for me to capture the non-error messages so I can output them? Or is it even possible?
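
    For the row counts specifically, Execute's second argument already carries the number behind "(n row(s) affected)" -- a sketch (the XML-writing helper is hypothetical):

        Dim recordsAffected As Long
        sqlConn.Execute sqlStatement, recordsAffected, adExecuteNoRecords
        WriteTaskNode "OK: " & recordsAffected & " row(s) affected" ' WriteTaskNode is your XML routine

    For messages that aren't row counts (PRINT output and the like), declaring the connection WithEvents and handling its InfoMessage event is the usual ADO avenue.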

  • how to have separate keys per record in mongo_mapper + Rails

    - by Vitaly Kushner
    When I'm adding a record in MongoDB I can specify whatever keys I want and it will store them in the DB. The problem is that it will remember those keys for the next time I insert another record. So, for example, if I do the following:

        Product.create :foo => 123

    and then:

        Product.create :bar => 456

    I get a :foo => nil field in the 2nd record. This is definitely not a limitation of MongoDB itself, since if I restart the Rails console and create yet another record with a different set of columns, it will not add the columns from the first two records. So it seems like MongoMapper remembers all the keys used and inserts them all into all records, even if values are not provided. The question is obviously: how do I disable this crazy attribute explosion? Basically I want only the 'permanent' keys that I specify in the model to be in every record, but all the 'extra' attributes to be specified per record and not to mess up the subsequent records.

  • Carrierwave upload to a tmp dir before saving to database

    - by user827570
    I'm trying to build a visual editor where users can click an image and are presented with an image upload form; once the upload is done, I use AJAX to return the image and insert it back into the page. But the above method inserts the image straight into the database, and I want users to be able to visualize the image before it is inserted. So I was wondering if the image, using CarrierWave, could be uploaded to a temp location, sent back to the user, and then, when the user saves the page, moved into the permanent location. Here's what I have so far:

        def edit_image
          @page = Page.find(1)
          @page.update_attributes(params[:page])
          @page.save
          return :text => @page.file
        end

    But this is what I want to achieve:

        def temp_image
          # uploads received image to a temp location
          # returns image to the user
        end

    And once the user clicks save:

        def save
          # moves the file in the temp folder to the permanent location
        end

    Cheers
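
    CarrierWave has a built-in two-step flow that matches this. A sketch (assuming mount_uploader :file on Page; names hypothetical): assigning an upload without saving only caches it under the uploader's tmp directory, and the generated file_cache attribute lets a later request promote that cached file to permanent storage.

        def temp_image
          @page = Page.find(1)
          @page.file = params[:page][:file]             # cached in tmp, not stored yet
          render :json => { :url   => @page.file.url,   # preview URL for the user
                            :cache => @page.file_cache }
        end

        def save
          @page = Page.find(1)
          @page.file_cache = params[:page][:file_cache] # re-attach the cached upload
          @page.save                                    # promotes it to permanent storage
        end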

  • Determine if on product page programmatically in Magento

    - by dfondente
    I want to insert tracking codes on all of the pages of a Magento site, and need to use a different syntax if the page is a CMS page, a category browsing page, or a product view page. I have a custom module set up with a block that inserts a generic tracking code on each page for now. From within the block, how can I distinguish between CMS pages, category pages, and product pages? I started with: Mage::app()->getRequest(); I can see that Mage::app()->getRequest()->getParam('id'); returns the product or category ID on product and category pages, but doesn't distinguish between those page types. Mage::app()->getRequest()->getRouteName(); return "cms" for CMS pages, but returns "catalog" for both category browsing and product view pages, so I can't use that to tell category and product pages apart. Is there some indicator in the request I can use safely? Or is there a better way to accomplish my goal of different tracking codes for different page types?
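
    One well-worn approach (a sketch for Magento 1.x): ask the front controller for the full action name, which distinguishes the three page types where the route name alone cannot.

        $action = Mage::app()->getFront()->getAction()->getFullActionName();

        if ($action == 'cms_index_index' || $action == 'cms_page_view') {
            // CMS page (home page or a static page)
        } elseif ($action == 'catalog_category_view') {
            // category browsing page
        } elseif ($action == 'catalog_product_view') {
            // product view page
        }

    One caveat: getAction() can be null very early in the request cycle, so guard the call if the block renders unusually early.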

  • How to implement Auto_Increment per User, on the same table?

    - by Jonas
    I would like to have multiple users that share the same tables in the database, but have one auto_increment value per user. I will use an embedded database, JavaDB, and as far as I know it doesn't support this functionality. How can I implement it? Should I implement a trigger on inserts that looks up the user's last inserted row and then adds one, or is there a better alternative? Or is it better to implement this in the application code? Or is this just a bad idea? I think this is easier to maintain than creating new tables for every user. Example:

        table
        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        |  1 |           1 |       2 | 3454 |
        |  2 |           2 |       2 | 6567 |
        |  3 |           1 |       3 | 6788 |
        |  4 |           3 |       2 | 1133 |
        |  5 |           4 |       2 | 4534 |
        |  6 |           2 |       3 | 4366 |
        |  7 |           3 |       3 | 7887 |
        +----+-------------+---------+------+

        SELECT * FROM table WHERE USER_ID = 3
        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        |  3 |           1 |       3 | 6788 |
        |  6 |           2 |       3 | 4366 |
        |  7 |           3 |       3 | 7887 |
        +----+-------------+---------+------+

        SELECT * FROM table WHERE USER_ID = 2
        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        |  1 |           1 |       2 | 3454 |
        |  2 |           2 |       2 | 6567 |
        |  4 |           3 |       2 | 1133 |
        |  5 |           4 |       2 | 4534 |
        +----+-------------+---------+------+
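
    A sketch of the application-level variant (the table is renamed user_rows here, since TABLE is a reserved word; the user id and data are literal for illustration): compute the per-user counter and insert in a single statement, run inside a transaction at an isolation level strict enough that two concurrent sessions can't read the same MAX.

        INSERT INTO user_rows (id_per_user, user_id, data)
        SELECT COALESCE(MAX(id_per_user), 0) + 1, 3, 7887
        FROM user_rows
        WHERE user_id = 3;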

  • Database file is inexplicably locked during SQLite commit

    - by sweeney
    Hello, I'm performing a large number of INSERTs to a SQLite database. I'm using just one thread. I batch the writes to improve performance and have a bit of security in case of a crash. Basically I cache up a bunch of data in memory and then, when I deem appropriate, I loop over all of that data and perform the INSERTs. The code for this is shown below:

        public void Commit()
        {
            using (SQLiteConnection conn = new SQLiteConnection(this.connString))
            {
                conn.Open();
                using (SQLiteTransaction trans = conn.BeginTransaction())
                {
                    using (SQLiteCommand command = conn.CreateCommand())
                    {
                        command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
                        command.Parameters.Add(this.col1Param);
                        command.Parameters.Add(this.col2Param);
                        foreach (Data o in this.dataTemp)
                        {
                            this.col1Param.Value = o.Col1Prop;
                            this.col2Param.Value = o.Col2Prop;
                            command.ExecuteNonQuery();
                        }
                    }
                    this.TryHandleCommit(trans);
                }
                conn.Close();
            }
        }

    I now employ the following gimmick to get the thing to eventually work:

        private void TryHandleCommit(SQLiteTransaction trans)
        {
            try
            {
                trans.Commit();
            }
            catch (Exception e)
            {
                Console.WriteLine("Trying again...");
                this.TryHandleCommit(trans);
            }
        }

    I create my DB like so:

        public DataBase(String path)
        {
            // build connection string
            SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder();
            connString.DataSource = path;
            connString.Version = 3;
            connString.DefaultTimeout = 5;
            connString.JournalMode = SQLiteJournalModeEnum.Persist;
            connString.UseUTF16Encoding = true;

            using (connection = new SQLiteConnection(connString.ToString()))
            {
                // check for existence of db
                FileInfo f = new FileInfo(path);
                if (!f.Exists) // build new blank db
                {
                    SQLiteConnection.CreateFile(path);
                    connection.Open();
                    using (SQLiteTransaction trans = connection.BeginTransaction())
                    {
                        using (SQLiteCommand command = connection.CreateCommand())
                        {
                            command.CommandText = DataBase.CREATE_MATCHES;
                            command.ExecuteNonQuery();
                            command.CommandText = DataBase.CREATE_STRING_DATA;
                            command.ExecuteNonQuery();
                            // TODO: add logging
                        }
                        trans.Commit();
                    }
                    connection.Close();
                }
            }
        }

    I then export the connection string and use it to obtain new connections in different parts of the program. At seemingly random intervals, though at far too great a rate to ignore or otherwise work around, I get an unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur prior to then. This does not always happen; sometimes the whole thing runs without a hitch. Some environment details:

    - No reads are being performed on these files before the commits finish.
    - I have the very latest SQLite binary.
    - I'm compiling for .NET 2.0, using VS 2008.
    - The db is a local file.
    - All of this activity is encapsulated within one thread / process.
    - Virus protection is off (though I think that was only relevant if you were connecting over a network?).

    As per Scotsman's post I have implemented the following changes:

    - Journal Mode set to Persist
    - DB files stored in C:\Docs + Settings\ApplicationData via the System.Windows.Forms.Application.AppData windows call
    - No inner exception
    - Witnessed on two distinct machines (albeit very similar hardware and software)
    - Have been running Process Monitor -- no extraneous processes are attaching themselves to the DB files; the problem is definitely in my code...

    Does anyone have any idea what's going on here? I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question!

    - brian

    UPDATES: Thanks for the suggestions so far! I've implemented many of the suggested changes. I feel that we are getting closer to the answer... however... The code above technically works, but it is non-deterministic! It is not guaranteed to do anything aside from spin in neutral forever. In practice it seems to work somewhere between the 1st and 10th iteration. If I batch my commits at a reasonable interval, damage will be mitigated, but I really do not want to leave things in this state... More suggestions welcome!
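
    Independent of the root cause, the recursive retry is what turns a transient lock into that endless spin. A sketch of a bounded, non-recursive variant (whether a failed Commit can safely be retried on the same transaction isn't guaranteed -- the point is to cap the attempts and surface the error for logging):

        private void TryHandleCommit(SQLiteTransaction trans)
        {
            const int maxAttempts = 5;
            for (int attempt = 1; attempt <= maxAttempts; attempt++)
            {
                try
                {
                    trans.Commit();
                    return;
                }
                catch (SQLiteException)
                {
                    if (attempt == maxAttempts)
                        throw;                                    // give up loudly
                    System.Threading.Thread.Sleep(200 * attempt); // linear backoff
                }
            }
        }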
