Search Results

Search found 10691 results on 428 pages for 'batch insert'.

Page 161/428 | < Previous Page | 157 158 159 160 161 162 163 164 165 166 167 168  | Next Page >

  • Creating an Excel file using .NET (C#) - Problems with column headers!

    - by tsocks
    Hello, I want to create & fill a .xls file using ADO.NET or LINQ, but I do not want the column names to appear in the first row. I just want to insert rows starting at row 1. I know I have to define the columns first, but... is there a way to 'hide' those column headers? The problem is that the first row of my spreadsheet must contain only two values (one in A1 and the other in B1), while the remaining rows will hold more than two values (up to 15 columns). I'm open to suggestions/hacks/tricks even if they aren't the cleanest way of doing this. Thanks!
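
    A minimal sketch of one way this is commonly attempted with the Jet OLE DB provider: setting HDR=No in the connection string makes the provider treat row 1 as data (columns are addressed as F1, F2, ...) rather than as a header row. The file path and sheet name below are assumptions for illustration, and the sheet is assumed to already exist; whether a header row is fully avoided when the sheet itself is created through Jet is something to verify.

        using System.Data.OleDb;

        class ExcelNoHeaderSketch
        {
            static void Main()
            {
                // HDR=No: no header row is expected; columns are exposed as F1, F2, ...
                const string connectionString =
                    "Provider=Microsoft.Jet.OLEDB.4.0;" +
                    "Data Source=C:\\temp\\output.xls;" +
                    "Extended Properties=\"Excel 8.0;HDR=No\"";

                using (var connection = new OleDbConnection(connectionString))
                {
                    connection.Open();

                    // First row: only two values (intended for A1 and B1).
                    using (var insert = new OleDbCommand(
                        "INSERT INTO [Sheet1$] (F1, F2) VALUES (?, ?)", connection))
                    {
                        insert.Parameters.AddWithValue("a1", "ValueForA1");
                        insert.Parameters.AddWithValue("b1", "ValueForB1");
                        insert.ExecuteNonQuery();
                    }

                    // Subsequent rows can target more columns (F1 .. F15) the same way.
                }
            }
        }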

    Read the article

  • .net Studio Local Database

    - by testerwpf
    Hello everyone, I am designing a local database in .NET with WPF as the GUI. I have added a new database and a new table. Through the TableAdapter I generated two statements (one SELECT and one INSERT); I insert name and first name (the id is auto-generated). It works fine: I can display the table in a DataGrid (WPF Toolkit) and add new items (name, first name), and when I close and restart the application everything is fine (the data in the table is stored). BUT when I try to preview data in my database dataset (where my adapters exist), no data is displayed and then the table gets deleted... why?

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                this.InitializeComponent();
                PlayerTableAdapter objPlayerTableAdapter = new PlayerTableAdapter();
                objDataGridResults.ItemsSource = objPlayerTableAdapter.GetDataAllPlayer();
            }

            // Button click event
            private void m_voidAddPlayer(object sender, System.Windows.RoutedEventArgs e)
            {
                PlayerTableAdapter objPlayerTableAdapter = new PlayerTableAdapter();
                objPlayerTableAdapter.InsertQueryPlayer(
                    objTextBoxPlayerName.Text.ToString(),
                    objTextBoxPlayerFirstName.Text.ToString());
                objDataGridResults.ItemsSource = objPlayerTableAdapter.GetDataAllPlayer();
            }
        }

    Read the article

  • Trying to use the 7-zip self extracting archive GUI and it fails [closed]

    - by djangofan
    Trying to use the 7-zip self extracting archive GUI and it fails. When I use the "Self Extracting Installer" option, the installer runs my batch file at the end, and it appears to extract all the files just before doing so, but after extraction the files are nowhere to be found (except inside the .7z archive). Any idea why this occurs? https://code.google.com/p/sfx-creator/

    Read the article

  • Is there any way to limit the size of an STL Map?

    - by Nathan Fellman
    I want to implement some sort of lookup table in C++ that will act as a cache. It is meant to emulate a piece of hardware I'm simulating. The keys are non-integer, so I'm guessing a hash is in order. I have no intention of reinventing the wheel, so I intend to use std::map for this (though suggestions for alternatives are welcome). The question is: is there any way to limit the size of the hash to emulate the fact that my hardware is of finite size? I'd expect the hash's insert method to return an error or throw an exception if the limit is reached. If there is no such way, I'll simply check its size before trying to insert, but that seems like an inelegant way to do it.

    Read the article

  • Regex Searching in Emacs

    - by Inaimathi
    I'm trying to write some Elisp code to format a bunch of legacy files. The idea is that if a file contains a section like "<meta name=\"keywords\" content=\"\\(.*?\\)\" />", I want to insert a section that contains the existing keywords. If that section is not found, I want to insert my own default keywords instead. I've got the following function:

        (defun get-keywords ()
          (re-search-forward "<meta name=\"keywords\" content=\"\\(.*?\\)\" />")
          (goto-char 0) ;The section I'm inserting will be at the beginning of the file
          (or (match-string 1) "Rubber duckies and cute ponies")) ;;or whatever the default keywords are

    When the function fails to find its target, it fails with Search failed: "[regex here]" and prevents the rest of the evaluation. Is there a way to have it return the default string and ignore the error?

    Read the article

  • What system of administrative e-mail addresses does your organization use?

    - by Draco Flangetastic
    I'm getting ready to request a new batch of administrative e-mail addresses to replace an outdated hierarchy within my organization. I have the opportunity to choose new aliases for 24/7 alert recipients, monitoring recipients, all team members, etc. What does your org use for these purposes? Groups in my org use things like: org-dept@, org-dept-all@, org-dept-alert@, org-dept-monitoring@, org-dept-status@. TIA!!!!!111

    Read the article

  • Inserting non-pod struct into a GHashTable

    - by RikSaunderson
    Hi there, I'm trying to build a GHashTable of instances of a struct containing ints, a time_t and a few char*'s. My question is: how do you insert an instance of a struct into a GHashTable? There are plenty of examples of how to insert a string or an int (using g_str_hash and g_int_hash respectively), but I'm guessing that I want to use g_direct_hash, and I can't seem to find any examples of that. Ideally, my code would look like this:

        GHashtable table;
        table = g_hash_table_new(g_direct_hash, g_direct_equal);
        struct mystruct;
        mystruct.a = 1;
        mystruct.b = "hello";
        mystruct.c = 5;
        mystruct.d = "test";
        g_hash_table_insert(table, mystruct.a, mystruct);

    Clearly, this is incorrect as it does not compile. Can anyone provide an example that does what I want? Thanks, Rik

    Read the article

  • Need help with SQL table structure transformation

    - by Arnis L.
    I need to perform an update and an insert while simultaneously changing the structure of the incoming data. Think of shops that have a defined work time for each day of the week. Hopefully this explains better what I'm trying to achieve:

        worktimeOrigin table
          columns: shop_id, day, val
          data:
            123 | "monday"    | "9:00 AM - 18:00"
            123 | "tuesday"   | "9:00 AM - 18:00"
            123 | "wednesday" | "9:00 AM - 18:00"

        shop table
          columns: id, worktimeDestination.id

        worktimeDestination table
          columns: id, monday, tuesday, wednesday

    My aim: I would like to insert the data from the worktimeOrigin table into worktimeDestination and point each shop at the appropriate worktimeDestination row.

        shop table data:
            123 | 1                                                           (updated)
        worktimeDestination table data:
            1 | "9:00 AM - 18:00" | "9:00 AM - 18:00" | "9:00 AM - 18:00"     (inserted)

    Any ideas how to do that?

    Read the article

  • Run QuickTime from command line to export video

    - by Daniel Huckstep
    I have a bunch of avi's I am converting to m4v, and I can do this in QuickTime by opening the video and then going to 'Save As', selecting a folder, selecting the type (iPhone, Movie, etc.), blah blah blah. But I have around 100 videos I want to do this with. Command line options? Or batch processing options in the GUI? Enlighten me, please. This is QuickTime X on Snow Leopard.

    Read the article

  • How to efficiently SELECT rows from database table based on selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" that holds the customer's ID, and there are about 10,000 distinct customer codes. I have a GUI that lets the user render a report from the transaction table; the user may select an arbitrary number of customers. I used the IN operator first, and it works for a few customers:

        SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...')

    I quickly ran into problems when I selected a few thousand customers; there is a limit on the IN operator. An alternative is to create a temporary table with a single CODE field, insert the selected customer codes into it with INSERT statements, and then use:

        SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE = B.CODE)

    This works nicely for large selections. However, there is a performance overhead for creating the temporary table, inserting the codes, and dropping the temporary table. Are you aware of a better solution to handle this situation?
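
    For what it's worth, a rough ADO.NET sketch of the temporary-table approach described above, assuming SQL Server; the temp table name is illustrative, and SqlBulkCopy is used so the selected codes go to the server as one batch instead of thousands of individual INSERTs.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static class ReportQueries
        {
            // Loads all transactions for an arbitrary set of customer codes by
            // joining against a session-scoped temp table, as described above.
            // Assumes `connection` is already open; the temp table is dropped
            // automatically when the connection closes.
            public static DataTable LoadTransactions(SqlConnection connection, IEnumerable<string> codes)
            {
                using (var create = new SqlCommand(
                    "CREATE TABLE #SelectedCodes (CODE VARCHAR(20) PRIMARY KEY)", connection))
                {
                    create.ExecuteNonQuery();
                }

                // One batched round trip instead of one INSERT per code.
                var codeTable = new DataTable();
                codeTable.Columns.Add("CODE", typeof(string));
                foreach (var code in codes)
                    codeTable.Rows.Add(code);

                using (var bulk = new SqlBulkCopy(connection))
                {
                    bulk.DestinationTableName = "#SelectedCodes";
                    bulk.WriteToServer(codeTable);
                }

                // Join the transaction table against the temp table.
                var result = new DataTable();
                using (var select = new SqlCommand(
                    "SELECT A.* FROM TRANS_TABLE A INNER JOIN #SelectedCodes B ON A.CODE = B.CODE",
                    connection))
                using (var adapter = new SqlDataAdapter(select))
                {
                    adapter.Fill(result);
                }
                return result;
            }
        }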

    Read the article

  • Porting join from Oracle to Postgres

    - by Grasper
    Is this:

        INSERT INTO MISSION_OBJECTIVE(
            MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE,
            MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM,
            MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC)
        SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL,
            COALESCE(NULL, ' '),
            TO_TIMESTAMP('1002260900', 'YYMMDDHH24MI'),
            TO_TIMESTAMP('1002260915', 'YYMMDDHH24MI'),
            'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA'
        FROM AIRSPACE ASP, apsmain.MISSION_CLASS MC
        WHERE ASP.ASP_AIRSPACE_NM(+) = 'TRANSIT ALPHA'
          AND MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
          AND 'TRANSIT ALPHA' IS NOT NULL

    exactly the same as:

        INSERT INTO MISSION_OBJECTIVE(
            MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE,
            MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM,
            MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC)
        SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL,
            COALESCE(NULL, ' '),
            TO_TIMESTAMP('1002260900', 'YYMMDDHH24MI'),
            TO_TIMESTAMP('1002260915', 'YYMMDDHH24MI'),
            'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA'
        FROM AIRSPACE ASP, apsmain.MISSION_CLASS MC
        WHERE ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA'
          AND MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
          AND 'TRANSIT ALPHA' IS NOT NULL

    I just deleted the (+). The part that is confusing me is that ASP.ASP_AIRSPACE_NM is being right joined to a constant.

    Read the article

  • inserting std::strings into a std::map

    - by PaulH
    I have a program that reads data from a file line by line. I would like to copy a substring of each line into a map as below:

        std::map< DWORD, std::string > my_map;
        DWORD index;          // populated with some data
        char buffer[ 1024 ];  // populated with some data
        char* element_begin;  // points to some location in buffer
        char* element_end;    // points to some location in buffer > element_begin

        my_map.insert( std::make_pair( index, std::string( element_begin, element_end ) ) );

    This std::map<>::insert() operation takes a long time (it doubles the file parsing time). Is there a way to make this a less expensive operation? Thanks, PaulH

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL. Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later). The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected) 1 timestamp, 1 measurement, and 8 vector values, which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now. [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
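
    For reference, a minimal sketch of the denormalized shape discussed above (the timestamp and all 8 vector values folded into the measurement row, so one INSERT replaces the 10 described); the property names and types are assumptions for illustration, not the project's actual mapping.

        using System;

        // Hypothetical flattened entity: one row per measurement, no child table,
        // so a single INSERT stores the timestamp and all 8 values at once.
        public class FlatMeasurement
        {
            public virtual Guid Id { get; set; }
            public virtual Guid DeviceId { get; set; }
            public virtual DateTime Timestamp { get; set; }

            public virtual double Value1 { get; set; }
            public virtual double Value2 { get; set; }
            public virtual double Value3 { get; set; }
            public virtual double Value4 { get; set; }
            public virtual double Value5 { get; set; }
            public virtual double Value6 { get; set; }
            public virtual double Value7 { get; set; }
            public virtual double Value8 { get; set; }
        }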

    Read the article

  • WCF Runtime Caching

    - by francois
    Hi, I'm using the following code to cache objects:

        HttpRuntime.Cache.Insert("Doc001", _document);
        HttpRuntime.Cache.Remove("Doc001");

    I would like to know where the cache is stored (on the client PC or on the IIS server). Is this a safe way to cache objects, and will adding and removing cache entries in this way influence any of the other clients? Say, for instance, I've got 2 clients connected and both are storing cache with HttpRuntime.Cache.Insert("Doc001", _document), and one client removes the cache entry; is it only removed at the client level?
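
    For context, a small sketch of how HttpRuntime.Cache is typically used server-side in an ASP.NET application, including one of the longer Insert overloads that adds a sliding expiration; the key, timings, and helper class names are illustrative.

        using System;
        using System.Web;
        using System.Web.Caching;

        static class DocumentCache
        {
            public static void Store(string key, object document)
            {
                // The cache lives in the ASP.NET worker process on the server,
                // so every request served by that process sees the same entries.
                HttpRuntime.Cache.Insert(
                    key,
                    document,
                    null,                           // no cache dependency
                    Cache.NoAbsoluteExpiration,     // no fixed expiry time
                    TimeSpan.FromMinutes(20));      // evict after 20 minutes without access
            }

            public static object Fetch(string key)
            {
                return HttpRuntime.Cache[key];      // null if missing or evicted
            }

            public static void Evict(string key)
            {
                HttpRuntime.Cache.Remove(key);      // removes the entry for all requests
            }
        }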

    Read the article

  • Is there a SortedList<T> class in .NET? (not SortedList<Key,Value>, which is actually a kind of SortedDictionary)

    - by Brann
    I need to sort some objects according to their contents (in fact, according to one of their properties, which is NOT the key and may be duplicated between different objects). .NET provides two classes (SortedDictionary and SortedList), and both are some kind of dictionary. The only difference (AFAIK) is that SortedDictionary uses a binary tree to maintain its state whereas SortedList does not and is accessible via an index. I could achieve what I want using a List and its Sort() method with a custom implementation of IComparer, but it wouldn't be time-efficient, as I would sort the whole List each time I insert a new object, whereas a good sorted list would just insert the item at the right position. What I need is a SortedList class with a RefreshPosition(int index) method to move only the changed (or inserted) object rather than re-sorting the whole list each time an object inside changes. Am I missing something obvious?
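
    A small sketch of one way to keep a List<T> ordered by a non-key property without re-sorting on every insert: BinarySearch with a custom comparer finds the insertion index, and a single Insert keeps the list sorted. The Item type, its Rank property, and the wrapper class are assumptions for illustration.

        using System.Collections.Generic;

        class Item
        {
            public string Name { get; set; }
            public int Rank { get; set; }   // the non-key property we order by; duplicates allowed
        }

        class RankComparer : IComparer<Item>
        {
            public int Compare(Item x, Item y) { return x.Rank.CompareTo(y.Rank); }
        }

        class SortedItemList
        {
            private readonly List<Item> items = new List<Item>();
            private readonly RankComparer comparer = new RankComparer();

            public void Add(Item item)
            {
                // BinarySearch returns the bitwise complement of the insertion
                // point when no equal element is found, so one Insert keeps the
                // list ordered without calling Sort() again.
                int index = items.BinarySearch(item, comparer);
                if (index < 0) index = ~index;
                items.Insert(index, item);
            }

            public Item this[int index] { get { return items[index]; } }
            public int Count { get { return items.Count; } }
        }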

    Read the article

  • Is using Natural Join or implicit column names not a good practice when writing SQL in a programming language?

    - by Jian Lin
    When we use a Natural Join, we are joining the tables on the columns that have the same names in both tables. But what if we write it in PHP and then the DBA adds some more fields to both tables; could the Natural Join then break? The same goes for INSERT: if we do

        insert into gifts values (NULL, "chocolate", "choco.jpg", now());

    then it will break the code, as well as contaminate the table, when the DBA adds some fields to the table (for example, as column 2 or 3). So it is always best to spell out the column names when the SQL statements are written inside a programming language and stored in a file in a big project.

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data

        create table Data
        (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )

        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value
        from Data
        where Id = @p0
        order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

        - more than one row, even though I asked for TOP 1?
        - no rows, even though one exists and has been committed?
        - worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

        Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether!

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
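
    Since snapshot isolation is mentioned above as the known fix, here is a rough ADO.NET sketch of running the SELECT under IsolationLevel.Snapshot; it assumes ALLOW_SNAPSHOT_ISOLATION has been enabled on the database, and the connection string, class name, and parameter value are illustrative.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class SnapshotReadSketch
        {
            static void Main()
            {
                using (var connection = new SqlConnection(
                    "Server=.;Database=MyDb;Integrated Security=true"))
                {
                    connection.Open();

                    // Requires: ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON
                    using (var transaction = connection.BeginTransaction(IsolationLevel.Snapshot))
                    using (var command = new SqlCommand(
                        "select top 1 Date, Value from Data where Id = @p0 order by Date desc",
                        connection, transaction))
                    {
                        command.Parameters.AddWithValue("@p0", 42L);
                        using (var reader = command.ExecuteReader())
                        {
                            if (reader.Read())
                                Console.WriteLine("{0} -> {1}", reader.GetInt64(0), reader.GetInt64(1));
                        }
                        transaction.Commit();
                    }
                }
            }
        }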

    Read the article

  • Using a check constraint in MySQL for controlling string length

    - by ptrn
    I've stumbled on a problem! I've set up my first check constraint using MySQL, but unfortunately it isn't working: when I insert a row that should fail the test, the row is inserted anyway. The structure:

        CREATE TABLE user (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            uname VARCHAR(10) NOT NULL,
            fname VARCHAR(50) NOT NULL,
            lname VARCHAR(50) NOT NULL,
            mail VARCHAR(50) NOT NULL,
            PRIMARY KEY (id),
            CHECK (LENGTH(fname) > 30)
        );

    The insert statement:

        INSERT INTO user VALUES (null, 'user', 'Fname', 'Lname', '[email protected]');

    The string in the fname column should be too short, but the row is inserted anyway. I'm pretty sure I'm missing something basic here.

    Read the article

  • Generate non-identity primary key

    - by MikeWyatt
    My workplace doesn't use identity columns or GUIDs for primary keys. Instead, we retrieve "next IDs" from a table as needed, and increment the value for each insert. Unfortunately for me, LINQ to SQL appears to be optimized around using identity columns. So I need to query and update the "NextId" table whenever I perform an insert. For simplicity, I do this immediately after creating the new object. Since all operations between creation of the data context and the call to SubmitChanges are part of one transaction, do I need to create a separate data context for retrieving next IDs? Each time I need an ID, I need to query and update a table inside a transaction to prevent multiple apps from grabbing the same value. Is a separate data context the only way, or is there something better I could try?
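
    A rough sketch of one way the "reserve an ID up front" step is sometimes done outside the main DataContext, using a single atomic UPDATE ... OUTPUT so two applications cannot grab the same value; the NextId table, its columns, and the helper class are hypothetical names for illustration.

        using System.Data.SqlClient;

        static class IdAllocator
        {
            // Atomically increments and returns the next ID for the given table name.
            // The single UPDATE statement is atomic on its own, so no explicit
            // transaction is needed, and it is independent of the LINQ to SQL
            // DataContext that later performs the actual insert.
            public static long NextId(string connectionString, string tableName)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (var command = new SqlCommand(
                        "UPDATE NextId SET NextValue = NextValue + 1 " +
                        "OUTPUT INSERTED.NextValue " +
                        "WHERE TableName = @table",
                        connection))
                    {
                        command.Parameters.AddWithValue("@table", tableName);
                        // Assumes NextValue is a BIGINT column.
                        return (long)command.ExecuteScalar();
                    }
                }
            }
        }

    The returned value can then be assigned to the new entity before it is added to the main DataContext and SubmitChanges is called.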

    Read the article

  • Hibernate constraint ConstraintViolationException. Is there an easy way to ignore duplicate entries?

    - by vincent
    Basically, I've got the schema below and I'm inserting records if they don't exist. However, when it comes to inserting a duplicate it throws an error, as I would expect. My question is whether there is an easy way to make Hibernate just ignore inserts that would in effect insert duplicates?

        CREATE TABLE IF NOT EXISTS `method` (
            `id` bigint(20) NOT NULL AUTO_INCREMENT,
            `name` varchar(10) DEFAULT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY `name` (`name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

        SEVERE: Duplicate entry 'GET' for key 'name'
        Exception in thread "pool-11-thread-4" org.hibernate.exception.ConstraintViolationException: could not insert:

    Read the article

  • Can you add identity to an existing column in SQL Server 2008?

    - by bmutch
    In all my searching I see that you essentially have to copy the existing table to a new table to change a column to an identity column pre-2008; does this apply to 2008 also? Thanks. The most concise solution I have found so far:

        CREATE TABLE Test
        (
            id int identity(1,1),
            somecolumn varchar(10)
        );
        INSERT INTO Test VALUES ('Hello');
        INSERT INTO Test VALUES ('World');

        -- copy the table. use same schema, but no identity
        CREATE TABLE Test2
        (
            id int NOT NULL,
            somecolumn varchar(10)
        );
        ALTER TABLE Test SWITCH TO Test2;

        -- drop the original (now empty) table
        DROP TABLE Test;

        -- rename new table to old table's name
        EXEC sp_rename 'Test2','Test';

        -- see same records
        SELECT * FROM Test;

    Read the article
