I have a varchar @a='a|b|c|d|e|f|g|h|i|j|k|l|m|n|o|p', which has |-delimited values. I want to split this variable into an array or a table.
Does anyone have any idea how to do this?
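For context, here is the kind of loop-based approach I have been experimenting with (a rough sketch only; the table variable is just illustrative, and I don't know if this is the best way):

DECLARE @a varchar(200);
DECLARE @pos int;
DECLARE @values TABLE (item varchar(50));

SET @a = 'a|b|c|d|e|f|g|h|i|j|k|l|m|n|o|p';

-- Peel one value off the front of the string per iteration
WHILE LEN(@a) > 0
BEGIN
    SET @pos = CHARINDEX('|', @a);
    IF @pos = 0
    BEGIN
        -- last (or only) value left
        INSERT INTO @values (item) VALUES (@a);
        SET @a = '';
    END
    ELSE
    BEGIN
        INSERT INTO @values (item) VALUES (LEFT(@a, @pos - 1));
        SET @a = SUBSTRING(@a, @pos + 1, LEN(@a));
    END
END

SELECT item FROM @values;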
Actually, free is good enough, it doesn't have to be open source :)
I'm currently using the Schema Compare utility of VS2008, but it doesn't have a command line interface and has some other weaknesses as well.
I'm wondering what free tools others are using to provide command line schema comparisons/synchronizations?
Thanks.
I'm importing a flat file of invoices into a database using C#. I'm using the TransactionScope to roll back the entire operation if a problem is encountered.
It is a tricky input file, in that one row does not necessarily equal one record. It also includes linked records. An invoice would have a header line, line items, and then a total line. Some of the invoices will need to be skipped, but I may not know it needs to be skipped until I reach the total line.
One strategy is to store the header, line items, and total line in memory, and save everything once the total line is reached. I'm pursuing that now.
However, I was wondering if it could be done a different way: create a "nested" transaction around the invoice, insert the header row and line items, then update the invoice when the total line is reached. This "nested" transaction would roll back if it is determined the invoice needs to be skipped, but the overall transaction would continue.
Is this possible, practical, and how would you set this up?
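At the T-SQL level, I think the "nested" part could be modelled with a savepoint rather than a true nested transaction; here is a rough sketch of what I mean (table names and the savepoint name are made up, and I realise TransactionScope may not expose savepoints directly, so this might have to go through SqlTransaction.Save instead):

BEGIN TRANSACTION;  -- the overall import

-- ... previously imported invoices stay part of this transaction ...

SAVE TRANSACTION InvoiceStart;  -- mark the start of one invoice

INSERT INTO InvoiceHeader (InvoiceNo) VALUES (1001);
INSERT INTO InvoiceLine (InvoiceNo, LineAmount) VALUES (1001, 50.00);

-- If the total line says this invoice should be skipped,
-- undo just this invoice, not the whole import
ROLLBACK TRANSACTION InvoiceStart;

-- The outer transaction is still open and the rest of the file can carry on
COMMIT TRANSACTION;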
I've found that Linq2Sql doesn't (Rhino) mock well, as the interfaces I need aren't there. Does EF generate code that's more mockable?
NOTE: I'm not mocking yet; without interfaces, the next reader of this question may not share my bias.
EDIT: VS2008 / 3.5 for now.
If you have a table with a clustered index on the primary key (int), is it redundant and bad to have one (or more) non-clustered indexes that include that primary key column as one of the columns in the non-clustered index?
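To make the question concrete, this is the sort of thing I mean (assuming SQL Server; table and index names are made up):

CREATE TABLE Orders (
    OrderID int NOT NULL PRIMARY KEY CLUSTERED,   -- clustered index on the int primary key
    CustomerID int NOT NULL,
    OrderDate datetime NOT NULL
);

-- Non-clustered index that explicitly lists the primary key column as well
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Order
    ON Orders (CustomerID, OrderID);

Is having OrderID in IX_Orders_Customer_Order redundant or harmful?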
My table looks like this, with duplicates in col1:
col1, col2, col3, col4
1, 1, 0, a
1, 2, 1, a
1, 3, 1, a
2, 4, 1, b
3, 5, 0, c
I want to select distinct col1 with max(col3), and the min(col2) among the rows that have that max(col3);
so result set will be:
col1, col2, col3, col4
1, 2, 1, a
2, 4, 1, b
3, 5, 0, c
I have a solution, but I'm looking for better ideas.
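For reference, one approach that produces the result set above (I'm not claiming it's the cleanest; 't' stands in for the table name):

SELECT t.col1, MIN(t.col2) AS col2, t.col3, MIN(t.col4) AS col4
FROM t
JOIN (
    -- first find the max col3 per col1
    SELECT col1, MAX(col3) AS max_col3
    FROM t
    GROUP BY col1
) m ON m.col1 = t.col1 AND m.max_col3 = t.col3
GROUP BY t.col1, t.col3;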
Hi
I am using Linq2Sql and want to bind an object's field (which is an enum) to either a bit or an int type in the database. For example, I want to have a gender field in my model. I have already edited the DBML and changed the Type to point to my enum. I want to create radio buttons (which I think I have figured out) for gender and dropdown lists for other areas using the same idea. My enum looks like this:
public enum Gender
{
Male,
Female
}
When I try this, I get the error: Mapping between DbType 'int' and Type 'Project.Models.Gender' in Column 'Gender' of Type 'Candidate' is not supported.
Any ideas on how to do this mapping? Am I missing something with the enums?
I'm using SSMS 2008 and I'm looking for where the registered servers are stored on my local machine. I have searched the registry with no luck.
AHIA,
Larry...
I was trying to run the following query
UPDATE blog_post SET `thumbnail_present`=0, `thumbnail_size`=0, `thumbnail_data`=''
WHERE `blog_post` NOT IN (
SELECT `blog_post`
FROM blog_post
ORDER BY `blog_post` DESC
LIMIT 10)
But MySQL doesn't allow LIMIT in an IN subquery.
I think I can make a select to count the table rows and then make an ordered update limited by 'COUNT - 10', but I was wondering if there is a better way.
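One workaround I am considering is to wrap the limited query in a derived table, which MySQL does seem to accept; roughly:

UPDATE blog_post SET `thumbnail_present`=0, `thumbnail_size`=0, `thumbnail_data`=''
WHERE `blog_post` NOT IN (
    -- wrapping the LIMITed SELECT in a derived table sidesteps the restriction
    SELECT `blog_post` FROM (
        SELECT `blog_post`
        FROM blog_post
        ORDER BY `blog_post` DESC
        LIMIT 10
    ) AS keep_latest
);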
Thanks in advance.
I have a vb.net project that uses a SQLite database. I do this by using dataset/table adapters. The client is happy and all works well. However I have just heard that they plan on providing this product to another customer that wishes to use their MSSQL database. So I am writing this post so I can mentally prepare for this before I begin. I am not a database pro and have really enjoyed the simplicity of setting up and managing an SQLite database.
So any ideas on the easiest way to support MSSQL as well? I am happy to run them parallel to each other. Can I just make a separate service / middleware that syncs the SQLite database to the MSSQL on a timer and does not care about what the main app is up to?
Any pointers are appreciated.
My specific concern is related to the performance of a clustered index on a reference table that has many rapid inserts and deletes.
Table 1 "Collection" collection_pk int (among other fields)
Table 2 "Item" item_pk int (among other fields)
Reference Table "Collection_Items" collection_pk int, item_pk int (combined primary key)
Because the primary key is composed of both pks, a clustered index is created and the data physically ordered in the table according to the combined keys.
I have many users creating and deleting collections and adding and removing items to those collections very frequently affecting the "Collection_Items" table, and its clustered index.
QUESTION PART: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index?
If yes, what should I do to minimize this?
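For reference, the reference table is essentially this (simplified, assuming SQL Server syntax):

CREATE TABLE Collection_Items (
    collection_pk int NOT NULL,
    item_pk int NOT NULL,
    PRIMARY KEY CLUSTERED (collection_pk, item_pk)   -- combined primary key, created as the clustered index
);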
I need a way to store an int for N columns. Basically what I have is this:
Armies:
ArmyID - UINT
UnitCount1 - UINT
UnitCount2 - UINT
UnitCount3 - UINT
UnitCount4 - UINT
...
I can't possibly add a column for each and every unit, so I need a fast way to store the number of each unit in an army (you might have guessed it's for a game by now). Using XML is not an option as it will be dead slow.
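One alternative I keep coming back to is a row-per-unit table instead of a column per unit type; a rough sketch (assuming MySQL-style syntax, and the names are just placeholders), in case that's the direction answers should take:

CREATE TABLE Armies (
    ArmyID INT UNSIGNED NOT NULL PRIMARY KEY
    -- other army fields...
);

CREATE TABLE ArmyUnits (
    ArmyID INT UNSIGNED NOT NULL,
    UnitTypeID INT UNSIGNED NOT NULL,
    UnitCount INT UNSIGNED NOT NULL,
    PRIMARY KEY (ArmyID, UnitTypeID)   -- one row per unit type per army
);

-- e.g. army 1 has 12 units of unit type 3
INSERT INTO ArmyUnits (ArmyID, UnitTypeID, UnitCount) VALUES (1, 3, 12);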
Hi there,
I have a Classic ASP shopping cart, but I need to add in an option to enter a promo code and (if valid) apply a discount to the total. There is a promo codes table that stores the promotional codes and the value of the discount to apply.
So I was wondering if someone might be able to help me integrate this?
Happy to pay for the time. I think it may only take an hour or so at most. :S
I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) )
Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction, and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set. I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()...
Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items, that all need to be committed together. Around 5 billion comparisons there.
So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?
For a current webapp I need an "Outlook-like" calendar. Here are some requirements for the calendar:
week-view for the appointments
different appointment types
direct display of the length and time of the appointment (like in Google Calendar)
multiple appointments for the same time
only using javascript, php and any DB
We need the calendar for the Zend Framework, so if the Calendar doesn't already support the ZF, the source needs to be editable!
Do you know of any calendar that fits my needs? Or do you have any tips for developing one myself?
I have a page where I have 4 tabs displaying 4 different reports based off different tables.
I obtain the row count of each table using a select count(*) from <table> query and display number of rows available in each table on the tabs. As a result, each page postback causes 5 count(*) queries to be executed (4 to get counts and 1 for pagination) and 1 query for getting the report content.
Now my question is: are COUNT(*) queries really expensive? Should I keep the row counts (at least those that are displayed on the tabs) in the view state of the page instead of querying multiple times?
How expensive are COUNT(*) queries?
Using MySql 5, I have a task where I need to update one table based on the contents of another table.
For example, I need to add 'A1' to table 'A' if table 'B' contains 'B1'. I need to add 'A2a' and 'A2b' to table 'A' if table 'B' contains 'B2', etc. In our case, the value in table 'B' we're interested in is an enum.
Right now I have a stored procedure containing a series of statements like:
INSERT INTO A
SELECT 'A1'
FROM B
WHERE B.Value = 'B1';
--Repeat for 'B2' -> 'A2a'; 'B2' -> 'A2b'; 'B3' -> 'A3', etc...
Is there a nicer more DRY way of accomplishing this?
Edit:
There may be values in table 'B' that have no equivalent value for table 'A'.
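One idea I have been toying with is to move the B-to-A mapping into its own table and drive a single INSERT ... SELECT off it; a rough sketch (BA_Map is a made-up name):

-- one row per mapping; 'B2' appears twice because it maps to two A values,
-- and B values with no A equivalent simply have no row here
CREATE TABLE BA_Map (
    BValue VARCHAR(10) NOT NULL,
    AValue VARCHAR(10) NOT NULL
);

INSERT INTO BA_Map (BValue, AValue)
VALUES ('B1', 'A1'), ('B2', 'A2a'), ('B2', 'A2b'), ('B3', 'A3');

-- replaces the series of near-identical INSERT statements
INSERT INTO A
SELECT m.AValue
FROM B
JOIN BA_Map m ON m.BValue = B.Value;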
I have a user table 'users' that has fields like:
id
first_name
last_name
...
and have another table that determines relationships:
user_id
friend_id
user_accepted
friend_accepted
....
I would like to generate a query that selects all the users but also adds another field/column, say 'network_status', that depends on the values of user_accepted and friend_accepted. For example, if user_accepted is true and friend_accepted is false, I want the 'network_status' field to say 'request sent'. Can I possibly do this in one query? (I would prefer not to use if/else inside the query, but if that's the only way, so be it.)
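What I have in mind is something like a CASE expression; a rough sketch (the table name 'relationships', the join condition, and the labels other than 'request sent' are just guesses on my part):

SELECT u.id,
       u.first_name,
       u.last_name,
       CASE
           WHEN r.user_accepted = 1 AND r.friend_accepted = 0 THEN 'request sent'
           WHEN r.user_accepted = 1 AND r.friend_accepted = 1 THEN 'friends'        -- label is a guess
           ELSE 'no relationship'                                                   -- label is a guess
       END AS network_status
FROM users u
LEFT JOIN relationships r
       ON r.friend_id = u.id AND r.user_id = @current_user;   -- join condition is an assumption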
result = sqlstring.executeQuery("select distinct table_name, owner from all_tables")
while result.next():
    i = i + 1  # assumes a counter i initialised earlier in the script
    rs.append(str(i) + ' , ' + result.getString("table_name") + ' , ' + result.getString("owner"))
If I want to display the output of the query select * from all_tables or select count(*) from all_tables,
how can I get the output to display? Please suggest, thanks.
Hi guys,
Quick question. I'm in a bit of a rush but if someone could quickly point me in the right direction I would be very very happy.
I have a field in the db, let's call it field_a which returns a string in the format "20,50,60,80" etc.
I wish to do a query which will search in this field to see if 20 exists.
Could I use MySQL MATCH or is there a better way?
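I have also come across MySQL's FIND_IN_SET(), which seems aimed at exactly this comma-separated layout (assuming the values are stored without spaces); something like:

SELECT *
FROM my_table                       -- table name is a placeholder
WHERE FIND_IN_SET('20', field_a) > 0;

-- a plain LIKE '%20%' would also match 120 or 200, which is why I'm unsure about that route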
Thank you!
Hi,
I want to insert some data into a table
(id PK autoincrement, val)
using a multi-row insert:
INSERT INTO tab (val) VALUES (1), (2), (3)
Is it possible to obtain a table of last inserted ids?
I'm asking because I'm not sure they will all be of the form (n, n+1, n+2).
I use MySQL InnoDB.
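To illustrate what I would like to rely on (this is my understanding only, and the "consecutive" part is exactly what I'm unsure about): LAST_INSERT_ID() should give the id generated for the first row of the multi-row insert, and ROW_COUNT() the number of rows, e.g.

INSERT INTO tab (val) VALUES (1), (2), (3);

-- first generated id of this batch and the number of rows just inserted
SELECT LAST_INSERT_ID() AS first_id, ROW_COUNT() AS rows_inserted;

-- if the ids really are consecutive, they would be
-- first_id, first_id + 1, ..., first_id + rows_inserted - 1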
I have the following database table, named tbl_rec:
recno  uid  uname  points
==========================
1      a    abc    10
2      b    bac    8
3      c    cvb    12
4      d    aty    13
5      f    cyu    9
...
I have about 5000 records in this table.
I want to select the 50 records with the highest points.
I can't use a LIMIT clause for this, as I am already using LIMIT for paging.
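Something along these lines is what I am picturing (sketch only): an inner query takes the 50 highest-points records, and the outer query still does the paging with its own LIMIT:

SELECT *
FROM (
    -- the 50 records with the highest points
    SELECT recno, uid, uname, points
    FROM tbl_rec
    ORDER BY points DESC
    LIMIT 50
) AS top50
ORDER BY points DESC
LIMIT 0, 10;   -- paging within the top 50 (page size 10 is just an example)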
Thanks