Search Results

Search found 16059 results on 643 pages for 'global temp tables'.

  • BDE, Delphi, ODBC, SQL Native Client & Deadlock

    - by EspenS
    Hi. We have some Delphi code that uses the BDE to access SQL Server 2008 through the SQL Server Native Client ODBC driver (2005 version). Our issue is that we're hitting deadlocks in a loop doing inserts to multiple tables. The whole loop runs within a [TDatabase].StartTransaction. Looking at SQL Server Profiler we clearly see that at one point during the loop the SPID (session ID) changes, and then we naturally end up with a deadlock (both SPIDs doing inserts to the same table). It seems like the BDE at some point opens a second connection to the DB... (Although I would love to skip the BDE, it's currently not possible.) Anyone with experience to share?

  • How do I use SimpleRepository without the migration approach?

    - by Bill Sempf
    I am evaluating SubSonic for use in Phase 2 of a large project. This is an ASP.NET project, with 700 tables in a SQL Server database. We are planning for our domain model to consist of POCO classes to assist with the offline access requirements we have. I believe that the SimpleRepository pattern would be among my best options. Since I already have a database, however, the migration assistance doesn't help me. Are there T4 templates for SimpleRepository that I just overlooked? How do I 'turn off' migration? If I missed something in the Wiki, point me there, otherwise get me started and I'll write up a Wiki entry for y'all when we get there.

  • Parsing plain Win32 PE File (Exe/DLL) in .NET

    - by Usman
    I need to parse a plain Win32 DLL/EXE, get all of its imports and exports, and show them on the console or in a GUI (say WinForms). Is it possible to parse a Win32 DLL/EXE in C#/.NET, read its export and import tables, and map the entries to managed types? It's an unmanaged PE (.NET doesn't let you convert unmanaged PE files to managed .NET assemblies; it only generates managed assemblies for COM). So how do I parse the export and import tables of a PE file and express the method signatures in managed form (e.g. a char* argument should be displayed as IntPtr)?

  • How to build an interactive search engine web interface using python

    - by asmaier
    I have built a static web interface for searching data from some tables in my PostgreSQL database. The query page consists of a simple text field for entering the search term; the results page presents the results as a simple HTML table. The server-side code for searching the PostgreSQL database and returning the results is written in Python using psycopg2. Now I would like to add some interactive "Ajax features" to my search engine. When entering the search term I would like to see a list of possible search terms, like Google does it. On the results page, I would like to be able to sort the table showing the results. What would be the easiest/recommended way to implement these features for my search engine web site? Do I need a full-fledged web framework like Django for that?

  • Best solution for a comment table for multiple content types

    - by KRTac
    I'm currently designing a comments table for a site I'm building. Users will be able to upload images, link videos and add audio files to their profile. Each of these content types must be commentable. Now I'm wondering what the best approach to this is. My current options are: 1. have one big comments table and a link table for every content type (comments_videos, ...) with comment_id and _id; 2. have comments separated by the type of content they're for, so each content type would have its own comments table holding the comments for that type.
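
    A rough, MySQL-flavoured sketch of option 1 for one content type (all names here are illustrative, not taken from the question):

        CREATE TABLE comments (
            comment_id INT AUTO_INCREMENT PRIMARY KEY,
            user_id    INT NOT NULL,
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL
        );

        -- one narrow link table per content type
        CREATE TABLE comments_videos (
            comment_id INT NOT NULL,
            video_id   INT NOT NULL,
            PRIMARY KEY (comment_id, video_id)
        );
        -- comments_images and comments_audio would follow the same pattern

    Option 2 instead repeats the comment columns in per-type tables (video_comments, image_comments, ...), which duplicates structure but keeps every query against a single table with no join.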

  • Grouping a query with php

    - by Tom Hoad
    Basic question! I have 2 tables:

        FRUIT
        id | fruit_name
        ---------------
         1 | Apple
         2 | Banana
         3 | Carrot

        VARIETIES
        id | fk_fruit_id | variety_name
        -------------------------------
         1 |           1 | Cox
         2 |           1 | Braeburn
         3 |           2 | Chester
         4 |           3 | Kotaka
         5 |           3 | Imperial
         6 |           3 | Oneal

    I'd like to output a list of varieties per fruit, e.g.

        APPLE - Cox, Braeburn
        BANANA - Chester
        CARROT - Kotaka, Imperial, Oneal

    My current code is:

        $query = "SELECT * FROM produce, varieties WHERE produce.id = varieties.fk_fruit_id";
        $result = mysql_query($query) or die('Error : ' . mysql_error());
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $produce_fruit_code = $row['fruit_code'];
            $variety_name = $row['variety_name'];
            echo $produce_fruit_code.' - '.$variety_name.'<br/>';
        }

    which outputs:

        Apple - Cox
        Apple - Braeburn
        Banana - Chester
        Carrot - Kotaka
        Carrot - Imperial
        Carrot - Oneal

    Not a million miles away, but still not there. Any help is much appreciated, thanks!
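
    In MySQL, one way to collapse the rows per fruit is GROUP_CONCAT; a sketch against the tables as described in the question (using the FRUIT/VARIETIES names shown above rather than the produce/fruit_code names that appear in the posted code):

        SELECT f.fruit_name,
               GROUP_CONCAT(v.variety_name ORDER BY v.id SEPARATOR ', ') AS varieties
        FROM fruit f
        JOIN varieties v ON v.fk_fruit_id = f.id
        GROUP BY f.id, f.fruit_name;

    The same grouping could also be done in the PHP loop by collecting varieties into an array keyed on the fruit name and imploding each entry afterwards.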

  • Performance logging tips

    - by Germstorm
    I am developing a large data-collecting ASP.NET/Windows service application pair that uses Microsoft SQL Server 2005 through LINQ to SQL. Performance is always the issue. Currently the application is divided into several larger processing parts, each logging the duration of its work. This is not detailed enough and does not really help us. It would be nice to have some database tables containing statistics that the application collects about its own behavior. What logging tips and data structures do you recommend for spotting the parts that cause performance problems?
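
    One possible shape for such a self-collected statistics table, as a sketch (the table and column names are only a suggestion, not something from the question):

        CREATE TABLE dbo.PerfLog (
            PerfLogId   BIGINT IDENTITY(1,1) PRIMARY KEY,
            ProcessName NVARCHAR(100) NOT NULL,  -- which processing part ran
            StepName    NVARCHAR(100) NOT NULL,  -- finer-grained step inside that part
            StartedAt   DATETIME NOT NULL,
            DurationMs  INT NOT NULL,
            ItemCount   INT NULL                 -- rows/messages handled, for per-item cost
        );

    Logging at the step level with an item count lets you compute milliseconds per item and see which steps degrade as volume grows, which coarse per-part durations cannot show.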

  • Converting to Byte Array after reading a BLOB from SQL in C#

    - by Soham
    Hi All, I need to read a BLOB and store it in a byte[] before going forward with deserializing. Consider:

        // Reading the database with DataAdapterInstance.Fill(DataSet);
        DataTable dt = DataSet.Tables[0];
        foreach (DataRow row in dt.Rows)
        {
            byte[] BinDate = Byte.Parse(row["Date"].ToString()); // convert successfully to byte[]
        }

    I need help with this C# statement, as I am not able to convert an object type into a byte[]. Note, the "Date" field in the table is a BLOB and not of type Date. Help appreciated, Soham

  • How to SET ARITHABORT ON for connections in Linq To SQL

    - by Laurence
    By default, the SQL connection option ARITHABORT is OFF for OLEDB connections, which I assume LINQ to SQL is using. However, I need it to be ON. The reason is that my DB contains some indexed views, and any insert/update/delete operations against tables that are part of an indexed view fail if the connection does not have ARITHABORT ON. Even selects against the indexed view itself fail if the WITH(NOEXPAND) hint is used (which you have to use in SQL Server Standard Edition to get the performance benefit of the indexed view). Is there somewhere in the data context where I can specify that I want this option ON? Or somewhere in code I can do it? I have managed a clumsy workaround, but I don't like it: I have to create a stored procedure for every select/insert/update/delete operation, and in this proc first run SET ARITHABORT ON, then exec another proc which contains the actual select/insert/update/delete. In other words, the first proc is just a wrapper for the second. It doesn't work to just put SET ARITHABORT ON above the select/insert/update/delete code.

  • MYSQL Event to update another database table

    - by Lee
    Hey All, I have just taken over a project for a client and the database schema is a total mess. I would like to rename a load of fields and make it a relational database, but doing this will be a painstaking process as they also have an API running off it. So the idea is to create a new database and start rewriting the code to use that instead. But I need a way to keep the tables in sync during this process. Would you agree that I should use MySQL EVENTs to keep updating the new tables on inserts/updates/deletes? Or can you suggest a better way? Hope you can advise! Thanks for any input I get.
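
    Triggers are another way to mirror changes as they happen rather than on a schedule; a sketch of the insert case (the database, table and column names below are made up purely for illustration):

        DELIMITER //
        CREATE TRIGGER trg_customer_sync_insert
        AFTER INSERT ON old_db.customer
        FOR EACH ROW
        BEGIN
            -- copy the new row into the renamed/normalised table
            INSERT INTO new_db.customers (id, full_name)
            VALUES (NEW.id, NEW.name);
        END //
        DELIMITER ;

    Matching AFTER UPDATE and AFTER DELETE triggers would complete the picture; an EVENT-based batch copy also works, but it only catches up on whatever schedule you give it.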

  • upload image during registration in asp.net

    - by Vikrant
    I am developing a student registration form in ASP.NET. There are two tables, viz. std_registration and gallery (having pic_id and pic_url). During registration, before the submit button is clicked, I want to retrieve pic_id from the gallery table and then pass it to the insert query for the registration table. The image upload is optional; if no image is uploaded I want a default image to be used instead. Please help me with this, thanks.
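
    On the SQL side, one way to capture the new pic_id and feed it into the registration insert is SCOPE_IDENTITY(); a sketch for SQL Server, assuming pic_id is an IDENTITY column and that @pic_url and the student values arrive as parameters (the std_registration columns are guesses):

        DECLARE @pic_id INT;

        INSERT INTO gallery (pic_url)
        VALUES (@pic_url);            -- or the default image's URL when nothing was uploaded

        SET @pic_id = SCOPE_IDENTITY();

        INSERT INTO std_registration (student_name, pic_id)
        VALUES (@student_name, @pic_id);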

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTs) but mostly queries for one or more keywords. I have read "Tagsystems: performance tests", which is MySQL based, and it seems 2NF is a good method for implementing this; however, I was wondering if anyone had experience doing this with SQL Server 2008 and very large datasets. I am likely to initially have 1 million key fields which could have up to 50 keywords each. Would a wide structure of keyfield, keyword1, keyword2, ..., keyword50 be the best solution, or would two tables in a 1-to-many relationship, (keyid, keyfield) 1 ---- M (keyid, keyword), be a better idea if my queries are mostly going to look for results that have one or more keywords?
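
    A sketch of the normalised (second) option for SQL Server, with an index to serve keyword lookups (all names are illustrative):

        CREATE TABLE dbo.KeyFields (
            KeyId    INT IDENTITY(1,1) PRIMARY KEY,
            KeyField NVARCHAR(200) NOT NULL UNIQUE
        );

        CREATE TABLE dbo.KeyFieldKeywords (
            KeyId   INT NOT NULL REFERENCES dbo.KeyFields (KeyId),
            Keyword NVARCHAR(100) NOT NULL,
            PRIMARY KEY (KeyId, Keyword)
        );

        -- keyword-first index so "find key fields having keyword X" is an index seek
        CREATE INDEX IX_KeyFieldKeywords_Keyword
            ON dbo.KeyFieldKeywords (Keyword, KeyId);

    At 1 million key fields with up to 50 keywords each that is at most around 50 million narrow rows, which is comfortable when the keyword column is indexed; the 50-column-wide alternative forces every search to test up to 50 columns per row.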

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I took some measurements with a Java profiler and found the code section that takes most of the time (it executes the SQL query below). What is strange is that the query becomes expensive only for some specific input data. Here's the example. We have two tables that we are querying: one represents the objects we want to fetch (T_PROFILEGROUP), the other represents the many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying the linked table. These are the queries that I executed with the DB profiler running (they are the same except for the ID):

        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        < 1169655247309537280 >
        < 1169655249792565248 >
        < 1464837997699399681 >
        3 rows found.

        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        < 1169655247309537280 >
        1 row found.

    This is what I have in the profiler:

        12:14:31.147 1 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272
        12:14:31.147 2 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695.
        12:14:31.147 3 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.147 4 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.148 5 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.148 6 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.228 7 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.228 8 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:35.243 9 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928
        12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697.
        12:14:35.243 11 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 12 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 13 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 14 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;

    It's clear that the first query took almost 100 ms, while the second executed instantly. It's not about query precompilation (the first one is precompiled too, as the same queries had run earlier). We have DB indices for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are: Why does querying the same set of tables yield such different performance for different parameters? Which indices are involved here? Is there any way to improve this simple query and/or the DB to make it faster?

    UPDATE: to give a feeling of the size:

        Command> select count(*) from T_PROFILEGROUP;
        < 183840 >
        1 row found.
        Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS;
        < 2279104 >
        1 row found.
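
    One thing worth checking, for the second and third questions, is whether the link table has a composite index covering both join columns; with only single-column indices the lookup on M_ID_OID and the join on M_ID_EID may be resolved separately. A sketch of such an index (assuming one does not already exist):

        CREATE INDEX IX_PCPG_OID_EID
            ON T_PROFILECONTEXT_PROFILEGROUPS (M_ID_OID, M_ID_EID);

    With a composite index the optimizer can pick up all M_ID_EID values for a given M_ID_OID from one index range scan and then probe T_PROFILEGROUP by its key, so the remaining cost difference between the two inputs mostly reflects how many link rows each OID has.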

  • Methods for ensuring security between users in multi-user applications

    - by Emilio
    I'm writing a multiuser application (.NET - C#) in which each user's data is separated from the others' and there is no data shared between users. It's critical to ensure that no user has access to another user's data. What are some approaches for implementing security at the database level and/or in the application architecture to accomplish this? For example (and this is totally made up - I'm not suggesting it's a good or bad approach), including a userID column in all data tables might be an approach. I'm developing the app in C# (ASP.NET) and SQL Server 2008. I'm looking for options that are either native in the tools I'm using or general patterns.

  • Serializing C# objects to DB

    - by Robert Koritnik
    I'm using a single DB table for various different entities. This means I can't have an arbitrary number of fields in it to cover every kind of entity. Instead I want to save just the most important fields (dates, reference IDs acting as a kind of foreign key to various other tables, the most important text fields, etc.) plus an additional text field where I'd store the more complete object data. The most obvious solution would be to store XML strings there. The second most obvious choice would be JSON, which is usually shorter and probably also faster to serialize/deserialize. But is it really? My objects also wouldn't need to be strictly serializable, because JsonSerializer is usually able to serialize anything, even anonymous objects, which may well be used here. What would be the best solution to this problem?

  • Running Stored Procedure with parameters resulting from query

    - by David in Dakota
    It's not hard to find developers who think cursors are gauche, but I am wondering how to solve the following problem without one. Let's say I have a proc called uspStudentDelete that takes a parameter @StudentID. uspStudentDelete applies a bunch of cascading soft-delete logic, marking rows in tables like "classes", "grades", and so on as inactive. uspStudentDelete is well vetted and has worked for some time. What would be the best way to run uspStudentDelete on the results of a query (e.g. select studentid from students where ... ) in T-SQL?
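
    A common cursor-free pattern is to spool the IDs into a table variable and work through it with a WHILE loop; it still processes one row at a time (the proc demands that), but avoids cursor syntax. A sketch, with the WHERE filter left as a placeholder:

        DECLARE @ids TABLE (StudentID INT PRIMARY KEY);

        INSERT INTO @ids (StudentID)
        SELECT StudentID FROM Students WHERE /* your filter */ 1 = 1;

        DECLARE @id INT;
        WHILE EXISTS (SELECT 1 FROM @ids)
        BEGIN
            SELECT TOP (1) @id = StudentID FROM @ids ORDER BY StudentID;
            EXEC uspStudentDelete @StudentID = @id;
            DELETE FROM @ids WHERE StudentID = @id;
        END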

  • Mysql JOIN problem

    - by FinalDestiny
    I have 2 MySQL tables: Question, with the columns id, question, nranswers (nranswers must be a number from 1 to 5), and Answers, with the columns userid, answer. The problem is that I want to get the number of replies for each answer option of one question (id 22, let's say). If nranswers is 3, the result should look like this (the right-hand number means how many times that reply number was chosen):

        1 - 2
        2 - 8
        3 - 7

    If nranswers is 5, the result should look like this:

        1 - 3
        2 - 8
        3 - 14
        4 - 19
        5 - 8

    Please help me out with the query; at the moment it's not counting the answers that weren't chosen, only the ones that were chosen at least once.
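
    To report options that were never chosen you need a row source for the option numbers themselves and a LEFT JOIN onto the answers. A sketch, assuming the Answers table also carries a question id column (the question lists only userid and answer, so that column name is a guess):

        SELECT n.option_no,
               COUNT(a.answer) AS times_chosen
        FROM (SELECT 1 AS option_no UNION ALL SELECT 2 UNION ALL SELECT 3
              UNION ALL SELECT 4 UNION ALL SELECT 5) AS n
        JOIN Question q
             ON q.id = 22
            AND n.option_no <= q.nranswers
        LEFT JOIN Answers a
             ON a.question_id = q.id          -- assumed column
            AND a.answer = n.option_no
        GROUP BY n.option_no
        ORDER BY n.option_no;

    COUNT(a.answer) counts only matched rows, so an option nobody picked comes back as 0 rather than disappearing from the result.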

  • Rails: translations for table's column.

    - by Andrew
    In a Rails application I have two models: Food and Drink. Both food and drink have a name, which has to be stored in two languages. How do I best implement translations for these tables? The first solution I came up with was to replace the name column with name_en and name_ru. Another solution is to encode the translations as a YAML hash like { :en => 'eng', :ru => 'rus' } and store the YAML as the name. What would you recommend, assuming the content is not static? Maybe there's a good article on this?

  • Query data using LINQ to SQL and Entity Framework with foreign key? (Help me please)

    - by The Wind
    Hello! I have a problem and need help querying data using LINQ to SQL and Entity Framework (I'm using Visual Studio 2010). My picture is here: http://img.tamtay.vn/files/photo2/2010/5/28/10/962/4bff3a3b_1093f58f_untitled-1.gif I have three tables: tblNewsDetails, tblNewsCategories and tblNewsInCategories (see screen 1 in my picture). Now I want to retrieve records from the tblNewsDetails table with the condition CategoryId = 1, giving the following results (see screen 2 in my picture). But NewsID and CategoryId in the tblNewsInCategories table are two foreign keys; I do not see them in the model and I do not know how to use them in the code. My code has errors (see screen 3 in my picture). Please help me. Thanks! (I am a new member, so I don't have the right to insert images.)
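
    In SQL terms, the result being asked for is a join through the link table; a sketch using the table and column names from the question and assuming tblNewsDetails has a NewsID key (the exact LINQ or EF expression depends on how the designer mapped the association):

        SELECT nd.*
        FROM tblNewsDetails nd
        JOIN tblNewsInCategories nic ON nic.NewsID = nd.NewsID
        WHERE nic.CategoryId = 1;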

  • Strategy for Offline/Online data synchronization

    - by Adi
    My requirement is that I have a server J2EE web application and a client J2EE web application. Sometimes the client can go offline. When the client comes back online it should be able to synchronize changes in both directions. I should also be able to control which rows/tables get synchronized based on some filters/rules. Are there any existing Java frameworks for doing this? If I need to implement it on my own, what different strategies can you suggest? One solution I have in mind is maintaining SQL logs and executing the same statements on the other side during synchronization. Do you see any problems with this strategy?

  • Proper way to set up a UISegmentedControl on a UINavigationController UINavigationBar all inside UITab

    - by Kaspa
    The title pretty much describes it all. The problem is handling the UISegmentedControl callbacks (button presses). If the content type of all of the nested views were the same (i.e. some UITableViewControllers), then I could just switch data sources and reload the tables. However, that is not the case; I have 3 very different views in there that allow further drilldown/interaction via their navigation controllers. So the way I have this set up at the moment is that there is a "container" class that I put all of the UINavigationControllers in. They all share one and the same UISegmentedControl, and I redirect the callbacks to the container view controller. This does not feel good at all. Additionally, there is a problem when the user taps the tab bar icon: the navigation controller pops to root, which is ... the empty container view. Here's a picture of what I want to achieve:

  • combining two select statements to return one result

    - by DalivDali
    I need to combine the results for two select queries from two view tables, from which I am performing calculations. Perhaps there is an easier way to perform a query using if...else - any pointers? Essentially I need to divide everything by ar.time_ratio under the condition in SQL query 1, and ignore that for query 2.

    Query 1:

        SELECT gs.traffic_date, gs.domain_group,
               gs.clicks/ar.time_ratio as 'Scaled_clicks',
               gs.visitors/ar.time_ratio as 'scaled_visitors',
               gs.revenue/ar.time_ratio as 'scaled_revenue',
               (gs.revenue/gs.clicks)/ar.time_ratio as 'scaled_average_cpc',
               (gs.clicks)/(gs.visitors)/ar.time_ratio as 'scaled_ctr',
               gs.average_rpm/ar.time_ratio as 'scaled_rpm',
               (((gs.revenue)/(gs.visitors))/ar.time_ratio)*1000 as "Ecpm"
        FROM group_stats gs, v_active_ratio ar
        WHERE ar.group_id = gs.domain_group

    Query 2:

        SELECT gs.traffic_date, gs.domain_group,
               gs.clicks, gs.visitors, gs.revenue,
               (gs.revenue/gs.clicks) as 'average_cpc',
               (gs.clicks)/(gs.visitors) as 'average_ctr',
               gs.average_rpm,
               ((gs.revenue)/(gs.visitors))*1000 as "Ecpm"
        FROM group_stats gs, v_active_ratio ar
        WHERE not ar.group_id = gs.domain_group
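
    If the intent is simply "scale when a matching ratio row exists, otherwise leave the numbers as they are", a single query with a LEFT JOIN and COALESCE can replace the pair; a sketch under that assumption, showing only a few of the columns:

        SELECT gs.traffic_date,
               gs.domain_group,
               gs.clicks   / COALESCE(ar.time_ratio, 1) AS clicks_adj,
               gs.visitors / COALESCE(ar.time_ratio, 1) AS visitors_adj,
               gs.revenue  / COALESCE(ar.time_ratio, 1) AS revenue_adj
        FROM group_stats gs
        LEFT JOIN v_active_ratio ar ON ar.group_id = gs.domain_group;

    Rows with no match in v_active_ratio get a NULL time_ratio, so COALESCE(..., 1) leaves them unscaled.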

  • How to include associative table information and still retain strong typing

    - by mwright
    I am using LINQ to SQL to create strongly typed objects in my project. Let's say I have an object that is represented by a database table, and this object has a "Current State" that is kept in an associative table. I would like to make a single DB call that pulls back the two tables joined, but I am unsure how to populate that information into some sort of object that preserves strong typing within my model, so that the view using the information can simply consume it from the objects. I looked into creating a view model for this but it doesn't seem to quite fit. Am I thinking about this in the wrong way? What information can I include to help clarify my problem? Other details that may or may not be important: it's an MVC project....

  • many-to-many query

    - by kofto4ka
    Hello, guys! I have a problem and I don't know which solution is better. I have 2 tables: posts(id, title) and posts_tags(post_id, tag_id). The task: select posts that have the tag ids 4, 10 and 11, for example. Not exclusively - a post could have other tags at the same time. So, how can I do this in an optimized way? Creating a temporary table in each query? Or maybe some kind of stored procedure? In the future the user could ask the script to select posts with any number of tags (it could be 1 tag only or 10 at the same time), and I must be sure that the method I choose is the best one for my problem. Sorry for my English, thx for attention.
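
    The usual set-based approach needs no temporary table: filter to the wanted tag ids, group by post, and require that every one of them matched. A sketch with the tables from the question (the 3 in the HAVING clause would become a parameter equal to how many tags were asked for):

        SELECT p.id, p.title
        FROM posts p
        JOIN posts_tags pt ON pt.post_id = p.id
        WHERE pt.tag_id IN (4, 10, 11)
        GROUP BY p.id, p.title
        HAVING COUNT(DISTINCT pt.tag_id) = 3;

    Extra tags on a post are simply ignored by the IN list, so posts carrying additional tags still qualify, which matches the requirement.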

  • Change Large Number of Record Keys using Map Table

    - by Coyote
    I have a set of records indexed by id numbers, and I need to convert these records' ids to new id numbers. I have a two-column table mapping the old numbers to the new numbers. For example, given these two tables, what would the update statement look like?

    Given:

        OLD_TO_NEW
        oldid | newid
        -------------
         1234 |  0987
         7698 |  5645
          ... |   ...

    and

          id | data
        -----------
        1234 | 'yo'
        7698 | 'hey'
         ... |  ...

    Need:

          id | data
        -----------
        0987 | 'yo'
        5645 | 'hey'
         ... |  ...

    This is Oracle, so I have access to PL/SQL; I'm just trying to avoid it.
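
    A plain-SQL option in Oracle is a correlated update restricted to the ids that actually appear in the map (the data table's name isn't given in the question, so records below is a stand-in):

        UPDATE records r
        SET r.id = (SELECT m.newid
                    FROM old_to_new m
                    WHERE m.oldid = r.id)
        WHERE EXISTS (SELECT 1
                      FROM old_to_new m
                      WHERE m.oldid = r.id);

    A MERGE statement would do the same job; either way no PL/SQL loop is needed.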
