Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

Page 171 of 193

  • Large Switch statements: Bad OOP?

    - by Mystere Man
    I've always been of the opinion that large switch statements are a symptom of bad OOP design. In the past, I've read articles that discuss this topic, and they have provided alternative OOP-based approaches, typically using polymorphism to instantiate the right object to handle the case. I'm now in a situation that has a monstrous switch statement based on a stream of data from a TCP socket, in which the protocol consists of basically a newline-terminated command, followed by lines of data, followed by an end marker. The command can be one of 100 different commands, so I'd like to find a way to reduce this monster switch statement to something more manageable. I've done some googling to find the solutions I recall, but sadly, Google has become a wasteland of irrelevant results for many kinds of queries these days. Are there any patterns for this sort of problem? Any suggestions on possible implementations? One thought I had was to use a dictionary lookup, matching the command text to the object type to instantiate. This has the nice advantage of merely creating a new object and inserting a new command/type in the table for any new commands. However, this also has the problem of type explosion: I now need 100 new classes, plus I have to find a way to interface them cleanly to the data model. Is the "one true switch statement" really the way to go? I'd appreciate your thoughts, opinions, or comments.

    Read the article

  • How to translate the fields of a database model?

    - by Tõnis M
    I have some tables/models in a web app that will have multilingual content. For example, a university has its description in a default language (English), and if the user wants, they can see the same information in another language (if the object's fields have been translated). If there were only a few languages I would just add fields like name_en and name_de and so on, but the number of languages isn't fixed, so that would create a mess. I could also just create a new object with the translated data, but then foreign keys wouldn't work, and only some of the fields can be translated, so that would create duplicate data. Storing the translations in a file and using gettext or something similar is also not an option, since the object's fields can be translated by the website user, not only by developers/admins. What would be the best way to design/architect such a database? Searching the translated data should also not be too complex - it should not require complex joins that would make the queries slower. I'm using PostgreSQL and Ruby on Rails, but I'm not looking for a technical solution, just a general idea of how to design it.
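
    One common shape for this (a sketch only - the table and column names below are made up for illustration, not taken from the question) is a single translations table keyed by record, field, and locale, with the default-language text kept on the original row:

        -- Hypothetical PostgreSQL sketch
        CREATE TABLE universities (
            id          SERIAL PRIMARY KEY,
            name        VARCHAR(255) NOT NULL,   -- default-language value
            description TEXT
        );

        CREATE TABLE translations (
            id        SERIAL PRIMARY KEY,
            record_id INTEGER     NOT NULL REFERENCES universities(id),
            field     VARCHAR(64) NOT NULL,      -- e.g. 'name' or 'description'
            locale    VARCHAR(8)  NOT NULL,      -- e.g. 'de', 'et'
            value     TEXT        NOT NULL,
            UNIQUE (record_id, field, locale)
        );

        -- Fetch a university in German, falling back to the default column:
        SELECT u.id, COALESCE(t.value, u.name) AS name
        FROM universities u
        LEFT JOIN translations t
               ON t.record_id = u.id AND t.field = 'name' AND t.locale = 'de';

    A single LEFT JOIN per translated field keeps the query cost predictable, and users can add translations at runtime without schema changes.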

    Read the article

  • Help Me With This MS-Access Query

    - by yae
    I have 2 tables, "products" and "pieces":

    PRODUCTS: idProd, product, price
    PIECES: id, idProdMain, idProdChild, quant

    idProdMain and idProdChild are both related to the "products" table. Note that one product can have several pieces, and one product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products. EXAMPLE:

    TABLE PRODUCTS (idProd - product - price)
    1 - Computer - 300€
    2 - Hard Disk - 100€
    3 - Memory - 50€
    4 - Main Board - 100€
    5 - Software - 50€
    6 - CDroms 100 un. - 30€

    TABLE PIECES (id - idProdMain - idProdChild - quant)
    1 - 1 - 2 - 1
    2 - 1 - 3 - 2
    3 - 1 - 4 - 1

    WHAT I NEED: I need to update the price of the main product when the price of a child product (piece) changes. Following the example above, if I change the price of the product "Memory" (which is also a piece) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries? I have already tried the following to obtain the price of the main product, but it doesn't work - the query returns no rows:

    SELECT Sum(products.price*pieces.quant) AS Expr1
    FROM products LEFT JOIN pieces
         ON (products.idProd = pieces.idProdChild)
        AND (products.idProd = pieces.idProdChild)
        AND (products.idProd = pieces.idProdMain)
    WHERE (((pieces.idProdMain)=5));

    MORE INFO: The "products" table contains all the products for sale in the shop. The "pieces" table keeps track of the compound products - which products are children of which. An example of a compound product is a computer: it is composed of other products (motherboard, hard disk, memory, CPU, etc.).
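
    Two things stand out in the attempted query (a diagnosis, not a tested fix): the join requires a piece row to match on both idProdChild and idProdMain at once, which only a product that is a piece of itself could satisfy, and the filter uses idProdMain = 5 although product 5 has no pieces. A sketch that joins on the child id only and filters by the main product (here assuming the Computer, idProdMain = 1):

        SELECT Sum(products.price * pieces.quant) AS mainPrice
        FROM pieces INNER JOIN products
             ON products.idProd = pieces.idProdChild
        WHERE pieces.idProdMain = 1;

    Pushing the result back with a plain UPDATE ... SET price = (subquery) can run into Access's "operation must use an updateable query" restriction on aggregate subqueries, in which case DSum() or a small VBA routine over a recordset is the usual fallback.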

    Read the article

  • What's the best way to get a bunch of rows from MySQL if you have an array of integer primary keys?

    - by Evan P.
    I have a MySQL table with an auto-incremented integer primary key. I want to get a bunch of rows from the table based on an array of integers I have in memory in my program. The array ranges from a handful to about 1000 items. What's the most efficient query syntax to get the rows? I can think of a few:

    1. "SELECT * FROM thetable WHERE id IN (1, 2, 3, 4, 5)" (this is what I do now)
    2. "SELECT * FROM thetable WHERE id = 1 OR id = 2 OR id = 3"
    3. Multiple queries of the form "SELECT * FROM thetable WHERE id = 1". Probably the most friendly to the query cache, but expensive due to having lots of query parsing.
    4. A union, like "SELECT * FROM thetable WHERE id = 1 UNION SELECT * FROM thetable WHERE id = 2 ...". I'm not sure if MySQL caches the results of each query; it's also the most verbose format.

    I think using the NoSQL interface in MySQL 5.6+ would be the most efficient way to do this, but I'm not yet up to MySQL 5.6.
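
    One more option worth listing (a sketch, untested): at the larger end of the range, load the ids into a temporary table once and join against it, so the server plans a single indexed join instead of parsing a long literal list:

        CREATE TEMPORARY TABLE wanted_ids (id INT PRIMARY KEY);
        INSERT INTO wanted_ids (id) VALUES (1), (2), (3), (4), (5);

        SELECT t.*
        FROM thetable t
        INNER JOIN wanted_ids w ON w.id = t.id;

        DROP TEMPORARY TABLE wanted_ids;

    For a handful of ids the plain IN (...) list is hard to beat; the temporary-table route only starts to pay off as the list grows.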

    Read the article

  • All connections in pool are in use

    - by veljkoz
    We currently have a little situation on our hands - it seems that someone, somewhere forgot to close a connection in code. The result is that the pool of connections is relatively quickly exhausted. As a temporary patch we added Max Pool Size = 500; to our connection string on the web service, and we recycle the pool when all connections are spent, until we figure this out. So far we have done this: SELECT SPId FROM MASTER..SysProcesses WHERE DBId = DB_ID('MyDb') and last_batch < DATEADD(MINUTE, -15, GETDATE()) to get the SPIDs that haven't been used for 15 minutes. We're now trying to get the query that was executed last on such a SPID with: DBCC INPUTBUFFER(61), but the queries displayed vary, meaning either something at the base level of connection handling is broken, or our deduction is erroneous... Is there an error in our thinking here? Does the DBCC / sysprocesses approach give the results we're expecting, or is there some side-effect catch? (for example, do connections sitting in the pool influence it?) (please stick to what we can find out using SQL, since the guys that wrote the code are many and not all present right now)
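
    If the instance is SQL Server 2005 or later, a DMV-based sketch like the one below (untested; adjust the filters) lists the most recent statement per idle session in one pass, which can be less fiddly than running DBCC INPUTBUFFER for each SPID:

        SELECT s.session_id,
               s.program_name,
               s.last_request_end_time,
               t.text AS last_sql
        FROM sys.dm_exec_sessions s
        JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
        CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
        WHERE s.is_user_process = 1
          AND s.last_request_end_time < DATEADD(MINUTE, -15, GETDATE())
        ORDER BY s.last_request_end_time;

    Grouping the results by program_name and the leading text of last_sql often points at the code path that is leaking connections.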

    Read the article

  • SQL Server 2005 Check Constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code:

    --Create 2 Sales tables with constraints based on the saledate
    create table Sales1(SaleDate datetime, Amount money)
    ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010'))
    GO
    create table Sales2(SaleDate datetime, Amount money)
    ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010'))
    GO
    --Insert some data into Sales1
    insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50)
    insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60)
    GO
    --Insert some data into Sales2
    insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10)
    insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20)
    GO
    --Create a view that combines these 2 tables
    create VIEW [dbo].[Sales]
    AS
    SELECT SaleDate, Amount FROM Sales1
    UNION ALL
    SELECT SaleDate, Amount FROM Sales2
    GO
    --Get the results
    --Query 1
    select * from Sales where SaleDate < '31 Mar 2010'
    -- if you look at the execution plan this query only looks at Sales2 (which is good)
    --Query 2
    DECLARE @SaleDate datetime
    SET @SaleDate = '31 Mar 2010'
    select * from Sales where SaleDate < @SaleDate
    -- if you look at the execution plan this query looks at Sales1 and Sales2 (which is NOT good)

    Looking at the execution plans you will see that the two queries are different. For Query 1 the only table that is accessed is Sales2 (which is good). For Query 2 both tables are accessed (which is bad). Why are these execution plans different, and how do I get Query 2 to only access the relevant table when variables are used? I have tried adding indexes on the SaleDate column and that does not seem to help.
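
    A workaround that is often suggested for this pattern (a sketch - verify the plan against your own data): ask for the statement to be compiled with the actual variable value so the optimizer can use the CHECK constraints for elimination, e.g. with a RECOMPILE hint:

        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'

        select * from Sales where SaleDate < @SaleDate
        OPTION (RECOMPILE)

    The underlying difference between the two queries is that with a literal the optimizer knows the value at compile time and can prove Sales1 can never qualify, while with a local variable it has to build a plan that is safe for any value; recompiling at execution time gives it the value back.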

    Read the article

  • Many tables for many users?

    - by Seagull
    I am new to web programming, so excuse the ignorance... ;-) I have a web application that in many ways can be considered a multi-tenant environment. By this I mean that each user of the application gets their own 'custom' environment, with absolutely no interaction between those users. So far I have built the web application as a 'single user' environment. In other words, I haven't actually done anything to support multiple users, but have only worked on the functionality I want from the app. Here is my problem... What's the best way to build a multi-user environment?

    1. All users point to the same 'core' backend. In other words, I build the logic to separate users via appropriate SQL queries (eg. select * from table where user='123' and attribute='456').
    2. Each user points to a unique tablespace, which is built separately as they join the system. In this case I would simply generate ALL the relevant SQL tables per user, with some sort of suffix for the user (eg. now a query would look like "select * from table_USER where attribute='456'").

    In short, it's the difference between "select * from table where USER=" and "select * from table_USER".
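
    For what it's worth, a sketch of the shared-schema option (1) - the names here are made up for illustration - is usually just a tenant/user column on every table plus a composite index so per-user queries stay cheap:

        CREATE TABLE items (
            id        INT NOT NULL PRIMARY KEY,
            user_id   INT NOT NULL,
            attribute INT NOT NULL
            -- ...the rest of the columns...
        );

        CREATE INDEX idx_items_user_attribute ON items (user_id, attribute);

        SELECT * FROM items WHERE user_id = 123 AND attribute = 456;

    The table-per-user route avoids that extra column but multiplies the number of objects to migrate, back up, and query across, which is why the shared schema tends to be the default starting point.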

    Read the article

  • What is the return type of my linq query?

    - by Ulhas Tuscano
    I have two tables, A & B. I can fire LINQ queries and get the required data for the individual tables, since I know what each table will return, as shown in the example. But when I join both tables I am not aware of the return type of the LINQ query. This problem can be solved by creating a class that holds ID, Name and Address properties, but then every time I write a join query I would have to create a class for its return type, which is not a convenient way to work. Is there any other method available to achieve this?

    private IList<A> GetA()
    {
        var query = from a in objA select a;
        return query.ToList();
    }

    private IList<B> GetB()
    {
        var query = from b in objB select b;
        return query.ToList();
    }

    private IList<**returnType**?> GetJoinAAndB()
    {
        var query = from a in objA
                    join b in objB on a.ID equals b.AID
                    select new { a.ID, a.Name, b.Address };
        return query.ToList();
    }

    Read the article

  • C# casting question: from IEnumerable to custom type

    - by Sarah Vessels
    I have a custom class called Rows that implements IEnumerable<Row>. I often use LINQ queries on Rows instances: Rows rows = new Rows { row1, row2, row3 }; IEnumerable<Row> particularRows = rows.Where<Row>(row => condition); What I would like is to be able to do the following: Rows rows = new Rows { row1, row2, row3 }; Rows particularRows = (Rows)rows.Where<Row>(row => condition); However, I get a "System.InvalidCastException: Unable to cast object of type 'WhereEnumerableIterator1[NS.Row]' to type 'NS.Rows'". I do have a Rows constructor taking IEnumerable<Row>, so I could do: Rows rows = new Rows { row1, row2, row3 }; Rows particularRows = new Rows(rows.Where<Row>(row => condition)); This seems bulky, however, and I would love to be able to cast an IEnumerable<Row> to be a Rows since Rows implements IEnumerable<Row>. Any ideas?

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as

    CREATE TABLE files (
        id SERIAL PRIMARY KEY,
        name VARCHAR(255),
        filetype VARCHAR(255),
        ...
    );

    and another table for holding file properties, such as

    CREATE TABLE properties (
        id SERIAL PRIMARY KEY,
        file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
        size INTEGER,
        ... -- other property fields
    );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the query below. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

    SELECT f.filetype, avg(size), stddev(size)
    FROM files as f, properties as pr
    WHERE f.id = pr.file_id
    GROUP BY f.filetype;

    and the explain:

    HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
      ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
            Hash Cond: (f.id = pr.file_id)
            ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
            ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                  ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
    Total runtime: 74014.520 ms

    Any ideas why it is so slow / how to make it faster?
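
    One thing the plan suggests is worth ruling out first (a sketch, not a definitive diagnosis): almost all of the time is in the sequential scan of files (about 62 of the 74 seconds), and the planner expects ~1.14M rows there while only ~805k come back, which can indicate dead-row bloat or stale statistics. Vacuuming/analyzing and re-timing is a cheap experiment:

        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;

        EXPLAIN ANALYZE
        SELECT f.filetype, avg(pr.size), stddev(pr.size)
        FROM files f
        JOIN properties pr ON pr.file_id = f.id
        GROUP BY f.filetype;

    If the scan is still slow after that, the next suspects are usually memory settings (work_mem, shared_buffers) or plain disk throughput, since a full scan of the wide files table is unavoidable for this aggregate.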

    Read the article

  • Can I create a two-column layout that fluidly adapts to narrow windows?

    - by Brant Bobby
    I'm trying to design a page that has two columns of content, div#left and div#right. (I know these aren't proper semantic identifiers, but it makes explaining easier.) The widths of both columns are fixed. [image: desired result - wide viewport] When the viewport is too narrow to display both side-by-side, I want #right to be stacked on top of #left, like this: [image: desired result - narrow viewport] My first thought was simply to apply float: left to #left and float: right to #right, but that makes #right attach itself to the right side of the window (which is the proper behavior for float, after all), leaving an empty space. This also leaves a big gap between the columns when the browser window is really wide. [image: wrong - div#right is not flush with the left side of the viewport] [image: wrong - div#right is not on top of div#left] Applying float: left to both divs would result in the wrong one moving to the bottom when the window was too small. I could probably do this with media queries, but IE doesn't support those until version 9. The source order is unimportant, but I need something that works in IE7 minimum. Is this possible to do without resorting to JavaScript?

    Read the article

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi guys, I have an ASP.NET MVC web application that interacts with a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure in order to pull back some results based on selections in the UI. Now, the UI has around 20 different input selections, ranging from a textbox to dropdown lists, checkboxes, etc. Each of those inputs is "grouped" into a logical section. Example:

    Search box: "Foo"
    Checkbox A1: ticked, Checkbox A2: unticked
    Dropdown A: option 3 selected
    Checkbox B1: ticked, Checkbox B2: ticked, Checkbox B3: unticked

    So I need to call the SPROC like this:

    exec SearchPage_FindResults @SearchQuery = 'Foo', @IncludeA1 = 1, @IncludeA2 = 0, @DropDownSelection = 3, @IncludeB1 = 1, @IncludeB2 = 1, @IncludeB3 = 0

    The UI is not too important to this question - I just wanted to give some perspective. Essentially, I'm pulling back results for a search query and filtering those results based on a bunch of (optional) selections the user can filter on. Now, my questions: What's the best way to pass these parameters to the stored procedure? Are there any tricks/new ways (e.g. SQL Server 2008) to do this? Special "table" parameters/arrays - can we pass through user-defined types? Keep in mind I'm using Entity Framework 4.0, but I could always use classic ADO.NET for this if required. What about XML - what are the serialization/de-serialization costs there, and is it worth it? How about a parameter for each logical section, comma-separated perhaps? Just thinking out loud. This page is particularly important from a user point of view and needs to perform really well. The stored procedure is already heavy in logic, so I want to minimize the performance implications - keep that in mind. With that said, what is the best approach here?
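
    Since the database is SQL Server 2008, table-valued parameters are one of the "new ways" worth weighing here. A sketch (the type, procedure name, body and column names are made up for illustration, and the real search logic is omitted):

        CREATE TYPE dbo.FilterSelections AS TABLE
        (
            FilterName  VARCHAR(50)  NOT NULL,
            FilterValue VARCHAR(100) NOT NULL
        );
        GO

        CREATE PROCEDURE dbo.SearchPage_FindResults_Tvp
            @SearchQuery NVARCHAR(200),
            @Filters     dbo.FilterSelections READONLY
        AS
        BEGIN
            -- Placeholder body: the real procedure would join @Filters into its search.
            SELECT FilterName, FilterValue FROM @Filters;
        END
        GO

    On the .NET side a TVP is passed as a DataTable on a SqlParameter with SqlDbType.Structured, which classic ADO.NET handles directly; it collapses the 20-odd optional flags into one structured argument instead of a long signature.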

    Read the article

  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an Ajax tic-tac-toe game in PHP/MySQL. The premise of the game is that you can share a URL like mygame.com/123 with your friends and play multiple simultaneous games. The way I have it set up is that a file (reload.php) is called every 3 seconds while the user is viewing their game board space. This reload.php builds their game boards, and the output (HTML) replaces their current game board (thus showing games in which it is their turn). Initially I built it entirely with PHP/MySQL and had zero caching. A friend suggested doing all of the temporary/quick-read information through memcache (storing moves and ID matchups) and then building the game boards from that information. My issue is that both solutions hit a wall when there are roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes (dedicated CPU: 1.2GHz, RAM: 752MB). Each call to reload.php performs 3 select and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit. Now that the backstory is done, my question is: would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their game boards, while all users are sitting on index.php, from which the AJAX calls are made within the HTML? If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (e.g. reload1.php, reload2.php, etc.) and direct users to the appropriate file? Would this relieve the pressure? A long-winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

    Read the article

  • SQL aggregate query question

    - by Phil
    Hi, can anyone help me with a SQL query in Apache Derby to get a "simple" count? Given a table ABC that looks like this...

    id a b c
    1  1 1 1
    2  1 1 2
    3  2 1 3
    4  2 1 1
    5  2 1 2  **
    6  2 2 1  **
    7  3 1 2
    8  3 1 3
    9  3 1 1

    ...how can I write a query to count how many distinct values of 'a' have both (b=1 and c=2) AND (b=2 and c=1), to get the correct result of 1? (The two rows marked match the criteria and both have a value of a=2; there is only 1 distinct value of a in this table that matches the criteria.) The tricky bit is that (b=1 and c=2) AND (b=2 and c=1) are obviously mutually exclusive when applied to a single row... so how do I apply that expression across multiple rows of distinct values for a? These queries are wrong, but they illustrate what I'm trying to do:

    "SELECT DISTINCT COUNT(a) WHERE b=1 AND c=2 AND b=2 AND c=1 ..." .. (0) no go as mutually exclusive
    "SELECT DISTINCT COUNT(a) WHERE b=1 AND c=2 OR b=2 AND c=1 ..." .. (3) gets me the wrong result.
    SELECT COUNT(a) (CASE WHEN b=1 AND c=10 THEN 1 END) FROM ABC WHERE b=2 AND c=1 .. (0) no go as mutually exclusive

    Cheers, Phil.
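
    One way to sidestep the mutual exclusion (a sketch - I believe Derby supports INTERSECT, but please verify): take the set of 'a' values that have a (b=1, c=2) row, intersect it with the set that have a (b=2, c=1) row, and count what is left:

        SELECT COUNT(*) AS matching_a
        FROM (
            SELECT DISTINCT a FROM ABC WHERE b = 1 AND c = 2
            INTERSECT
            SELECT DISTINCT a FROM ABC WHERE b = 2 AND c = 1
        ) AS t

    On the sample data the inner intersection is just {2}, so the count is 1.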

    Read the article

  • Query with UDF works in Access but gives Undefined function in expression (Err 3085) in Excel

    - by ronwest
    I have an Access table with a date/time field. I wanted to make a composite key field out of the date/time field and 3 other text fields, in the same format as the matching key field in another database. So I concatenated the 3 text fields and wrote a user-defined function in a module to output the date field as a string in the format "YYYYMMDD":

    Public Function YYYYMMDD(dteDate As Date) As String
        YYYYMMDD = Format(dteDate, "YYYYMMDD")
    End Function

    I can then successfully run my queries in Access and it all works fine. But when I set up some DAO code in Excel and try to run the query that works fine within Access...

    db.Execute "qryMake_tblValsDailyAccount"

    ...Excel gives me the "Undefined function in expression. (Error 3085)" error. To me this is a bug in Excel and/or Access, because the (Excel) client shouldn't need to know anything about the internal calculations that normally take place perfectly in the (Access) server when in isolation. Excel should send the querydef (name with no parameters) to the server, let the server do its work, then receive the answers. Why does it need to get involved with a function internal to the server? Does anyone know a way around this?

    Read the article

  • help with delete where not in query

    - by kralco626
    I have a lookup table (##lookup). I know it's bad design because I'm duplicating data, but it speeds up my queries tremendously. I have a query that populates this table:

    insert into ##lookup select distinct col1,col2,... from table1...join...etc...

    I would like to simulate this behavior:

    delete from ##lookup
    insert into ##lookup select distinct col1,col2,... from table1...join...etc...

    This would clearly update the table correctly, but it is a lot of inserting and deleting. It messes with my indexes and locks up the table for selecting from. The table could instead be updated by something like:

    delete from ##lookup where not in (select distinct col1,col2,... from table1...join...etc...)
    insert into ##lookup (select distinct col1,col2,... from table1...join...etc...) except if it is already in the table

    The second way may take longer, but I can say "with no lock" and I will still be able to select from the table. Any ideas on how to write the query the second way?
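
    A sketch of the incremental version (SQL Server 2005+ syntax; here the elided select-with-joins is assumed to be wrapped in a view called lookup_source, and col1/col2 stand in for the real key columns - all names are placeholders):

        -- Remove rows that no longer appear in the source:
        DELETE L
        FROM ##lookup AS L
        WHERE NOT EXISTS (
            SELECT 1 FROM lookup_source s
            WHERE s.col1 = L.col1 AND s.col2 = L.col2
        );

        -- Add only rows that are not already present:
        INSERT INTO ##lookup (col1, col2)
        SELECT col1, col2 FROM lookup_source
        EXCEPT
        SELECT col1, col2 FROM ##lookup;

    Because each statement touches only the changed rows, the churn on the indexes is much smaller than a full delete-and-reload.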

    Read the article

  • MySQL problem reconnecting (mysqld.exe) keeps giving error...

    - by Ronedog
    Need some guidance figuring out what went wrong. I've been using MySQL and phpMyAdmin for just under a year on my home computer while I develop a web app. 3 days ago I updated my Windows Vista with all the "wonderful" Microsoft updates, security patches, etc... and now it's broken. I tried uninstalling all the upgrades, but there are 4 of them I can't uninstall because Microsoft says they're "operating system" updates that can't be uninstalled. My system is: Windows Vista, PHP 5+, MySQL 5.1, Apache 2+. I can run my web app and it queries the database without any problems. However, when I run phpMyAdmin to get into the database I get an error: "mysqld.exe has stopped working" and phpMyAdmin crashes. I tried going to the command line for MySQL to do a mysqldump to back up my database and it gives me the error "2013, could not connect to the server". If I restart the computer the web app will work again. Basically, PHP can query the database, but if I try to get at the database through phpMyAdmin or the command prompt, the mysqld.exe error occurs and takes MySQL down. Any ideas what's going on here? Any ideas how to get around this to back up the db, so I can reinstall MySQL? I'm really lost about where to start. I don't really know if the updates caused the problem, or if the 4 updates that can't be uninstalled are really the problem. Any tips will be appreciated. Thanks.

    Read the article

  • Best way to keep a .net client app updated with status of another application

    - by rwmnau
    I have a Windows service that's running all the time, and takes some action every 15 minutes. I also have a client WinForms app that displays some information about what the service is doing. I'd like the forms application to keep itself updated with a recent status, but I'm not sure if polling every second is a good move performance-wise. When it starts, my Windows Service opens a WCF named pipe to receive queries (from my client form) Every second, a timer on the winform sends a query to the pipe, and then displays the results. If the pipe isn't there, the form displays that the service isn't running. Is that the best way to do this? If my service opens the pipe when it starts, will it always stay open (until I close it or my service stops)? In addition to polling the service, maybe there's some way for the service to notify any watching applications of certain events, like starting and stopping processing? That way, I could poll less, since I'd presumably know about big events already, and would only be polling for progress. Anything that I'm missing?

    Read the article

  • modified closure warning in ReSharper

    - by Sarah Vessels
    I was hoping someone could explain to me what bad thing could happen in this code, which causes ReSharper to give an 'Access to modified closure' warning: bool result = true; foreach (string key in keys.TakeWhile(key => result)) { result = result && ContainsKey(key); } return result; Even if the code above seems safe, what bad things could happen in other 'modified closure' instances? I often see this warning as a result of using LINQ queries, and I tend to ignore it because I don't know what could go wrong. ReSharper tries to fix the problem by making a second variable that seems pointless to me, e.g. it changes the foreach line above to: bool result1 = result; foreach (string key in keys.TakeWhile(key => result1)) Update: on a side note, apparently that whole chunk of code can be converted to the following statement, which causes no modified closure warnings: return keys.Aggregate( true, (current, key) => current && ContainsKey(key) );

    Read the article

  • Sorting CouchDB Views By Value

    - by Lee Theobald
    Hi all, I'm testing out CouchDB to see how it could handle logging some search results. What I'd like to do is produce a view where I can produce the top queries from the results. At the moment I have something like this: Example document portion { "query": "+dangerous +dogs", "hits": "123" } Map function (Not exactly what I need/want but it's good enough for testing) function(doc) { if (doc.query) { var split = doc.query.split(" "); for (var i in split) { emit(split[i], 1); } } } Reduce Function function (key, values, rereduce) { return sum(values); } Now this will get me results in a format where a query term is the key and the count for that term on the right, which is great. But I'd like it ordered by the value, not the key. From the sounds of it, this is not yet possible with CouchDB. So does anyone have any ideas of how I can get a view where I have an ordered version of the query terms & their related counts? I'm very new to CouchDB and I just can't think of how I'd write the functions needed.

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName = "Stopwatch"), but for some reason, SQL Server CE hangs up pretty bad when I try to do stuff like this. One of my queries, which isn't really that complicated takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join then manually with my own query, but this is throwing out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2Mb. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update Here are the table definitions for the example I used in my question: create table Order ( Id int identity(1, 1) primary key, ProductName ntext null ) create table Customer ( Id int identity(1, 1) primary key, OrderId int null references Order (Id) )

    Read the article

  • MySQL Datefields: duplicate or calculate?

    - by Konerak
    We are using a table with a structure imposed upon us more than 10 years ago. We are allowed to add columns, but urged not to change existing columns. Certain columns are meant to represent dates, but are stored in different formats, amongst others:

    * CHAR(6): YYMMDD
    * CHAR(6): DDMMYY
    * CHAR(8): YYYYMMDD
    * CHAR(8): DDMMYYYY
    * DATE
    * DATETIME

    Since we would now like to do some more complex queries using advanced date functions, my manager proposed to duplicate those problem columns into a properly typed FORMATTED_OLDCOLUMNNAME column using a DATE or DATETIME format. Is this the way to go? Couldn't we just use the STR_TO_DATE function each time we access the columns? To avoid every query having to copy-paste the function, I could still work with a view or a stored procedure, but duplicating data to avoid recalculation sounds wrong. Solutions I see (I guess I prefer 2.2.1):

    1. Physically duplicate columns
       1.1 In the same table
           1.1.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
           1.1.2 Maintained by a trigger on each modification
       1.2 In a separate table
           1.2.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
           1.2.2 Maintained by a trigger on each modification
    2. On-demand transformation
       2.1 Each query has to perform the transformation
           2.1.1 Using copy-paste in the source code
           2.1.2 Using a library
           2.1.3 Using a STORED PROCEDURE
       2.2 A view performs the transformation
           2.2.1 A separate view replacing the entire table
           2.2.2 A separate view just adding the date fields to the primary keys

    Am I right to say it's better to recalculate than to store? And would a view be a good solution?
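
    For the on-demand option (2.2), a view can centralize the conversion so nobody repeats STR_TO_DATE by hand; a sketch with made-up table and column names (the format strings follow the list above):

        CREATE VIEW orders_v AS
        SELECT o.*,
               STR_TO_DATE(o.created_yymmdd,   '%y%m%d') AS created_date,
               STR_TO_DATE(o.shipped_ddmmyyyy, '%d%m%Y') AS shipped_date
        FROM orders o;

        -- Callers can then use ordinary date arithmetic:
        SELECT * FROM orders_v WHERE created_date >= '2010-01-01';

    Whether to also materialize the converted values (option 1) usually comes down to whether the conversion shows up in WHERE clauses that need an index, since MySQL cannot index the result of STR_TO_DATE inside a plain view.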

    Read the article

  • Using multiple Qt (SQL) Models

    - by radix07
    I have a near real-time application that I am using Qt and an SQLite database to run. I am curious whether it is safe to have two separate models access a database at once. I know 2 separate views can access a model just fine, but I can't find any documentation addressing this. I also realize that SQLite is thread-safe for reading, so I don't see a real issue from the SQLite side of things... Basically I want to use a QSqlTableModel to do the real-time read and write in the background and at the same time use a QSqlQueryModel to give the user the desired data. Since I may be doing lots of filtering in the background using the table model, I can't use it as the main view as well. I have gotten this to work for the most part, but am not sure if this is the best way to do it. If the models act like multiple SQL queries I don't believe this should be an issue, but I would like to hear from someone a bit more knowledgeable about this stuff, since it's pretty new to me. Thanks

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem, which is apparently not easy to solve with SQL, at least for me. Assume the following table structure (should work in SQLite, PostgreSQL, MySQL):

    CREATE TABLE a (
        a_id INT PRIMARY KEY
    );
    INSERT INTO a (a_id) VALUES (1), (2), (3), (4);

    CREATE TABLE b (
        b_id INT PRIMARY KEY,
        a_id INT,
        name VARCHAR(255) NOT NULL
    );
    INSERT INTO b (b_id, a_id, name) VALUES
        (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'),
        (4, 2, 'foo'), (5, 2, 'bar'),
        (6, 3, 'foo');

    Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something like:

    a_id = 3 is missing name "bar"
    a_id = 4 is missing names "foo" and "bar"

    Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be:

    SELECT a.a_id,
           CONCAT('|', GROUP_CONCAT(name ORDER BY name ASC SEPARATOR '|'), '|') as names
    FROM a
    LEFT JOIN b USING( a_id )
    GROUP BY a.a_id
    HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%';

    I have yet to measure the performance tomorrow, but I severely doubt it will be any fast for tens of thousands of entries in a and three times as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
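
    For comparison, a portable sketch (untested) that avoids GROUP_CONCAT entirely, so it should also run on SQLite and PostgreSQL: cross join every a row against the list of required names and keep the combinations that have no matching b row:

        SELECT a.a_id, req.name AS missing_name
        FROM a
        CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
        LEFT JOIN b ON b.a_id = a.a_id AND b.name = req.name
        WHERE b.b_id IS NULL;

    On the sample data this yields (3, 'bar'), (4, 'foo') and (4, 'bar'), and as the tables grow it can be served by an index on b(a_id, name).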

    Read the article

  • how do I create a custom route in rails where I pass the id of an existing Model?

    - by Angela
    I created the following route:

    map.todo "todo/today", :controller => "todo", :action => "show_date"

    Originally, the 'show_date' action and associated view would display all the activities for that day for all the campaigns. This ended up being very slow on the database... it would generate roughly 30 records but was still slow. So I'm thinking of creating a partial that would first list the campaigns separately. If someone clicked on a link associated with campaign_id = 1, I want it to go to the following route: todo/today/campaign/1. Then I need a way, in the controller, to know that the '1' is the campaign_id, and then just do its thing. The reason I want a distinct URL is so that I can cache this list; I have to keep going back to it and it's slow. NOTE: It's possible the problem is actually that I've written the queries in a slow way and SQLite isn't representative of how it will be in production, in which case this workaround is unnecessary, but right now I need a way to get back to the whole list quickly.

    Read the article
