Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

  • Where can I find my iPhone app's Core Data persistent store?

    - by Dr Dork
    I'm diving into iPhone development, so I apologize in advance if this is a ridiculous question, but in a new iPad app project using the Core Data framework, here's the generated code for creating the persistentStoreCoordinator:

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }
            NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                stringByAppendingPathComponent:@"ApplicationName.sqlite"]];
            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                    configuration:nil URL:storeUrl options:nil error:&error]) {
                /* Replace this implementation with code to handle the error appropriately.
                   abort() causes the application to generate a crash log and terminate. You should
                   not use this function in a shipping application, although it may be useful
                   during development. If it is not possible to recover from the error, display an
                   alert panel that instructs the user to quit the application by pressing the
                   Home button.
                   Typical reasons for an error here include:
                   * The persistent store is not accessible
                   * The schema for the persistent store is incompatible with the current managed
                     object model
                   Check the error message to determine what the actual problem was. */
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }
            return persistentStoreCoordinator;
        }

    My questions are: The first time I run the app, is the ApplicationName.sqlite database created automatically if it doesn't exist? If not, when is it created? When data is added to it programmatically? Once the DB does exist, where can I locate the file? I'd like to open it with a different program so I can manually manipulate the data. Thanks so much in advance for your help! I'm going to continue researching these questions right now.

    Read the article

  • Help Optimizing MySQL Table (~ 500,000 records) and PHP Code.

    - by Pyrite
    I have a MySQL table that collects player data from various game servers (Urban Terror). The bot that collects the data runs 24/7, and currently the table is up to about 475,000+ records. Because of this, querying this table from PHP has become quite slow. I wonder what I can do on the database side of things to make it as optimized as possible, then I can focus on the application to query the database. The table is as follows:

        CREATE TABLE IF NOT EXISTS `people` (
          `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(40) NOT NULL,
          `ip` int(4) unsigned NOT NULL,
          `guid` varchar(32) NOT NULL,
          `server` int(4) unsigned NOT NULL,
          `date` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `Person` (`name`,`ip`,`guid`),
          KEY `server` (`server`),
          KEY `date` (`date`),
          KEY `PlayerName` (`name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='People that Play on Servers' AUTO_INCREMENT=475843 ;

    I'm storing the IPv4 addresses (ip and server) as 4-byte integers, and using the MySQL functions INET_NTOA() and INET_ATON() to encode and decode; I heard that this way is faster than varchar(15). The guid is an md5sum, 32 char hex. Date is stored as a unix timestamp. I have a unique key on name, ip and guid, so as to avoid duplicates of the same player. Do I have my keys set up right? Is the way I'm storing data efficient?

    Here is the code to query this table. You search for a name, ip, or guid, and it grabs the results of the query and cross-references other records that match the name, ip, or guid from the results of the first query, and does it for each field. This is kind of hard to explain, but basically, if I search for one player by name, I'll see every other name he has used, every IP he has used and every GUID he has used.

        <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
        Search: <input type="text" name="query" id="query" /><input type="submit" name="btnSubmit" value="Submit" />
        </form>

        <?php if (!empty($_POST['query'])) { ?>
        <table cellspacing="1" id="1up_people" class="tablesorter" width="300">
          <thead>
            <tr>
              <th>ID</th>
              <th>Player Name</th>
              <th>Player IP</th>
              <th>Player GUID</th>
              <th>Server</th>
              <th>Date</th>
            </tr>
          </thead>
          <tbody>
        <?php
        function super_unique($array)
        {
            $result = array_map("unserialize", array_unique(array_map("serialize", $array)));
            foreach ($result as $key => $value) {
                if (is_array($value)) {
                    $result[$key] = super_unique($value);
                }
            }
            return $result;
        }

        if (!empty($_POST['query'])) {
            $query = trim($_POST['query']);
            $count = 0;
            $people = array();
            $link = mysql_connect('localhost', 'mysqluser', 'yea right!');
            if (!$link) {
                die('Could not connect: ' . mysql_error());
            }
            mysql_select_db("1up");
            $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name LIKE \"%$query%\" OR INET_NTOA(ip) LIKE \"%$query%\" OR guid LIKE \"%$query%\")";
            $result = mysql_query($sql, $link);
            if (!$result) {
                die(mysql_error());
            }
            // Take the initial results and parse each column into its own array
            while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                $name = htmlspecialchars($row[1]);
                $people[] = array('id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]);
            }
            // Now, for each name, ip and guid in the results, find additional records
            $people2 = array();
            foreach ($people AS $person) {
                $ip = $person['ip'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (ip = \"$ip\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people2[] = array('id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]);
                }
            }
            $people3 = array();
            foreach ($people AS $person) {
                $guid = $person['guid'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (guid = \"$guid\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people3[] = array('id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]);
                }
            }
            $people4 = array();
            foreach ($people AS $person) {
                $name = $person['name'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name = \"$name\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people4[] = array('id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]);
                }
            }
            // Combine people2, people3 and people4 back into people and de-duplicate
            $people = array_merge($people, $people2);
            $people = array_merge($people, $people3);
            $people = array_merge($people, $people4);
            $people = super_unique($people);
            foreach ($people AS $person) {
                $date = ($person['date']) ? date("M d, Y", $person['date']) : 'Before 8/1/10';
                echo "<tr>\n";
                echo "<td>".$person['id']."</td>";
                echo "<td>".$person['name']."</td>";
                echo "<td>".$person['ip']."</td>";
                echo "<td>".$person['guid']."</td>";
                echo "<td>".$person['server']."</td>";
                echo "<td>".$date."</td>";
                echo "</tr>\n";
                $count++;
            }
            // Find Total Records
            //$result = mysql_query("SELECT id FROM 1up_people", $link);
            //$total = mysql_num_rows($result);
            mysql_close($link);
        }
        ?>
          </tbody>
        </table>
        <p>
        <?php echo $count." Records Found for \"".$_POST['query']."\" out of $total"; ?>
        </p>
        <?php } $time_stop = microtime(true); print("Done (ran for ".round($time_stop-$time_start)." seconds)."); ?>

    Any help at all is appreciated! Thank you.
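    One way to cut the follow-up round trips, sketched under the assumption of MySQL 5.x and the question's own table and column names ('term' stands for the searched string): self-join the table against the rows matched by the search term, so every related name, ip and guid comes back in one result set. The OR in the join condition defeats index use, so a UNION of three indexed self-joins (one per column) is the variant to benchmark this against.

        SELECT DISTINCT p.id, p.name, INET_NTOA(p.ip) AS ip, p.guid,
               INET_NTOA(p.server) AS server, p.date
        FROM 1up_people AS p
        JOIN 1up_people AS seed
          ON p.name = seed.name OR p.ip = seed.ip OR p.guid = seed.guid
        WHERE seed.name LIKE '%term%'
           OR seed.guid LIKE '%term%'
           OR INET_NTOA(seed.ip) LIKE '%term%';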

    Read the article

  • Echoing users by group name in PHP

    - by BobSapp
    What I want to do is click the name of a group (every group that I create other than the poweruser and admin groups) and have that echo all of the users in that group from the database. I have figured out the code so far, but now my problem is how to print it all out when clicking the name of the group. My code so far is:

        <h3>Groups</h3>
        <?php
        include('db.php');
        if (isset($_GET["groupID"])) {
            $sql = "SELECT `group`.*, `user`.*
                    FROM `user`
                    INNER JOIN `group` ON group.groupID = user.groupID
                    WHERE group.groupID = " . mysql_real_escape_string($_GET["groupID"]);
        } else {
            $sql = "SELECT * FROM `group`
                    WHERE groupName <> 'admin' AND groupName <> 'poweruser'";
        }
        $result = mysql_query($sql, $connection);
        while ($line = mysql_fetch_array($result)) {
            echo "<a href='index.php?page=groups&group=" . $line['groupID'] . "'>" . $line['groupName'] . '</a><br />';
        }
        mysql_free_result($result);
        mysql_close($connection);
        ?>

    Read the article

  • Get the first and last posts in a thread

    - by Grampa
    I am trying to code a forum website and I want to display a list of threads. Each thread should be accompanied by info about the first post (the "head" of the thread) as well as the last. My current database structure is the following:

        threads table:
            id   - int, PK, not NULL, auto-increment
            name - varchar(255)

        posts table:
            id        - int, PK, not NULL, auto-increment
            thread_id - FK for threads

    The tables have other fields as well, but they are not relevant for the query. I am interested in querying threads and somehow JOINing with posts so that I obtain both the first and last post for each thread in a single query (with no subqueries). So far I am able to do it using multiple queries, and I have defined the first post as being:

        SELECT *
        FROM threads t
        LEFT JOIN posts p ON t.id = p.thread_id
        ORDER BY p.id
        LIMIT 0, 1

    The last post is pretty much the same except for ORDER BY p.id DESC. Now, I could select multiple threads with their first or last posts by doing:

        SELECT *
        FROM threads t
        LEFT JOIN posts p ON t.id = p.thread_id
        GROUP BY t.id
        ORDER BY p.id

    But of course I can't get both at once, since I would need to sort both ASC and DESC at the same time. What is the solution here? Is it even possible to use a single query? Is there any way I could change the structure of my tables to facilitate this? If this is not doable, then what tips could you give me to improve the query performance in this particular situation?
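    A single-query shape is possible if the "no subqueries" constraint can be relaxed to one derived table: collapse posts to the MIN/MAX id per thread, then join posts back twice. This is a sketch against the schema above, not the only way to do it.

        SELECT t.id, t.name,
               fp.id AS first_post_id,
               lp.id AS last_post_id
        FROM threads AS t
        JOIN (SELECT thread_id, MIN(id) AS first_id, MAX(id) AS last_id
              FROM posts
              GROUP BY thread_id) AS x ON x.thread_id = t.id
        JOIN posts AS fp ON fp.id = x.first_id   -- first post of the thread
        JOIN posts AS lp ON lp.id = x.last_id;   -- last post of the thread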

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System. Summary: We're spunking £12 billion on some health software with barely anything working.

    I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on.

    I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it that the software designers weren't made to understand the medical business? What are your experiences with projects gone over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    Read the article

  • SQL left join with multiple rows into one row

    - by beardedd
    Basically, I have two tables: Table A contains the actual items that I care to get out, and Table B is used for language translations. Anytime text is used within Table A, instead of storing actual varchar values, ids are stored that relate back to text stored in Table B. By adding a languageID column to Table B, this allows me to have multiple translations for the same row in the database. Example:

        Table A
            Title (int)
            Description (int)
            Other Data...

        Table B
            TextID (int) - This is the column whose value is stored in other tables
            LanguageID (int)
            Text (varchar)

    My question is more a call for suggestions on how to best handle this. Ideally I want a query that I can use to select from the table and get the text, as opposed to the ids of the text, out of the table. Currently, when I have two text items in the table, this is what I do:

        SELECT C.ID, C.Title, D.Text AS Description
        FROM (SELECT A.ID, A.Description, B.Text AS Title
              FROM TableA A, TranslationsTable B
              WHERE A.Title = B.TextID AND B.LanguageID = 1) C
        LEFT JOIN TranslationsTable D
               ON C.Description = D.TextID AND D.LanguageID = 1

    This query gives me the row from Table A I am looking for (using WHERE clauses in the inner select statement) with the actual text, based on the language ID used, instead of the text ids. This works fine when I am only using one or two text items that need to be translated, but with a third item or more it starts to get really messy: essentially another left join on top of the example. Any suggestions on a better query, or at least a good way to handle 3 or more text items in a single row?
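    A flat alternative to the nested select, assuming the same schema: one LEFT JOIN per translated column, each with its own alias. A third or fourth text item then adds one more join line rather than another level of nesting.

        SELECT a.ID,
               t_title.Text AS Title,
               t_desc.Text  AS Description
        FROM TableA AS a
        LEFT JOIN TranslationsTable AS t_title            -- resolves the Title id
               ON t_title.TextID = a.Title AND t_title.LanguageID = 1
        LEFT JOIN TranslationsTable AS t_desc             -- resolves the Description id
               ON t_desc.TextID = a.Description AND t_desc.LanguageID = 1;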

    Read the article

  • SELECT product from subclass: How many queries do I need?

    - by Stefano
    I am building a database similar to the one described here, where I have products of different types, each type with its own attributes. I report a short version for convenience:

        product_type
        ============
        product_type_id   INT
        product_type_name VARCHAR

        product
        =======
        product_id      INT
        product_name    VARCHAR
        product_type_id INT -> Foreign key to product_type.product_type_id
        ... (common attributes to all products)

        magazine
        ========
        magazine_id INT
        title       VARCHAR
        product_id  INT -> Foreign key to product.product_id
        ... (magazine-specific attributes)

        web_site
        ========
        web_site_id INT
        name        VARCHAR
        product_id  INT -> Foreign key to product.product_id
        ... (web-site specific attributes)

    This way I do not need to make a huge table with a column for each attribute of the different product types (most of which would then be NULL). How do I SELECT a product by product.product_id and see all its attributes? Do I have to make a query first to know what type of product I am dealing with and then, through some logic, make another query to JOIN the right tables? Or is there a way to join everything together? (If, when I retrieve the information about a product_id, there are a lot of NULLs, that would be fine at this point.) Thank you
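    A sketch of the join-everything-at-once option, using the tables above: LEFT JOIN the product row to every subtype table and accept NULLs in the columns that do not apply (which the question says is fine). The literal 42 is just a placeholder product_id.

        SELECT p.product_id, p.product_name, pt.product_type_name,
               m.title AS magazine_title,   -- NULL unless the product is a magazine
               w.name  AS web_site_name     -- NULL unless the product is a web site
        FROM product AS p
        JOIN product_type AS pt ON pt.product_type_id = p.product_type_id
        LEFT JOIN magazine AS m ON m.product_id = p.product_id
        LEFT JOIN web_site AS w ON w.product_id = p.product_id
        WHERE p.product_id = 42;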

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I'm working on optimizing some of my queries, and I have a query that states:

        select * from SC where c_id = "+c_id"

    The schema of SC looks like this:

        SC (
            c_id       int  not null,
            date_start date not null,
            date_stop  date not null,
            r_t_id     int  not null,
            nt         int,
            t_p        decimal,
            PRIMARY KEY (c_id, r_t_id, date_start, date_stop)
        );

    My immediate bet on how the index should be created is a covering index in this order:

        INDEX (c_id, date_start, date_stop, nt, r_t_id, t_p)

    The reasons I base this order on: the WHERE clause selects by c_id, thus making it the first sort order. Next, date_start and date_stop specify a sort of "range" to be defined by these parameters. Next, nt, because it will select the nt. Next, r_t_id, because it is an ID for a specific type in my r_t table. And last, t_p, because it is just information. I don't know if it is at all necessary to order it in a specific way when it is a SELECT * statement. I should say that SC is not the biggest table. I can't say exactly how many rows it contains, but an estimate could be between <10 and 1000. The next thing to add is that different queries insert data into SC, and I know that indexes on tables which receive insertions can be cost-ineffective, but can I somehow find a golden middle way to keep this performing well? Don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely, Mestika
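    Worth noting before adding anything: the primary key (c_id, r_t_id, date_start, date_stop) already leads on c_id, so DB2 can service the WHERE c_id = ? predicate from it as-is. A separate covering index in the proposed order only pays off if the access plan shows it eliminating table access, since SELECT * must read every column anyway; a hypothetical sketch of it in DB2 9.7 syntax:

        -- covers all six columns of SC, so the query can be answered index-only
        CREATE INDEX idx_sc_cover
            ON SC (c_id, date_start, date_stop, nt, r_t_id, t_p);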

    Read the article

  • Connection string problems on shared hosting with SQL Server 2005 Express

    - by dagogo
    Hi, I have a problem connecting to my db on shared hosting. My host provider says they deployed SQL Server 2005 Express on their database server, and I prepared my connection string to take advantage of SQL Express. The data source name I used originally was ./SQLExpress, but my host provider asked that I change it to localhost; although with the former it didn't connect, even with the change as indicated above the error still comes up on access to my default page. The error is as follows:

        Server Error in '/' Application.
        Invalid value for key 'attachdbfilename'.
        Description: An unhandled exception occurred during the execution of the current web
        request. Please review the stack trace for more information about the error and where
        it originated in the code.
        Exception Details: System.ArgumentException: Invalid value for key 'attachdbfilename'.
        Source Error:
        Line 120: Public Function GetID(ByVal sLgaName As String) As Integer
        Line 121:     Dim q As String = "Select PLID " & "From LGA " & "Where LGAName = " & "'" & sLgaName & "'"
        Line 122:     Dim cn As New SqlConnection(Me.ConnectionString)
        Line 123:     Dim cmd As New SqlCommand(q, cn)
        Line 124:

    I've read up a lot on the web and googled my fingers numb on this. I have a deadline to deliver this project, and having successfully built the app, it's frustrating for this to happen. Please help me.

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except that the colour scheme and site name are different, plus it's in PHP, so copyright issues are avoided here; the design arguably could be seen as parody even though the content is different). The database is called tvguide, and it has these tables:

        Programmes
            House M.D.
            Medium
            Police Stop!
            American Dad!

    The table names of the above programmes are:

        housemdonair
        mediumonair
        policestopair
        americandad1

    Episodes

    The table names for the above programmes' episode guides are:

        housemdepidata
        mediumepidata
        policestopepidata
        americandad1epidata

    All of them have the following columns:

        id (not an auto-increment, since I wish to dynamically generate a page from this)
        episodename
        seriesnumber
        episodenumber
        episodesynopsis (the above four after id do exactly as stated)

    I have a pagination script that works; it displays 20 records per page as I want it to. This is called pmcPagination.php, but I won't post it in full since it would take up too much space. However, I'm trying to get it so that variables are filled in like this (OK, so the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate it!):

        http://library.digiguide.com/lib/episode/741168
        http://library.digiguide.com/lib/episode/714829

    with the episode detail and data. My site works, but it's fairly basic, and it's not online yet until my bugs are fixed. Mod_rewrite is enabled, so my site reads as http://mytvguide.com/episode/123456 or http://mytvguide.com/programme/123456 or http://mytvguide.com/WorldInAction/123456/Documentary/. I've tried looking on Google, but am not sure how to get this TV guide script to work at its best; I think .tpl and PHP/MySQL is the way to go. Any advice anyone has on making this project into a fully workable, ready-to-use site would be much appreciated; I've spent months refining this project!

    P.S. Apologies for the length of this; I hope it describes my project well.

    Read the article

  • Get info from multiple files, match it and then display to end user, what is fastest?

    - by Patrick
    Hi, I need to build a website where we display data that is refreshed every 5 minutes in a text file with a | separator. I currently use Java to do this. What I do now: I grab the text file for every request through the website, process it, and then display the data to the end user. This works fine, since Java can go through like 5000 lines of data fast, and when I filter it it is still extremely fast. However, now management wants the following: they added 3 text files with the | separator, and now want me to also read those files and match the information on certain fields, and if there is a match, also display that information to the end user. I think that soon enough, although Java is fast, I will run into trouble when 10 people want that information and I have to run through 4 total files matching the information. What can I do to make this process super fast? My creative solutions so far:

    - Leave it this way, since Java is fast and end users can wait (probably less than 1 second).
    - Have a background process that dumps new data into a MySQL database every 5 minutes, since databases are extremely good at getting the same data from multiple tables (see the sketch below).

    Thank you!
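    A minimal sketch of the second idea, assuming MySQL and illustrative table and column names: a background job reloads the pipe-delimited file every 5 minutes with LOAD DATA, and the web tier queries the indexed table instead of re-parsing files per request.

        -- illustrative staging table for one of the pipe-delimited feeds
        CREATE TABLE feed_rows (
            id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            field_a VARCHAR(255) NOT NULL,
            field_b VARCHAR(255) NOT NULL,
            KEY (field_a)                       -- index the field used for matching
        );

        -- run every 5 minutes by the background process
        LOAD DATA LOCAL INFILE '/path/to/feed.txt'
        INTO TABLE feed_rows
        FIELDS TERMINATED BY '|'
        LINES TERMINATED BY '\n'
        (field_a, field_b);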

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed at which it can be queried?

    I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyways, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized.

    Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables: one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet, and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)
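    A sketch of the 1:1 split being considered, with illustrative column names: the login-heavy columns stay in a narrow members table, and the frequently rewritten game state moves to its own table sharing the same key. InnoDB is assumed so the foreign key is actually enforced.

        CREATE TABLE members (
            member_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            username  VARCHAR(30)  NOT NULL,
            email     VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE member_stats (
            member_id INT UNSIGNED NOT NULL PRIMARY KEY,  -- same key, 1:1 with members
            army_size INT NOT NULL DEFAULT 0,
            gold      INT NOT NULL DEFAULT 0,
            turns     INT NOT NULL DEFAULT 0,
            FOREIGN KEY (member_id) REFERENCES members (member_id)
        ) ENGINE=InnoDB;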

    Read the article

  • Updating a composite primary key

    - by VBCSharp
    I am struggling with the philosophical discussions about whether or not to use composite primary keys on my SQL Server database. I have always used surrogate keys in the past, and I am challenging myself by leaving my comfort zone to try something different. I have read many discussions but can't come to any kind of conclusion yet. The struggle I am having is when I have to update a record with the composite PK. For example, the record in question is like this: ContactID, RoleID, EffectiveDate, TerminationDT. The PK in this case is ContactID, RoleID, and EffectiveDate. TerminationDT can be null. If, in my UI, the user changes the RoleID, then I need to update the record. Using the surrogate key I can do an UPDATE Table SET RoleID = 1 WHERE surrogateID = Z. However, the composite-key way, once one of the fields in the composite key changes, I have no way to reference the old record to update it without maintaining somewhere in the UI a reference to the old values. I do not bind datasources in my UI. I open a connection, get the data and store it in a bucket, then close the connection. What are everyone's opinions? Thanks.
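    A sketch of the update being described, with a hypothetical table name: every column of the old composite key has to appear in the WHERE clause, which is exactly the old-value bookkeeping the question is worried about.

        -- the UI must carry the pre-edit key values alongside the edited ones
        UPDATE ContactRole
        SET    RoleID        = @NewRoleID
        WHERE  ContactID     = @OldContactID
          AND  RoleID        = @OldRoleID
          AND  EffectiveDate = @OldEffectiveDate;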

    Read the article

  • Pre-done SQLs to be converted to Rails-style modules

    - by Hoornet
    I am a Rails newbie and would really appreciate it if someone converted these SQLs to complete modules for Rails. I know it's a lot to ask, but I can't just use find_by_sql for all of them. Or can I? These are the SQLs (they run on MS-SQL):

    1)

        SELECT STANJA_NA_DAN_POSTAVKA.STA_ID, STP_DATE, STP_TIME, STA_OPIS, STA_SIFRA, STA_POND
        FROM STANJA_NA_DAN_POSTAVKA
        INNER JOIN STANJA_NA_DAN ON (STANJA_NA_DAN.STA_ID = STANJA_NA_DAN_POSTAVKA.STA_ID)
        WHERE ((OSE_ID = 10)
          AND (STANJA_NA_DAN_POSTAVKA.STP_DATE >= {d '2010-03-30'})
          AND (STANJA_NA_DAN_POSTAVKA.STP_DATE <= {d '2010-03-30'}))

    2)

        SELECT ZIGI_OBDELANI.OSE_ID, ZIGI_OBDELANI.DOG_ID AS DOG_ID, ZIGI_OBDELANI.ZIO_DATUM AS DATUM,
               ZIGI_PRICETEK.ZIG_TIME_D AS ZIG_PRICETEK, ZIGI_KONEC.ZIG_TIME_D AS ZIG_KONEC
        FROM (ZIGI_OBDELANI
        INNER JOIN ZIGI ZIGI_PRICETEK ON ZIGI_OBDELANI.ZIG_ID_PRICETEK = ZIGI_PRICETEK.ZIG_ID)
        INNER JOIN ZIGI ZIGI_KONEC ON ZIGI_OBDELANI.ZIG_ID_KONEC = ZIGI_KONEC.ZIG_ID
        WHERE (ZIGI_OBDELANI.OSE_ID = 10)
          AND (ZIGI_OBDELANI.ZIO_DATUM >= {d '2010-03-30'})
          AND (ZIGI_OBDELANI.ZIO_DATUM <= {d '2010-03-30'})
          AND (ZIGI_PRICETEK.ZIG_VELJAVEN < 0)
          AND (ZIGI_KONEC.ZIG_VELJAVEN < 0)
        ORDER BY ZIGI_OBDELANI.OSE_ID, ZIGI_PRICETEK.ZIG_TIME ASC

    3)

        SELECT STA_ID, SUM(STP_TIME) AS SUM_STP_TIME, COUNT(STA_ID)
        FROM STANJA_NA_DAN_POSTAVKA
        WHERE ((STP_DATE >= {d '2010-03-30'})
          AND (STP_DATE <= {d '2010-03-30'})
          AND (STA_ID = 3)
          AND (OSE_ID = 10))
        GROUP BY STA_ID

    4)

        SELECT DATUM, TDN_ID, TDN_OPIS, URN_OPIS, MOZNI_PROBLEMI, PRIHOD, ODHOD, OBVEZNOST, ZAKLJUCEVANJE_DATUM
        FROM OBRACUNAJ_DAN
        WHERE ((OSE_ID = 10)
          AND (DATUM >= {d '2010-02-28'})
          AND (DATUM <= {d '2010-03-30'}))
        ORDER BY DATUM

    These SQLs are daily working hours, and I got them as-is. I also got the database with them, which (as you can see from the SQLs) does not follow Rails conventions. As a P.S.:

    1) Things like STP_DATE >= {d '2010-03-30'} are of course dates (in Slovenian date notation) and will be replaced with a variable (date), so that the user could choose a date from and a date to.
    2) All of this data will be shown on the same page in the same table, so maybe all in one module? Or many? If this helps, maybe.

    So can someone help me? It's for my work and it's my first project, and I am a Rails newbie and the bosses are getting impatient (they are getting quite loud, actually). Thank you very very much!

    Read the article

  • Which SQL consumes less memory?

    - by prmatta
    Yesterday I asked a question on how to re-write SQL to do selects and inserts in batches. I needed to do this to try and consume less virtual memory, since I need to move millions of rows here. The object is to move rows from Table B into Table A. Here are the ways I can think of doing this:

    SQL #1)

        INSERT INTO A (x, y, z)
            SELECT x, y, z
            FROM B b
            WHERE ...

    SQL #2)

        FOREACH
            SELECT x, y, z FROM B b WHERE ...
            INSERT INTO A (x, y, z);
        END FOREACH;

    SQL #3)

        FOREACH
            SELECT FIRST 2000 x, y, z FROM B b WHERE ...
            INSERT INTO A (x, y, z);
        END FOREACH;

    SQL #4)

        FOREACH
            SELECT FIRST 2000 x, y, z FROM B b
            WHERE ... AND NOT EXISTS IN (SELECT * FROM A)
            INSERT INTO A (x, y, z);
        END FOREACH;

    Are any of the above incorrect? The database is Informix 11.5.

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth 100$", etc.] have to be fast to query if they are to be used in customizing web sites, consumer communications, etc. Well, if you have:

    - a large number of consumers
    - large profiles with a huge set of variables (as profiles describing human behaviour are likely to be...)

    ...you are in trouble. If you really have a physical relational database at which you target a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results. There is the possibility of having the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

    Read the article

  • MS SQL - High performance data inserting with stored procedures

    - by Marks
    Hi. I'm searching for a very high-performance way to insert data into an MS SQL database. The data is a (relatively big) construct of objects with relations. For security reasons I want to use stored procedures instead of direct table access. Let's say I have a structure like this:

        Document
            MetaData
                User
                Device
            Content
                ContentItem[0]
                    SubItem[0]
                    SubItem[1]
                    SubItem[2]
                ContentItem[1]
                    ...
                ContentItem[2]
                    ...

    Right now I think of creating one big query, doing something like this (just pseudo-code):

        EXEC @DeviceID = CreateDevice ...;
        EXEC @UserID = CreateUser ...;
        EXEC @DocID = CreateDocument @DeviceID, @UserID, ...;
        EXEC @ItemID = CreateItem @DocID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        ...

    But is this the best solution for performance? If not, what would be better? Split it into more queries? Give all the data to one big stored procedure to reduce the size of the query? Any other performance clues? I also thought of giving multiple items to one stored procedure, but I don't think it's possible to give a non-static number of items to a stored procedure. Since 'INSERT INTO A VALUES (B,C),(C,D),(E,F)' is more performant than 3 single inserts, I thought I could get some performance here. Thanks for any hints, Marks
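    One way around the "non-static number of items" limit, assuming SQL Server 2008 or later and illustrative names: a table-valued parameter lets a single procedure call carry all SubItems at once, so the insert happens as one set-based statement rather than N EXECs.

        -- the row shape the caller fills in client-side (e.g. via a DataTable in ADO.NET)
        CREATE TYPE dbo.SubItemList AS TABLE (
            ItemID  INT           NOT NULL,
            Payload NVARCHAR(400) NOT NULL
        );
        GO
        CREATE PROCEDURE dbo.CreateSubItems
            @SubItems dbo.SubItemList READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            -- one set-based insert instead of one EXEC per SubItem
            INSERT INTO dbo.SubItem (ItemID, Payload)
            SELECT ItemID, Payload
            FROM @SubItems;
        END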

    Read the article

  • Customer wants some data to appear after you later delete rows. System giant / not my creation. Fast

    - by John Sullivan
    This is a fairly common problem; it probably has a name, I just don't know what it is.

    A.) The user sees an obscure piece of information in Row B of L_OBSCURE_INFO displayed on some screen at a certain point. It is in table L_OBSCURE_INFO.
    B.) Under certain circumstances we want to correctly delete data in L_OBSCURE_INFO. Unfortunately, nobody accounted for the fact that the user might want to backtrack and see some random piece of information that was most recently in L_OBSCURE_INFO.
    C.) The system is enormous and L_OBSCURE_INFO is used all the time. You have no idea what the ramifications are of implementing some kind of hack, and whatever you do, you don't want to introduce more bugs.

    I think the best approach would be to create an L_OBSCURE_INFO_HISTORY table and record a row in there every time you change data (see the sketch below). But God help you ensuring it's accurate in this system where L_OBSCURE_INFO is being touched everywhere and you don't have time to implement L_OBSCURE_INFO_HISTORY. Is there a particularly easy, clever design solution for this kind of problem, basically an elegant database hack? If not, does this kind of design problem fall under a particular class of problems, or have a name?
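    A sketch of the history-table idea kept accurate without touching any caller, assuming SQL Server and illustrative columns: a trigger on L_OBSCURE_INFO copies the old row on every update or delete, so the "touched everywhere" problem stops mattering.

        CREATE TABLE L_OBSCURE_INFO_HISTORY (
            hist_id    INT IDENTITY(1,1) PRIMARY KEY,
            info_id    INT NOT NULL,
            info_value VARCHAR(200) NULL,
            changed_at DATETIME NOT NULL DEFAULT GETDATE()
        );
        GO
        CREATE TRIGGER trg_obscure_info_history
        ON L_OBSCURE_INFO
        AFTER UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- "deleted" holds the pre-change rows for both UPDATE and DELETE
            INSERT INTO L_OBSCURE_INFO_HISTORY (info_id, info_value)
            SELECT info_id, info_value
            FROM deleted;
        END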

    Read the article

  • Multiple Foreign keys to a single table and single key pointing to more than one table

    - by user1216775
    I need some suggestions from the database design experts here. I have around six foreign keys in a single table (defect) which all point to the primary key in the user table. It is like:

        defect (....., assigned_to, created_by, updated_by, closed_by, ...)

    If I want to get information about the defect, I can make six joins. Do we have any better way to do it (see the sketch below)?

    Another one: I have a states table which can store one of a user-defined set of values. I have a defect table and a task table, and I want both of these tables to share the common state table (New, In Progress, etc.). So I created:

        task       (....., state_id, type_id, .....)
        defect     (....., state_id, type_id, ...)
        state      (state_id, state_name, ...)
        importance (imp_id, imp_name, ...)

    There are many such common attributes along with state, like importance (normal, urgent, etc.), priority, and so on, and for all of them I want to use the same table. I am keeping one flag in each of the tables to differentiate task and defect. What is the best solution in such a case? If somebody is using this application in the health domain, they would like to assign different types, states, and importances to their defects or tasks. Moreover, when a user selects any project, I want to display all the types, states, etc. under the configuration parameters section.
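    The multiple joins themselves are normal for this shape; here is a sketch of the alias-per-role pattern, with a hypothetical users table: each join to the user table resolves one role column, and LEFT JOINs cover roles that may be NULL.

        SELECT d.defect_id,
               ua.user_name AS assigned_to,
               uc.user_name AS created_by,
               uu.user_name AS updated_by,
               ux.user_name AS closed_by
        FROM defect AS d
        LEFT JOIN users AS ua ON ua.user_id = d.assigned_to
        LEFT JOIN users AS uc ON uc.user_id = d.created_by
        LEFT JOIN users AS uu ON uu.user_id = d.updated_by
        LEFT JOIN users AS ux ON ux.user_id = d.closed_by
        WHERE d.defect_id = 42;   -- placeholder id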

    Read the article

  • Random alphanumeric generator

    - by AAA
    Hi, I want to give our users in the database a unique alphanumeric id. I am using the code below; will this always generate a unique id?

        // Generate Guid
        function NewGuid() {
            $s = strtoupper(md5(uniqid("something", true)));
            $guidText = substr($s, 0, 8) . '-' .
                        substr($s, 8, 4) . '-' .
                        substr($s, 12, 4) . '-' .
                        substr($s, 16, 4) . '-' .
                        substr($s, 20);
            return $guidText;
        }
        // End Generate Guid

        $Guid = NewGuid();
        echo $Guid;
        echo "<br><br><br>";

    Will this code guarantee uniqueness?

    Read the article

  • What are your best practices for ensuring the correctness of the reports from SQL?

    - by snezmqd4
    Part of my work involves creating reports and data from SQL Server to be used as information for decisions. The majority of the data is aggregated: inventory, sales and cost totals from departments, and other dimensions. When I am creating the reports, and more specifically when I am developing the SELECTs to extract the aggregated data from the OLTP database, I worry about mistaking a JOIN or a GROUP BY, for example, and returning incorrect results. I try to use some "best practices" to prevent myself from "generating" wrong numbers:

    - When creating an aggregated data set, always explode this data set without the aggregation and look for any obvious error.
    - Export the exploded data set to Excel and compare the SUM(), AVG(), etc. from SQL Server and Excel (see the query sketch below).
    - Involve the people who will use the information and ask for some validation (ask people to help identify mistakes in the numbers).
    - Never deploy those things in the afternoon; when possible, try to take a look at the T-SQL the next morning with a refreshed mind.

    I have had many bugs corrected using this simple procedure. Even with these procedures, I always worry about the numbers. What are your best practices for ensuring the correctness of the reports?
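    The explode-and-compare check can also stay inside SQL instead of a round trip through Excel; a sketch with illustrative table names: recompute the aggregate from the raw rows and list only the groups where the report disagrees.

        SELECT r.department_id,
               r.reported_total,
               recalc.recomputed_total,
               r.reported_total - recalc.recomputed_total AS delta
        FROM report_totals AS r
        JOIN (SELECT department_id, SUM(amount) AS recomputed_total
              FROM sales
              GROUP BY department_id) AS recalc
          ON recalc.department_id = r.department_id
        WHERE r.reported_total <> recalc.recomputed_total;  -- empty result = totals reconcile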

    Read the article

  • What to set the scalar type to for storing a byte[] (Entity in MVC2)

    - by Brad8118
    I'm trying out EF 4.0 and using the model-first approach. I'd like to store images in the database, and I'm not sure of the best type for the scalar in the entity. I currently have the image scalar type set up as Binary. From what I have been reading, the best way to store the image in the db is a byte[], so I'm assuming that Binary is the way to go. If there is a better way, I'd switch. In my controller I have:

        // file from client to store in the db
        HttpPostedFileBase file = Request.Files[inputTagName];
        if (file.ContentLength > 0)
        {
            keyToAdd.Image = new byte[file.ContentLength];
            file.InputStream.Write(keyToAdd.Image, 0, file.ContentLength);
        }

    This builds fine, but when I run it I get an exception writing the stream to keyToAdd.Image. The exception is something like "Method does not exist." Any ideas? Note that when using the EF 4.0 model-first approach I only have Int16, Int32, Double, String, Decimal, Binary, Byte, DateTime, Single, and SByte as available types. Thanks

    Read the article

  • When to use CTEs to encapsulate sub-results, and when to let the RDBMS worry about massive joins.

    - by IanC
    This is a SQL theory question. I can provide an example, but I don't think it's needed to make my point. Anyone experienced with SQL will immediately know what I'm talking about. Usually we use joins to minimize the number of records due to matching the left and right rows. However, under certain conditions, joining tables causes a multiplication of results, where the result is all permutations of the left and right records. I have a database which has 3 or 4 such joins. This turns what would be a few records into a multitude. My concern is that the tables will be large in production, so the number of these joined rows will be immense. Further, heavy math is performed on each row, and the idea of performing math on duplicate rows is enough to make anyone shudder.

    I have two questions. The first is: is this something I should care about, or will SQL Server intelligently realize these rows are all duplicates and optimize all processing accordingly? The second is: is there any advantage to grouping each part of the query so as to get only the distinct values going into the next part of the query, using something like:

        WITH t1 AS (
            SELECT DISTINCT ...  -- [or GROUP BY]
        ), t2 AS (
            SELECT DISTINCT ...
        ), t3 AS (
            SELECT DISTINCT ...
        )
        SELECT ...

    I have often seen the use of DISTINCT applied to subqueries. There is obviously a reason for doing this. However, I'm talking about something a little different and perhaps more subtle and tricky.
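    A concrete version of that skeleton, with illustrative names: each child table is collapsed to one row per key inside its own CTE, so the later joins (and the per-row math) run over distinct rows instead of all permutations.

        WITH a AS (
            SELECT key_id, SUM(qty) AS qty
            FROM child_a
            GROUP BY key_id          -- one row per key before the join
        ), b AS (
            SELECT key_id, SUM(amount) AS amount
            FROM child_b
            GROUP BY key_id
        )
        SELECT p.key_id,
               a.qty * b.amount AS computed   -- heavy math runs once per key
        FROM parent AS p
        JOIN a ON a.key_id = p.key_id
        JOIN b ON b.key_id = p.key_id;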

    Read the article

  • Social Networking & Network Affiliations

    - by Code Sherpa
    Hi. I am in the process of planning a database for a social networking project and stumbled upon this URL, which is a (crude) reverse-engineered guess at Facebook's schema: http://www.flickr.com/photos/ikhnaton2/533233247/

    What is of interest to me is the notion of "Affiliations", and I am trying to fully understand how they work, technically speaking. Where I am somewhat confused is the NetworkID column in the "FacebookGroups", "FacebookEvent", and "Affiliations" tables (NID in Affiliations). How are these network affiliations interconnected? In my own project, I have a simple profile table:

        CREATE TABLE [dbo].[Profiles](
            [profileid] [int] IDENTITY(1,1) NOT NULL,
            [userid] [uniqueidentifier] NOT NULL,
            [username] [varchar](255) COLLATE Latin1_General_CI_AI NOT NULL,
            [applicationname] [varchar](255) COLLATE Latin1_General_CI_AI NOT NULL,
            [isanonymous] [bit] NULL,
            [lastactivity] [datetime] NULL,
            [lastupdated] [datetime] NULL,
            CONSTRAINT [PK__Profiles__1DB06A4F] PRIMARY KEY CLUSTERED
            (
                [profileid] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY],
            CONSTRAINT [PKProfiles] UNIQUE NONCLUSTERED
            (
                [username] ASC,
                [applicationname] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
        ) ON [PRIMARY]

    One profile can have many affiliations, and one affiliation can have many profiles. I would like to design it in such a way that relationships between affiliations tell me something about the associated profiles. In fact, based on the affiliations that users select, I would like to know how to infer as many things as possible about that person. My question is: how should I design my network affiliation tables, and how do they operate per my above requirements? A rough SQL schema would be appreciated in your response. Thanks in advance...
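    One common reading of that diagram, sketched with illustrative names (not Facebook's actual schema): affiliations/networks live in their own table, and a junction table provides the many-to-many link to Profiles; inference queries about a person would start from the junction table.

        CREATE TABLE Affiliations (
            AffiliationID INT IDENTITY(1,1) PRIMARY KEY,
            NetworkID     INT NOT NULL,          -- groups affiliations into networks
            Name          VARCHAR(255) NOT NULL
        );

        -- many-to-many: one profile has many affiliations and vice versa
        CREATE TABLE ProfileAffiliations (
            ProfileID     INT NOT NULL REFERENCES Profiles (profileid),
            AffiliationID INT NOT NULL REFERENCES Affiliations (AffiliationID),
            PRIMARY KEY (ProfileID, AffiliationID)
        );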

    Read the article
