Search Results

Search found 4640 results on 186 pages for 'unique'.


  • Multi-client C# ODBC (Sybase/Oracle/MSSQL) table access question.

    - by Hamish Grubijan
    I am working on a feature that would allow clients to pick a unique identifier (ci_name). The code below is a generic version that gets expanded to the right SQL depending on the vendor. Hopefully it makes sense. #include "sql.h" create table client_identification ( ci_id TYPE_ID IDENTITY, ci_name varchar(64) not null, constraint ci_pk primary key (ci_name) ); go CREATE_SEQUENCE(ci_id) There will be simple stored procedures for adding, retrieving, and deleting these user records. This will be used by several admins. This will not happen very frequently, but there is still a possibility that something will be added or deleted since the list was initially retrieved. I have not yet decided if I need to detect the case of a double delete, but the user name cannot be created twice - the primary key constraint will object. I want to be able to detect this particular case and display something like: "you snooze - you lose." :) I would like to leverage the PK constraint instead of doing some extra SQL gymnastics. So, how can I detect this case cleanly, so that it works in MS SQL 2008, Sybase, and Oracle? I hope to do better than catch a general ODBC exception and parse out the text, looking for what Sybase, Oracle, and MSSQL would give me back. Oracle is a little different: we actually prepend these variables to the Oracle version of stored procedures because they are not available otherwise: Vret_val out number, Vtran_count in out number, Vmessage_count in out number. Thanks. General helpful tips and comments are welcome, except for naming convention ones (I do not have a choice here, plus I mangled the actual names a bit).
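    A sketch of the portable route: ODBC reports a primary/unique key collision as SQLSTATE 23000 ("integrity constraint violation") on MSSQL, Sybase, and Oracle alike, so the driver-level state can be checked instead of parsing vendor message text. Python with pyodbc is used here purely for illustration (in C#, the analogue is catching OdbcException and inspecting the SQLState of its first error), and the stored procedure call would sit where the bare INSERT is:

        import pyodbc

        def add_client(conn, ci_name):
            # SQLSTATE 23000 is the ODBC "integrity constraint violation"
            # class; all three vendors map duplicate-key errors onto it.
            try:
                cur = conn.cursor()
                cur.execute(
                    "INSERT INTO client_identification (ci_name) VALUES (?)",
                    (ci_name,))
                conn.commit()
                return True
            except pyodbc.IntegrityError as exc:
                if exc.args[0] == "23000":   # pyodbc puts the SQLSTATE first
                    print("you snooze - you lose: that name was just taken")
                    return False
                raise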


  • Help Optimizing MySQL Table (~ 500,000 records) and PHP Code.

    - by Pyrite
    I have a MySQL table that collects player data from various game servers (Urban Terror). The bot that collects the data runs 24/7, and currently the table is up to about 475,000+ records. Because of this, querying this table from PHP has become quite slow. I wonder what I can do on the database side of things to make it as optimized as possible, then I can focus on the application to query the database. The table is as follows: CREATE TABLE IF NOT EXISTS `people` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(40) NOT NULL, `ip` int(4) unsigned NOT NULL, `guid` varchar(32) NOT NULL, `server` int(4) unsigned NOT NULL, `date` int(11) NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `Person` (`name`,`ip`,`guid`), KEY `server` (`server`), KEY `date` (`date`), KEY `PlayerName` (`name`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='People that Play on Servers' AUTO_INCREMENT=475843 ; I'm storing the IPv4 addresses (ip and server) as 4-byte integers, using the MySQL functions INET_ATON() and INET_NTOA() to encode and decode; I heard this way is faster than varchar(15). The guid is an md5sum, 32 hex chars. The date is stored as a Unix timestamp. I have a unique key on name, ip and guid, so as to avoid duplicates of the same player. Do I have my keys set up right? Is the way I'm storing data efficient? Here is the code to query this table. You search for a name, ip, or guid, and it grabs the results of the query and cross-references other records that match the name, ip, or guid from the results of the first query, and does it for each field. This is kind of hard to explain. But basically, if I search for one player by name, I'll see every other name he has used, every IP he has used and every GUID he has used. <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> Search: <input type="text" name="query" id="query" /><input type="submit" name="btnSubmit" value="Submit" /> </form> <?php if (!empty($_POST['query'])) { ?> <table cellspacing="1" id="1up_people" class="tablesorter" width="300"> <thead> <tr> <th>ID</th> <th>Player Name</th> <th>Player IP</th> <th>Player GUID</th> <th>Server</th> <th>Date</th> </tr> </thead> <tbody> <?php function super_unique($array) { $result = array_map("unserialize", array_unique(array_map("serialize", $array))); foreach ($result as $key => $value) { if ( is_array($value) ) { $result[$key] = super_unique($value); } } return $result; } if (!empty($_POST['query'])) { $query = trim($_POST['query']); $count = 0; $people = array(); $link = mysql_connect('localhost', 'mysqluser', 'yea right!'); if (!$link) { die('Could not connect: ' . mysql_error()); } mysql_select_db("1up"); $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name LIKE \"%$query%\" OR INET_NTOA(ip) LIKE \"%$query%\" OR guid LIKE \"%$query%\")"; $result = mysql_query($sql, $link); if (!$result) { die(mysql_error()); } // Now take the initial results and parse each column into its own array while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } // now for each name, ip, guid in results, find additional records $people2 = array(); foreach ($people AS $person) { $ip = $person['ip']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (ip = \"$ip\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people2[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } $people3 = array(); foreach ($people AS $person) { $guid = $person['guid']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (guid = \"$guid\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people3[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } $people4 = array(); foreach ($people AS $person) { $name = $person['name']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name = \"$name\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people4[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } // Combine people and people2 into just people $people = array_merge($people, $people2); $people = array_merge($people, $people3); $people = array_merge($people, $people4); $people = super_unique($people); foreach ($people AS $person) { $date = ($person['date']) ? date("M d, Y", $person['date']) : 'Before 8/1/10'; echo "<tr>\n"; echo "<td>".$person['id']."</td>"; echo "<td>".$person['name']."</td>"; echo "<td>".$person['ip']."</td>"; echo "<td>".$person['guid']."</td>"; echo "<td>".$person['server']."</td>"; echo "<td>".$date."</td>"; echo "</tr>\n"; $count++; } // Find Total Records //$result = mysql_query("SELECT id FROM 1up_people", $link); //$total = mysql_num_rows($result); mysql_close($link); } ?> </tbody> </table> <p> <?php echo $count." Records Found for \"".$_POST['query']."\" out of $total"; ?> </p> <?php } $time_stop = microtime(true); print("Done (ran for ".round($time_stop-$time_start)." seconds)."); ?> Any help at all is appreciated! Thank you.
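    A sketch of the main query-side fix, in Python for brevity (any DB-API connection to the same MySQL database, e.g. MySQLdb or mysql.connector, both of which use %s placeholders). The two big costs above are INET_NTOA(ip) LIKE ..., which applies a function to every row and so cannot use the ip index (an IP search is better handled by converting the input once with INET_ATON), and the 1 + 3N follow-up queries. One UNION over the three indexed columns returns the seeds plus every record sharing a name, ip, or guid in a single round trip, and the bound parameters also close the injection hole left by interpolating $_POST into the SQL:

        ALIAS_SQL = """
        SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
          FROM 1up_people
         WHERE name IN (SELECT name FROM 1up_people WHERE name LIKE %s OR guid LIKE %s)
        UNION
        SELECT id, name, INET_NTOA(ip), guid, INET_NTOA(server), date
          FROM 1up_people
         WHERE ip   IN (SELECT ip   FROM 1up_people WHERE name LIKE %s OR guid LIKE %s)
        UNION
        SELECT id, name, INET_NTOA(ip), guid, INET_NTOA(server), date
          FROM 1up_people
         WHERE guid IN (SELECT guid FROM 1up_people WHERE name LIKE %s OR guid LIKE %s)
        """

        def find_aliases(conn, query):
            like = "%" + query + "%"
            cur = conn.cursor()
            cur.execute(ALIAS_SQL, (like, like) * 3)   # bound, not interpolated
            return cur.fetchall()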


  • Problem validating an XSD file: The content type of a derived type and that of its base must both be mixed or both be element-only

    - by Paulo Tavares
    Hi, I have the following XML schema: <?xml version="1.0" encoding="UTF-8"?> <schema xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0" targetNamespace="urn:ietf:params:xml:ns:netconf:base:1.0" ... <complexType name="dataInlineType"> <xs:complexContent> <xs:extension base="xs:anyType"/> </xs:complexContent> </complexType> <complexType name="get-config_output_type__" > <complexContent> <extension base="netconf:dataInlineType"> <sequence> <element name="data"> <complexType> <sequence> <element name="__.get-config.output.data.A__" minOccurs="0" maxOccurs="unbounded" /> </sequence> </complexType> </element> <element name="__.get-config.A__" minOccurs="0" maxOccurs="unbounded"/> </sequence> </extension> </complexContent> And I'm getting the following error: cos-ct-extends.1.4.3.2.2.1.a: The content type of a derived type and that of its base must both be mixed or both be element-only. Type 'get-config_output_type__' is element only, but its base type is not. If I set mixed="true" on both types, I get another error: cos-nonambig: WC[##any] and "urn:ietf:params:xml:ns:netconf:base:1.0":data (or elements from their substitution group) violate "Unique Particle Attribution". During validation against this schema, ambiguity would be created for those two particles. I'm using Eclipse to validate my schema, so what can I do?


  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data database, and this is the code I deemed suitable for this task: NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity; entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext]; [fetchRequest setEntity:entity]; NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier]; [fetchRequest setPredicate:pred]; NSError *err; NSArray* icd9s = [passedContext executeFetchRequest:fetchRequest error:&err]; [fetchRequest release]; if ([icd9s count] > 0) { for (int i = 0; i < [icd9s count]; i++) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc]init]; NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"]; if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) { [pool release]; return [icd9s objectAtIndex:i]; } [pool release]; } } return nil; After more thorough testing it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent through the 1459 items). I feel like this isn't the most efficient way to do this. Any suggestions for a more memory-efficient way? Thanks in advance!


  • Avoiding duplication when setting properties on tasks in Rake

    - by Stray
    I have a bunch of rake building tasks. They each have unique input / output properties, but the majority of the properties I set on the tasks are the same each time. Currently I'm doing that via simple repetition like this: task :buildThisModule => "bin/modules/thisModule.swf" mxmlc "bin/modules/thisModule.swf" do |t| t.input = "src/project/modules/ThisModule.as" t.prop1 = value1 t.prop2 = value2 ... (And many more property=value sets that are the same in each task) end task :buildThatModule => "bin/modules/thatModule.swf" mxmlc "bin/modules/thatModule.swf" do |t| t.input = "src/project/modules/ThatModule.as" t.prop1 = value1 t.prop2 = value2 ... (And many more property=value sets that are the same in each task) end In my usual programming headspace I'd expect to be able to break out the population of the recurring task properties to a re-usable function. Is there a rake analogy for this? Some way I can have a single function where the shared properties are set on any task? Something equivalent to: task :buildThisModule => "bin/modules/thisModule.swf" mxmlc "bin/modules/thisModule.swf" do |t| addCommonTaskParameters(t) t.input = "src/project/modules/ThisModule.as" end task :buildThatModule => "bin/modules/thatModule.swf" mxmlc "bin/modules/thatModule.swf" do |t| addCommonTaskParameters(t) t.input = "src/project/modules/ThatModule.as" end Thanks.


  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that, depending on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table CREATE TABLE `PlayerGameRcd` ( `User` SMALLINT UNSIGNED NOT NULL, `Game` MEDIUMINT UNSIGNED NOT NULL, `GameResult` ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin', 'Kicked by System', 'Finished 5th', 'Finished 4th', 'Finished 3rd', 'Finished 2nd', 'Finished 1st', 'Game Aborted', 'Playing', 'Hide' ) NOT NULL DEFAULT 'Playing', `Inherited` TINYINT NOT NULL, `GameCounts` TINYINT NOT NULL, `Colour` TINYINT UNSIGNED NOT NULL, `Score` SMALLINT UNSIGNED NOT NULL DEFAULT 0, `NumLongTurns` TINYINT UNSIGNED NOT NULL DEFAULT 0, `Notes` MEDIUMTEXT, `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0, PRIMARY KEY (`Game`, `User`), UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`), INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`), INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`), CONSTRAINT `Constr_PlayerGameRcd_User_fk` FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `Constr_PlayerGameRcd_Game_fk` FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci The only column that is nullable is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?


  • What prevents a user from adding controls to an ASP.NET page client side?

    - by Curtis White
    This goes back to my other question, which I thought was sufficiently answered but upon reflection am not sure that it was (sorry). Backgrounder: I am generating a form dynamically. I am pulling the controls from the database. I must associate each control with a database ID which is not the user's session ID. I do this currently by storing my ID in the ID for the web control, with some other stuff to make it unique/clear what I am doing. On the post back, I iterate through all the controls on my web page checking for my special identifier, i.e., MyGeneratedTextBox_ID_Unique. This process enables 2 important steps: identifying that the control was one I generated, and also getting the ID for this input field. And all of this works, but I'm still concerned about the security of it. I do not see a security issue with showing the actual database IDs in this case, although I agree it is not desirable. However, I am concerned about the following possibilities: If a user could add a nefarious control to my collection and use that for a SQL injection attack. More academic, but if a user could somehow store data for fields they do not have access to by changing the IDs. I agree this is a "hack" of a way to do it. But my question is, is it a security risk, and is there an 'easy' way to do it in a less hacky way? I assume that only the controls that are created/instantiated on the page are added to the controls list, and thus all controls must be created server-side and thus the security issue is addressed, but I just wanted to validate. Thanks again. PS: I could see that adding a property for each control and encrypting the viewstate would be a little more secure.


  • Complex query on Cassandra

    - by Sadiqur Rahman
    I heard about the Cassandra database engine a few days ago and have been searching for good documentation on it. After studying Cassandra I gather it is more scalable than other data engines. I also read about Amazon SimpleDB, but as SimpleDB has a 10GB/table limitation and Google Datastore is slower than Amazon SimpleDB, I prefer not to use them (Google Datastore, Amazon SimpleDB). So, to make our site scale, especially for high write rates with massive data, I would like to use Cassandra as our data engine. But before starting with Cassandra I am confused about "how to handle complex data using Cassandra". I am giving the MySQL database structure below; please read it and give me a good suggestion.

        Users Table
            hasColumn ID Primary
            hasColumn email Unique
            hasColumn FirstName
            hasColumn LastName
        Category Table
            hasColumn ID Primary
            hasColumn Parent
            hasColumn Category
        Posts Table
            hasColumn ID Primary
            hasColumn UID Index foreign key linked to Users-ID
            hasColumn CID Index foreign key linked to Category-ID
            hasColumn Title
            hasColumn Post Index
            hasColumn PubDate
        Comments
            hasColumn ID Primary
            hasColumn UID Index foreign key linked to Users-ID
            hasColumn PID Index foreign key linked to Posts-ID
            hasColumn Comment
        User Group
            hasColumn ID Primary
            hasColumn Name
        UserToGroup Table (for many-to-many relation only)
            hasColumn UID foreign key linked to Users-ID
            hasColumn GID foreign key linked to Group-ID

    Finally, for your information, I would like to use the SimpleCassie PHP class http://code.google.com/p/simpletools-php/ So it will be very helpful if you can give me an example using SimpleCassie.


  • I'm doing a lot of list and dictionary sorting... and this is causing memory errors in my Python website

    - by alex
    I retrieved data from the log table in my database. Then I started finding unique users, comparing/sorting lists, etc. In the end I got down to this. stats = {'2010-03-19': {'date': '2010-03-19', 'unique_users': 312, 'queries': 1465}, '2010-03-18': {'date': '2010-03-18', 'unique_users': 329, 'queries': 1659}, '2010-03-17': {'date': '2010-03-17', 'unique_users': 379, 'queries': 1845}, '2010-03-16': {'date': '2010-03-16', 'unique_users': 434, 'queries': 2336}, '2010-03-15': {'date': '2010-03-15', 'unique_users': 390, 'queries': 2138}, '2010-03-14': {'date': '2010-03-14', 'unique_users': 460, 'queries': 2221}, '2010-03-13': {'date': '2010-03-13', 'unique_users': 507, 'queries': 2242}, '2010-03-12': {'date': '2010-03-12', 'unique_users': 629, 'queries': 3523}, '2010-03-11': {'date': '2010-03-11', 'unique_users': 811, 'queries': 4274}, '2010-03-10': {'date': '2010-03-10', 'unique_users': 171, 'queries': 1297}, '2010-03-26': {'date': '2010-03-26', 'unique_users': 299, 'queries': 1617}, '2010-03-27': {'date': '2010-03-27', 'unique_users': 323, 'queries': 1310}, '2010-03-24': {'date': '2010-03-24', 'unique_users': 352, 'queries': 2112}, '2010-03-25': {'date': '2010-03-25', 'unique_users': 330, 'queries': 1290}, '2010-03-22': {'date': '2010-03-22', 'unique_users': 329, 'queries': 1798}, '2010-03-23': {'date': '2010-03-23', 'unique_users': 329, 'queries': 1857}, '2010-03-20': {'date': '2010-03-20', 'unique_users': 368, 'queries': 1693}, '2010-03-21': {'date': '2010-03-21', 'unique_users': 329, 'queries': 1511}, '2010-03-29': {'date': '2010-03-29', 'unique_users': 325, 'queries': 1718}, '2010-03-28': {'date': '2010-03-28', 'unique_users': 340, 'queries': 1815}, '2010-03-30': {'date': '2010-03-30', 'unique_users': 329, 'queries': 1891}} It's not a big dictionary. But when I try to do one last thing...it craps out on me. for k, v in stats: mylist.append(v) too many values to unpack What the heck does that mean??? TOO MANY VALUES TO UNPACK.
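    The failure is Python semantics rather than memory: iterating a dict yields only its keys, so for k, v in stats tries to unpack a date string such as '2010-03-19' into two names, hence "too many values to unpack". A minimal sketch of the fix:

        mylist = []
        for k, v in stats.items():   # .iteritems() on the Python 2.x of that era
            mylist.append(v)

        # or simply:
        mylist = list(stats.values())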


  • Understanding REST through an example

    - by grifaton
    My only real exposure to the ideas of REST has been through Ruby on Rails' RESTful routing. This has suited me well for the kind of CRUD-based applications I have built with Rails, but consequently my understanding of RESTfulness is somewhat limited. Let's say we have a finite collection of Items, each of which has a unique ID, and a number of properties, such as colour, shape, and size (which might be undefined for some Items). Items can be used by a client for a period of time, but each Item can only be used by one client at once. Access to Items is regulated by a server. Clients can request the temporary use of certain items from a server. Usually, clients will only be interested in getting access to a number of Items with particular properties, rather than getting access to specific Items. When a client requests use of a number of Items, the server responds with a list of IDs corresponding to the request, or with a response that says that the requested Items are not currently available or do not exist. A client can make the following kinds of request: Tell me how many green triangle Items there are (in total/available). Give me use of 200 large red Items. I have finished with Items 21, 23, 23. Add 100 new red square Items. Delete 50 small green Items. Modify all big yellow pentagon Items to be blue. The toy example above is like a resource allocation problem I have had to deal with recently. How should I go about thinking about it RESTfully?
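    A sketch of one RESTful reading of the toy problem (paths and payloads are invented for illustration, not prescribed): model the temporary use of Items as a first-class "lease" resource, so acquiring and releasing Items becomes creating and deleting leases instead of special verbs on Items.

        GET    /items?color=green&shape=triangle          -> counts, total and available
        POST   /leases  {"size": "large", "color": "red", "count": 200}
                                                          -> 201 with leased item IDs,
                                                             or 409 if unavailable
        DELETE /leases/21                                 -> finished with Item 21
        POST   /items   {"count": 100, "color": "red", "shape": "square"}
        DELETE /items?size=small&color=green&limit=50
        PATCH  /items?size=big&color=yellow&shape=pentagon  {"color": "blue"}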


  • How do I get the program to know which annotation is selected and access its properties?

    - by kevin Mendoza
    So far my program can display a database of custom annotation views. Eventually I want my program to be able to display extra information after a button on the annotation bubble is clicked. Each element in the database has a unique entry number, so I thought it would be a good idea to add this entry number as a property of the custom annotation. The problem I am having is that after the button is clicked and the program switches to a new view, I am unable to retrieve the entry number of the annotation I selected. Below is the code that assigns the entry number property to the annotation: for (id mine in mines) { workingCoordinate.latitude = [[mine latitudeInitial] doubleValue]; workingCoordinate.longitude = [[mine longitudeInitial] doubleValue]; iProspectAnnotation *tempMine = [[iProspectAnnotation alloc] initWithCoordinate:workingCoordinate]; [tempMine setTitle:[mine mineName]]; [tempMine setAnnotationEntryNumber:[mine entryNumber]]; } [mines dealloc]; When the button on an annotation is selected, this is the code that initializes the new view: - (void)mapView:(MKMapView *)mapView annotationView:(MKAnnotationView *)view calloutAccessoryControlTapped:(UIControl *)control { mineInformationController *controller = [[mineInformationController alloc] initWithNibName:@"mineInformationController" bundle:nil]; controller.modalTransitionStyle = UIModalTransitionStyleCrossDissolve; [self presentModalViewController:controller animated:YES]; [controller release]; } And lastly, here is my attempt at retrieving the entryNumber property from the new view so that I can compare it to the mines database and retrieve more information on the array element. iProspectFresno_LiteAppDelegate *appDelegate = (iProspectFresno_LiteAppDelegate *)[[UIApplication sharedApplication] delegate]; NSMutableArray* mines = [[NSMutableArray alloc] initWithArray:(NSMutableArray *)appDelegate.mines]; for(id mine in mines) { if ([[mine entryNumber] isEqualToNumber: /*the entry Number of the selected annotation*/]) { /* display the information in the mine object */ } } So how do I access this entry number property in this new view controller?


  • How to store and locate multiple interface types within a Delphi TInterfaceList

    - by Brian Frost
    Hi, I'm storing small interfaces from a range of objects into a single TInterfaceList 'store' with the intention of offering lists of specific interface types to the end user, so each interface will expose a 'GetName' function but all other methods are unique to that interface type. For example, here are two interfaces: IBase = interface //---------------------------------------- function GetName : string; //---------------------------------------- end; IMeasureTemperature = interface(IBase) //------------------------------------ function MeasureTemperature : double; //---------------------------------------- end; IMeasureHumidity = interface(IBase) //---------------------------------------- function MeasureHumidity: double; //---------------------------------------- end; I put several of these interfaces into a single TInterfaceList and then I'd like to scan the list for a specific interface type (e.g. 'IMeasureTemperature'), building another list of pointers to the objects exporting those interfaces. I wish to make no assumptions about the locations of those objects; some may export more than one type of interface. I know I could do this with a class hierarchy using something like: If FList[I] is TMeasureTemperature then .. but I'd like to do something similar with an interface type. Is this possible?


  • How to insert several thousand columns into sqlite3?

    - by user291071
    Similar to my last question, but I ran into a problem. Let's say I have a simple dictionary like the one below, but big. When I try inserting a big dictionary using the methods below, I get an operational error from c.execute(schema) for having too many columns, so what should my alternative method be for populating the database's columns? Use the ALTER TABLE command and add each one individually? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() dic = { 'x1':{'y1':1.0,'y2':0.0}, 'x2':{'y1':0.0,'y2':2.0,'joe bla':1.5}, 'x3':{'y2':2.0,'y3 45 etc':1.5} } # 1. Find the unique column names. columns = set() for _, cols in dic.items(): for key, _ in cols.items(): columns.add(key) # 2. Create the schema. col_defs = [ # Start with the column for our key name '"row_name" VARCHAR(2) NOT NULL PRIMARY KEY' ] for column in columns: col_defs.append('"%s" REAL NULL' % column) schema = "CREATE TABLE simple (%s);" % ",".join(col_defs) c.execute(schema) # 3. Loop through each row for row_name, cols in dic.items(): # Compile the data we have for this row. col_names = cols.keys() col_values = [str(val) for val in cols.values()] # Insert it. sql = 'INSERT INTO simple ("row_name", "%s") VALUES ("%s", "%s");' % ( '","'.join(col_names), row_name, '","'.join(col_values) )
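    SQLite enforces a compile-time column limit (SQLITE_MAX_COLUMN, 2000 by default), so a wide dictionary will keep hitting this wall no matter how the schema is created. The usual alternative is to pivot: store one row per (row, column, value) cell, which needs no schema changes however many distinct keys appear. A sketch along those lines:

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        # One row per cell instead of one column per key.
        c.execute("""CREATE TABLE IF NOT EXISTS cell (
                         row_name TEXT NOT NULL,
                         col_name TEXT NOT NULL,
                         value    REAL,
                         PRIMARY KEY (row_name, col_name))""")

        dic = {
            'x1': {'y1': 1.0, 'y2': 0.0},
            'x2': {'y1': 0.0, 'y2': 2.0, 'joe bla': 1.5},
            'x3': {'y2': 2.0, 'y3 45 etc': 1.5},
        }
        c.executemany("INSERT OR REPLACE INTO cell VALUES (?, ?, ?)",
                      [(row, col, val)
                       for row, cols in dic.items()
                       for col, val in cols.items()])
        con.commit()

        # Rebuild one logical row on the way out:
        c.execute("SELECT col_name, value FROM cell WHERE row_name = ?", ('x2',))
        print(dict(c.fetchall()))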


  • How to insert and query by row and column in sqlite3 with Python (a great tutorial problem)

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do three things? How do I insert or update a value at a specific row and column? How do I select a value for each row and column? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() c.execute('''create table simple (links text)''') con.commit() dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}} ucols = {} ## my current thoughts are collect all row values and all column values from dic and populate table row and columns accordingly how to call by row and column i havn't figured out yet ##populate rows in first column for row in dic: print row c.execute("""insert into simple ('links') values ('%s')"""%row) con.commit() ##unique columns for row in dic: print row for col in dic[row]: print col ucols[col]=dic[row][col] ##populate columns for col in ucols: print col c.execute("alter table simple add column '%s' 'float'" % col) con.commit() #functions needed ##insert values into sql by row x and column y?how to do this e.g. x1 and y2 should put in 0.0 ##I tried as follows didn't work for row in dic: for col in dic[row]: val =dic[row][col] c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'"""%(col,val,row)) con.commit() ##update value at a specific row x and column y? ## select a value at a specific row x and column y?
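    A sketch of the cell-level operations. Two details worth noting against the code above: values belong in ? placeholders rather than quoted '%f' substitutions, and WHERE 'links'='x1' compares the literal string 'links' with 'x1' (always false); identifiers need double quotes, string literals single quotes.

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        def set_cell(row, col, val):
            # Column names can't be bound as parameters; splice them in with
            # double quotes (and only ever use trusted, known column names).
            c.execute('UPDATE simple SET "%s" = ? WHERE links = ?' % col,
                      (val, row))
            con.commit()

        def get_cell(row, col):
            c.execute('SELECT "%s" FROM simple WHERE links = ?' % col, (row,))
            hit = c.fetchone()
            return hit[0] if hit else None

        set_cell('x1', 'y2', 0.0)     # update the cell at row x1, column y2
        print(get_cell('x1', 'y2'))   # -> 0.0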


  • Distinct or group by on some columns but not others

    - by Nazadus
    I have a view that I'm trying to filter with something similar to DISTINCT on some columns but not others. The view looks like this:

        Name
        LastName
        Zip
        Street1
        HouseholdID (may not be unique because it may have multiple addresses -- think of it in the logical sense as grouping persons but not physical locations; if you look up HouseholdID 4130, you may get two rows... or more, because the person may have multiple mailing locations)
        City
        State

    I need to pull all those columns but filter on LastName, Zip, and Street1. Here's the fun part: the filter is arbitrary -- meaning I don't care which one of the duplicates goes away. This is for a mail-out type thing and the other information is not used for any other reason than to look up a specific person if needed (I have no idea why). So, given one of the records, you can easily figure out the removed ones. As it stands now, my SQL-fu fails me and I'm filtering in C#, which is incredibly slow and is pretty much a foreach that starts with an empty list and adds the row in if the combined last name, zip, and street are not in the list. I feel like I'm missing a simple, basic part of SQL that I should be understanding.
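    A hedged sketch of the usual single-statement answer: on SQL Server 2005+ (or any engine with window functions), ROW_NUMBER() keeps one arbitrary survivor per (LastName, Zip, Street1) group, so the C# post-filtering loop disappears. "mailing_view" and the cursor are stand-ins for the real view and data-access handle:

        dedupe_sql = """
        SELECT Name, LastName, Zip, Street1, HouseholdID, City, State
        FROM (
            SELECT v.*,
                   ROW_NUMBER() OVER (PARTITION BY LastName, Zip, Street1
                                      ORDER BY HouseholdID) AS rn
            FROM mailing_view AS v
        ) AS ranked
        WHERE rn = 1
        """
        cursor.execute(dedupe_sql)   # any DB-API cursor on the database
        unique_rows = cursor.fetchall()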


  • Separating and counting CSV entries from a database (Access/ASP Classic)

    - by Katherine Perotta
    Hey, I could really use some help with this one. I have an FAQ with multiple "tags" and I would like to separate and count them. They are currently in the database as follows:

        ID  TITLE          CONTENT          TAGS
        Sample records:
        1   sampletitle 1  samplecontent    tag1,tag2,tag3
        2   sampletitle 2  moresamplestuff  tag3,tag4,tag5

    How could I go about counting the number of times each tag is used? In the end, would it be easier to just create a separate table called TAGS, with a single tag corresponding to a single ID in FAQ? The only reason I don't prefer doing something like that is because I have so much data already it would take quite a while. However, if there's no alternative or if it's easier than doing string parsing like that, I'm willing to do it. The goal is to display each unique tag and the number of times it is used. Would it be better to do the heavy lifting in the database or in ASP? I have gotten as far as getting a list of all tags and displaying them in an array (with each tag separated). So at this point what I need to do is count each value and then remove the duplicates (while preserving the count somewhere). This is in ASP Classic using an Access database. Thanks!
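    The counting itself is just split-and-tally. A Python sketch of the idea (the same shape ports to a VBScript Dictionary in ASP Classic, or to a separate FAQ-to-tag join table counted with GROUP BY); rows stands in for the TAGS column fetched from the FAQ table:

        from collections import Counter

        rows = ["tag1,tag2,tag3", "tag3,tag4,tag5"]   # one string per FAQ record

        counts = Counter(tag.strip()
                         for csv in rows
                         for tag in csv.split(",")
                         if tag.strip())

        for tag, n in counts.most_common():
            print(tag, n)   # tag3 2, then the rest with 1 apiece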


  • Issue with transparent texture on 3D primitive, XNA 4.0

    - by Bevin
    I need to draw a large set of cubes, all with (possibly) unique textures on each side. Some of the textures also have parts of transparency. The cubes that are behind ones with transparent textures should show through the transparent texture. However, it seems that the order in which I draw the cubes decides if the transparency works or not, which is something I want to avoid. Look here: cubeEffect.CurrentTechnique = cubeEffect.Techniques["Textured"]; Block[] cubes = new Block[4]; cubes[0] = new Block(BlockType.leaves, new Vector3(0, 0, 3)); cubes[1] = new Block(BlockType.dirt, new Vector3(0, 1, 3)); cubes[2] = new Block(BlockType.log, new Vector3(0, 0, 4)); cubes[3] = new Block(BlockType.gold, new Vector3(0, 1, 4)); foreach(Block b in cubes) { b.shape.RenderShape(GraphicsDevice, cubeEffect); } This is the code in the Draw method. It produces this result: As you can see, the textures behind the leaf cube are not visible on the other side. When I reverse indices 3 and 0 in the array, I get this: It is clear that the order of drawing is affecting the cubes. I suspect it may have to do with the blend mode, but I have no idea where to start with that.
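    This is the classic alpha-blending order problem rather than a bug in the cube code: the depth buffer records the nearer transparent face, and cubes drawn later behind it are rejected. The usual fix is to draw opaque geometry first, then the transparent cubes sorted back-to-front from the camera each frame with depth writes disabled (DepthStencilState.DepthRead in XNA 4.0). A language-agnostic sketch, in Python only for brevity (has_transparency, position, and draw are assumed stand-ins):

        def draw_scene(cubes, camera_pos, draw):
            opaque = [c for c in cubes if not c.has_transparency]
            transparent = [c for c in cubes if c.has_transparency]

            for cube in opaque:            # any order; the depth buffer sorts these
                draw(cube)

            transparent.sort(key=lambda c: dist_sq(c.position, camera_pos),
                             reverse=True)  # farthest first
            for cube in transparent:        # draw with depth writes off
                draw(cube)

        def dist_sq(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))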


  • Using variables before include()ing them

    - by phenry
    I'm using a file, page.php, as an HTML container for several content files; i.e., page.php defines most of the common structure of the page, and the content files just contain the text that's unique to every page. What I would like to do is include some PHP code with each content file that defines metadata for the page such as its title, a banner graphic to use, etc. For example, a content file might look like this (simplified): <?php $page_title="My Title"; ?> <h1>Hello world!</h1> The name of the file would be passed as a URL parameter to page.php, which would look like this: <html> <head> <title><?php echo $page_title; ?></title> </head> <body> <?php include($_GET['page']); ?> </body> </html> The problem with this approach is that the variable gets defined after it is used, which of course won't work. Output buffering also doesn't seem to help. Is there an alternate approach I can use? I'd prefer not to define the text in the content file as a PHP heredoc block, because that smashes the HTML syntax highlighting in my text editor. I also don't want to use JavaScript to rewrite page elements after the fact, because many of these pages don't otherwise use JavaScript and I'd rather not introduce it as a dependency if I don't have to.


  • SQL Duplicates Issue

    - by jeff
    I have two tables: Product and ProductRateDetail. The parent table is Product. I have duplicate records in the Product table which need to be unique. There are entries in the ProductRateDetail table which correspond to duplicate records in the Product table. Somehow I need to update the ProductRateDetail table to match the original (older) ID from the Product table and then remove the duplicates from the Product table. I would do this manually but there are hundreds of records. I.e. something like UPDATE tbl_productRateDetail SET productID = (originalID from tbl_product) then something like DELETE from tbl_product WHERE duplicate ID, and only delete the recently added IDs. Data example (sorry, can't work out this formatting thing):

        select * from dbo.Product where ProductCode = '10003'

        tbl_product
        ProductID  ProductTypeID  ProductDescription   ProductCode  ProductSize
        365        1              BEND DOUBLE FLANGED  10003        80mmX90deg
        1354       1              BEND DOUBLE FLANGED  10003        80mmX90deg

        SELECT * FROM [MSTS2].[dbo].[ProductRateDetail] WHERE ProductID in (365,1354)

        tbl_productratedetail
        ProductRateDetailID  ProductRateID  ProductID  UnitRate
        365                  1              365        16.87
        1032                 5              365        16.87
        2187                 10             365        16.87
        2689                 11             365        16.87
        3191                 12             365        16.87
        7354                 21             1354       21.30
        7917                 22             1354       21.30
        8480                 23             1354       21.30
        9328                 25             1354       21.30
        9890                 26             1354       21.30
        10452                27             1354       21.30

    Please help!
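    A sketch of the two statements, assuming a shared ProductCode is what makes product rows duplicates (as in the example); widen the grouping if the real rule also involves description and size. sqlite3 only makes the sketch runnable here; the same SQL shape works on the real tables:

        import sqlite3   # stand-in engine; the statements are plain SQL

        con = sqlite3.connect("products.db")
        c = con.cursor()

        # 1. Re-point each rate detail at the oldest (lowest) ProductID
        #    sharing its product's ProductCode.
        c.execute("""
            UPDATE tbl_productRateDetail
            SET productID = (SELECT MIN(p2.ProductID)
                             FROM tbl_product p1
                             JOIN tbl_product p2
                               ON p2.ProductCode = p1.ProductCode
                             WHERE p1.ProductID = tbl_productRateDetail.productID)
        """)

        # 2. Remove every product row that is not the oldest for its code.
        c.execute("""
            DELETE FROM tbl_product
            WHERE ProductID NOT IN (SELECT MIN(ProductID)
                                    FROM tbl_product
                                    GROUP BY ProductCode)
        """)
        con.commit()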


  • Processing a database queue across multiple threads - design advice

    - by rwmnau
    I have a SQL Server table full of orders that my program needs to "follow up" on (call a webservice to see if something has been done with them). My application is multi-threaded, and could have instances running on multiple servers. Currently, every so often (on a Threading timer), the process selects 100 rows, at random (ORDER BY NEWID()), from the list of "unconfirmed" orders and checks them, marking off any that come back successfully. The problem is that there's a lot of overlap between the threads, and between the different processes, and there's no guarantee that a new order will get checked any time soon. Also, some orders will never be "confirmed" and are dead, which means that they get in the way of orders that need to be confirmed, slowing the process down if I keep selecting them over and over. What I'd prefer is that all outstanding orders get checked, systematically. I can think of two easy ways to do this: The application fetches one order to check at a time, passing in the last order it checked as a parameter, and SQL Server hands back the next order that's unconfirmed. More database calls, but this ensures that every order is checked in a reasonable timeframe. However, different servers may re-check the same order in succession, needlessly. The SQL Server keeps track of the last order it asked a process to check up on, maybe in a table, and gives a unique order to every request, incrementing its counter. This involves storing the last order somewhere in SQL, which I wanted to avoid, but it also ensures that threads won't needlessly check the same orders at the same time. Are there any other ideas I'm missing? Does this even make sense? Let me know if I need some clarification.
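    A hedged sketch of the usual SQL Server work-queue pattern, which gets both goals (systematic coverage, no cross-server overlap) without a separate counter table: each worker atomically claims a batch under a short lease, and the READPAST hint makes concurrent claimers skip rows another server has already locked. The column names confirmed and claimed_until are invented for illustration, and dead orders can be aged out by also tracking an attempt count:

        CLAIM_SQL = """
        WITH batch AS (
            SELECT TOP (100) *
            FROM Orders WITH (ROWLOCK, UPDLOCK, READPAST)
            WHERE confirmed = 0
              AND claimed_until < GETUTCDATE()
            ORDER BY id
        )
        UPDATE batch
        SET claimed_until = DATEADD(minute, 5, GETUTCDATE())
        OUTPUT inserted.id;
        """

        def claim_batch(conn):
            # conn: e.g. a pyodbc connection to the SQL Server database
            cur = conn.cursor()
            cur.execute(CLAIM_SQL)
            ids = [row[0] for row in cur.fetchall()]
            conn.commit()
            return ids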


  • MySQL: SELECT highest column value when WHERE finds similar entries

    - by Ike
    My question is comparable to this one, but not quite the same. I have a database with a huge amount of books, with different editions of some of the same book titles. I'm looking for an SQL statement giving me the highest edition number of each of the titles I'm selecting with a WHERE clause (to find specific book series). Here's what the table looks like:

        |id|title                    |edition|year|
        |--|-------------------------|-------|----|
        |01|Serie One Title One      |1      |2007|
        |02|Serie One Title One      |2      |2008|
        |03|Serie One Title One      |3      |2009|
        |04|Serie One Title Two      |1      |2001|
        |05|Serie One Title Three    |1      |2008|
        |06|Serie One Title Three    |2      |2009|
        |07|Serie One Title Three    |3      |2010|
        |08|Serie One Title Three    |4      |2011|
        |--|-------------------------|-------|----|

    The result I'm looking for is this:

        |id|title                    |edition|year|
        |--|-------------------------|-------|----|
        |03|Serie One Title One      |3      |2009|
        |04|Serie One Title Two      |1      |2001|
        |08|Serie One Title Three    |4      |2011|
        |--|-------------------------|-------|----|

    The closest I got was using this statement:

        select id, title, max(edition), max(year) from books where title like "serie one%" group by name;

    but it returns the highest edition and year and includes the first id it finds:

        |--|-----------------------|-------|----|
        |01|Serie One Title One    |3      |2009|
        |04|Serie One Title Two    |1      |2001|
        |05|Serie One Title Three  |4      |2011|
        |--|-----------------------|-------|----|

    This fancy join also comes close, but doesn't give the right result:

        select b.id, b.title, b.edition, b.year from books b inner join (select name, max(edition) as maxedition from books group by title) g on b.edition = g.maxedition where b.title like "serie one%" group by title;

    Using this I'm getting unique titles, but mostly old editions.
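    The grouping key has to take part in the join: pair each row's title with that title's maximum edition, instead of joining on the edition number alone (which matches any title whose edition happens to equal some other title's maximum). A sketch, runnable through any Python DB-API cursor on the same MySQL database:

        sql = """
        SELECT b.id, b.title, b.edition, b.year
        FROM books AS b
        JOIN (SELECT title, MAX(edition) AS maxedition
              FROM books
              WHERE title LIKE 'serie one%'
              GROUP BY title) AS g
          ON g.title = b.title
         AND g.maxedition = b.edition
        """
        cursor.execute(sql)
        for id_, title, edition, year in cursor.fetchall():
            print(id_, title, edition, year)   # the 03, 04 and 08 rows only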


  • Determining the chances of an event occurring when it hasn't occurred yet

    - by sanity
    A user visits my website at time t, and they may or may not click on a particular link I care about, if they do I record the fact that they clicked the link, and also the duration since t that they clicked it, call this d. I need an algorithm that allows me to create a class like this: class ClickProbabilityEstimate { public void reportImpression(long id); public void reportClick(long id); public double estimateClickProbability(long id); } Every impression gets a unique id, and this is used when reporting a click to indicate which impression the click belongs to. I need an algorithm that will return a probability, based on how much time has past since an impression was reported, that the impression will receive a click, based on how long previous clicks required. Clearly one would expect that this probability will decrease over time if there is still no click. If necessary, we can set an upper-bound, beyond which we consider the click probability to be 0 (eg. if its been an hour since the impression occurred, we can be pretty sure there won't be a click). The algorithm should be both space and time efficient, and hopefully make as few assumptions as possible, while being elegant. Ease of implementation would also be nice. Any ideas?
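    One concrete shape for estimateClickProbability, sketched in Python: treat the click delays d from impressions old enough to be considered finished as an empirical survival curve, together with the count of impressions that never clicked within the cutoff. The estimate for an impression of age t is then #(d > t) / (#(d > t) + #never). Everything here (the one-hour cutoff, the training bookkeeping behind reportImpression/reportClick, thread safety) is elided or assumed for illustration:

        import bisect

        class ClickProbabilityEstimate:
            def __init__(self, delays, never_clicked):
                # delays: seconds from impression to click, for clicked ones
                # never_clicked: impressions with no click inside the cutoff
                self.delays = sorted(delays)
                self.never = never_clicked

            def estimate(self, t):
                # clicks that arrived later than age t
                later = len(self.delays) - bisect.bisect_right(self.delays, t)
                at_risk = later + self.never
                return later / at_risk if at_risk else 0.0

        est = ClickProbabilityEstimate(delays=[2, 5, 5, 30, 240],
                                       never_clicked=15)
        print(est.estimate(10))   # 2 clicks came later than 10s -> 2 / 17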


  • Why can't Git merge file changes with a modified parent/master?

    - by Andy
    I have a file with one line in it. I create a branch and add a second line to the same file. Save and commit to the branch. I switch back to the master and add a different second line to the file. Save and commit to the master. So there are now 3 unique lines in total. If I now try to merge the branch back to the master, it suffers a merge conflict. Why can't Git simply merge each line, one after the other? My attempt at merging behaves something like this:

        PS D:\dev\testing\test1> git merge newbranch
        Auto-merging hello.txt
        CONFLICT (content): Merge conflict in hello.txt
        Automatic merge failed; fix conflicts and then commit the result.
        PS D:\dev\testing\test1> git diff
        diff --cc hello.txt
        index 726eeaf,e48d31a..0000000
        --- a/hello.txt
        +++ b/hello.txt
        @@@ -1,2 -1,2 +1,6 @@@
          This is the first line.
        - New line added by master.
         -Added a line in newbranch.
        ++<<<<<<< HEAD
        ++New line added by master.
        ++=======
        ++Added a line in newbranch.
        ++>>>>>>> newbranch

    Is there a way to make it slot lines in automatically, one after the other?


  • What is the standard way to bundle OSGi dependent libraries?

    - by Chris
    Hi, I have a project that references a number of open source libraries, some new, some not so new. That said, they are all stable and I wish to stick with my chosen versions until I have time to migrate to the newer versions (I tested hsqldb 2.0 yesterday and it contains many API changes). One of the libraries I wish to embed is Jasper Reports, but as you all surely know, it comes with a mountain of supporting jar files, and I only need a known subset of the mountain, therefore I am planning to custom-bundle all of my dependent libraries. So: Does everyone custom-make their own OSGi bundles for the open-source libraries they are using, or is there a master source of OSGi versions of common libraries? Also, I was thinking that it would be far simpler for each of my bundles simply to embed their dependent jars within the bundle itself. Is this possible? If I choose to embed the 3rd party FOC libraries within a bundle, I assume I will need to produce 2 jar files, one without the embedded libraries (for libraries to be loaded via the classpath via the standard classloader), and one OSGi version that includes the embedded library; therefore should I choose a bundle name like this <<myprojectname>>-<<subproject>>-osgi-.1.0.0.jar ? If I cannot embed the open source libraries and choose to custom-bundle them (via bnd), should I choose a unique bundle name to avoid conflict with a possible official bundle? e.g. <<myprojectname>>-<<3rdpartylibname>>-<<3rdpartylibversion>>.jar ? My non-OSGi-enabled project currently scans for custom plugins by scanning the META-INF folders in my various plugin jars via Service.providers(...). If I go OSGi, will this mechanism still work?


  • Speed comparison - Template specialization vs. Virtual Function vs. If-Statement

    - by Person
    Just to get it out of the way... Premature optimization is the root of all evil; make use of OOP; etc. I understand. Just looking for some advice regarding the speed of certain operations that I can store in my grey matter for future reference. Say you have an Animation class. An animation can be looped (plays over and over) or not looped (plays once), it may have unique frame times or not, etc. Let's say there are 3 of these "either or" attributes. Note that any method of the Animation class will at most check for one of these (i.e. this isn't a case of a giant branch of if-elseif). Here are some options. 1) Give it boolean members for the attributes given above, and use an if statement to check against them when playing the animation to perform the appropriate action. Problem: a conditional is checked every single time the animation is played. 2) Make a base animation class, and derive other animation classes such as LoopedAnimation and AnimationUniqueFrames, etc. Problem: a vtable check upon every call to play the animation, given that you have something like a vector<Animation>. Also, making a separate class for all of the possible combinations seems code-bloaty. 3) Use template specialization, and specialize those functions that depend on those attributes. Like template<bool looped, bool uniqueFrameTimes> class Animation. Problem: the problem with this is that you couldn't just have a vector<Animation> for something's animations. Could also be bloaty. I'm wondering what kind of speed each of these options offers? I'm particularly interested in the 1st and 2nd options because the 3rd doesn't allow one to iterate through a general container of Animations. In short, what is faster - a vtable fetch or a conditional?

