Search Results

Search found 4640 results on 186 pages for 'unique'.

  • Multi-client C# ODBC (Sybase/Oracle/MSSQL) table access question.

    - by Hamish Grubijan
    I am working on a feature that would allow clients to pick a unique identifier (ci_name). The code below is a generic version that gets expanded to the right SQL depending on the vendor. Hopefully it makes sense. #include "sql.h" create table client_identification ( ci_id TYPE_ID IDENTITY, ci_name varchar(64) not null, constraint ci_pk primary key (ci_name) ); go CREATE_SEQUENCE(ci_id) There will be simple stored procedures for adding, retrieving, and deleting these user records. This will be used by several admins. This will not happen very frequently, but there is still a possibility that something will be added or deleted since the list was initially retrieved. I have not yet decided if I need to detect the case of a double delete, but the user name cannot be created twice - the primary key constraint will object. I want to be able to detect this particular case and display something like: "you snooze - you lose." :) I would like to leverage the PK constraint instead of doing some extra SQL gymnastics. So, how can I detect this case cleanly, so that it works in MS SQL 2008, Sybase, and Oracle? I hope to do better than catch a general ODBC exception, parse out the text, and look for what Sybase, Oracle, and MSSQL would give me back. Oracle is a little different. We actually prepend these variables to the Oracle version of stored procedures because they are not available otherwise: Vret_val out number, Vtran_count in out number, Vmessage_count in out number, Thanks. General helpful tips and comments are welcome, except for naming convention ones (I do not have a choice here, plus I mangled the actual names a bit).
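
    A note on the portable signal here (not from the original post): a primary-key or unique violation is reported through ODBC with an SQLSTATE in class 23 ("integrity constraint violation") by SQL Server, Sybase, and Oracle alike, and in C# that state is exposed as OdbcError.SQLState on the OdbcException.Errors collection, so the handler can key off the state code instead of parsing vendor message text. A minimal sketch of the idea, shown here in Python with pyodbc (the DSN, credentials, and the single-column insert are illustrative assumptions):

        # Sketch only -- not the original C# code; DSN and names are placeholders.
        import pyodbc

        def add_client_identifier(conn, ci_name):
            """Try to insert; return False if the name was already taken."""
            cur = conn.cursor()
            try:
                cur.execute(
                    "INSERT INTO client_identification (ci_name) VALUES (?)",
                    ci_name)
                conn.commit()
                return True
            except pyodbc.IntegrityError as exc:
                conn.rollback()
                # The DB-API maps constraint violations to IntegrityError;
                # pyodbc reports the SQLSTATE (class 23 here) as the first arg.
                if str(exc.args[0]).startswith("23"):
                    return False        # "you snooze - you lose"
                raise
            finally:
                cur.close()

        conn = pyodbc.connect("DSN=mydsn;UID=admin;PWD=secret")  # hypothetical DSN
        if not add_client_identifier(conn, "new_admin"):
            print("That identifier was just created by someone else.")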

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do: read data from the first text file and, for every line of data, add data to a DataTable using CarID as my unique field. Read data from the second text file and look for an existing CarID in the DataTable; if it exists, update that row. If it doesn't exist, add a new row. Once I'm done, push the contents of the DataTable to the database. What I have so far: Dim sSQL As String = "SELECT * FROM tblCars" Dim da As New OleDb.OleDbDataAdapter(sSQL, conn) Dim ds As New DataSet da.Fill(ds, "CarData") Dim cb As New OleDb.OleDbCommandBuilder(da) 'loop read a line of text and parse it out. gets dd, dc, and carId 'create a new empty row Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow() 'update the new row with fresh data dsNewRow.Item("DriveDate") = dd dsNewRow.Item("DCode") = dc dsNewRow.Item("CarNum") = carID 'about 15 more fields 'add the filled row to the DataSet table ds.Tables("CarData").Rows.Add(dsNewRow) 'end loop 'update the database with the new rows da.Update(ds, "CarData") Questions: In constructing my table I use "SELECT * FROM tblCars", but what if that table has millions of records already? Is that not a waste of resources? Should I be trying something different if I want to update with new records? Once I'm done with the first text file I then go to my next text file. What's the best approach here: to first look for an existing record based on CarNum, or to create a second table and then merge the two at the end? Finally, when the DataTable is done being populated and I'm pushing it to the database, I want to make sure that if records already exist with the three primary fields (DriveDate, DCode, and CarNum) they get updated with new fields, and if they don't exist those records get appended. Is that possible with my process? tia AGP

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC with the following architecture (MVVM): WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as the DB). All are different projects. In the data access layer we have created different entity models (edmx) based on functionality: the tables under a particular flow are grouped into their own entity model. We are using self-tracking entities to communicate back and forth with the WPF client through the WCF service. For a single model everything works fine, but when we created multiple models a few issues started coming up. The multiple models share a few duplicate tables/entities. The two problems are: 1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects get created. E.g. CompanyModel(edmx) - Company(Entity) - ObjectChangeTracker, ObjectState; ProductModel(edmx) - Customer(Entity) - ObjectChangeTracker1, ObjectState1; OrderModel(edmx) - Order(Entity) - ObjectChangeTracker2, ObjectState2. Is there any way to avoid this? 2) A few tables are shared across the models, e.g. Company(Entity) is used in all the models above. At compile time this does not throw any error, but at run time it gives an error saying "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type "Company"". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the name of the entity in the same assembly? Thanks in advance; I'd appreciate it if anyone has an approach for these issues. Thanks, Kiran

  • How do I get the program to know which annotation is selected and access its properties?

    - by kevin Mendoza
    So far my program can display a database of custom annotation views. Eventually I want my program to be able to display extra information after a button on the annotation bubble is clicked. Each element in the database has a unique entry Number, so I thought it would be a good idea to add this entry number as a property of the custom annotation. The problem I am having is that after the button is clicked and the program switches to a new view I am unable to retrieve the entry number of the annotation I selected. Below is the code that assigns the entry Number property to the annotation: for (id mine in mines) { workingCoordinate.latitude = [[mine latitudeInitial] doubleValue]; workingCoordinate.longitude = [[mine longitudeInitial] doubleValue]; iProspectAnnotation *tempMine = [[iProspectAnnotation alloc] initWithCoordinate:workingCoordinate]; [tempMine setTitle:[mine mineName]]; [tempMine setAnnotationEntryNumber:[mine entryNumber]]; } [mines dealloc]; When the button on an annotation is selected, this is the code that initializes the new view: - (void)mapView:(MKMapView *)mapView annotationView:(MKAnnotationView *)view calloutAccessoryControlTapped:(UIControl *)control { mineInformationController *controller = [[mineInformationController alloc] initWithNibName:@"mineInformationController" bundle:nil]; controller.modalTransitionStyle = UIModalTransitionStyleCrossDissolve; [self presentModalViewController:controller animated:YES]; [controller release]; } and lastly is my attempt at retrieving the entryNumber property from the new view so that I can compare it to the mines database and retrieve more information on the array element. iProspectFresno_LiteAppDelegate *appDelegate = (iProspectFresno_LiteAppDelegate *)[[UIApplication sharedApplication] delegate]; NSMutableArray* mines = [[NSMutableArray alloc] initWithArray:(NSMutableArray *)appDelegate.mines]; for(id mine in mines) { if ([[mine entryNumber] isEqualToNumber: /*the entry Number of the selected annotation*/]) { /* display the information in the mine object */ } } So how do I access this entry number property in this new view controller?

  • Using variables before include()ing them

    - by phenry
    I'm using a file, page.php, as an HTML container for several content files; i.e., page.php defines most of the common structure of the page, and the content files just contain the text that's unique to every page. What I would like to do is include some PHP code with each content file that defines metadata for the page such as its title, a banner graphic to use, etc. For example, a content file might look like this (simplified): <?php $page_title="My Title"; ?> <h1>Hello world!</h1> The name of the file would be passed as a URL parameter to page.php, which would look like this: <html> <head> <title><?php echo $page_title; ?></title> </head> <body> <?php include($_GET['page']); ?> </body> </html> The problem with this approach is that the variable gets defined after it is used, which of course won't work. Output buffering also doesn't seem to help. Is there an alternate approach I can use? I'd prefer not to define the text in the content file as a PHP heredoc block, because that smashes the HTML syntax highlighting in my text editor. I also don't want to use JavaScript to rewrite page elements after the fact, because many of these pages don't otherwise use JavaScript and I'd rather not introduce it as a dependency if I don't have to.

  • Processing a database queue across multiple threads - design advice

    - by rwmnau
    I have a SQL Server table full of orders that my program needs to "follow up" on (call a webservice to see if something has been done with them). My application is multi-threaded, and could have instances running on multiple servers. Currently, every so often (on a Threading timer), the process selects 100 rows, at random (ORDER BY NEWID()), from the list of "unconfirmed" orders and checks them, marking off any that come back successfully. The problem is that there's a lot of overlap between the threads, and between the different processes, and there's no guarantee that a new order will get checked any time soon. Also, some orders will never be "confirmed" and are dead, which means that they get in the way of orders that need to be confirmed, slowing the process down if I keep selecting them over and over. What I'd prefer is that all outstanding orders get checked, systematically. I can think of two easy ways to do this: The application fetches one order to check at a time, passing in the last order it checked as a parameter, and SQL Server hands back the next order that's unconfirmed. More database calls, but this ensures that every order is checked in a reasonable timeframe. However, different servers may re-check the same order in succession, needlessly. The SQL Server keeps track of the last order it asked a process to check up on, maybe in a table, and gives a unique order to every request, incrementing its counter. This involves storing the last order somewhere in SQL, which I wanted to avoid, but it also ensures that threads won't needlessly check the same orders at the same time. Are there any other ideas I'm missing? Does this even make sense? Let me know if I need some clarification.
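
    A third pattern worth comparing against the two above (not one the post lists): let the database hand out work atomically with an UPDATE that claims one row and skips rows other workers have locked, using SQL Server's READPAST hint plus a "last checked" column as a lease, so dead orders do not get re-picked immediately but still get retried eventually. A rough sketch, with invented table and column names, shown in Python over pyodbc:

        # "Claim next row" sketch for SQL Server 2008; table and column names
        # are made up for illustration, not taken from the post.
        import pyodbc

        CLAIM_SQL = """
        UPDATE TOP (1) dbo.UnconfirmedOrders WITH (ROWLOCK, READPAST)
        SET    LastCheckedAt = SYSUTCDATETIME()
        OUTPUT inserted.OrderId
        WHERE  Confirmed = 0
          AND (LastCheckedAt IS NULL
               OR LastCheckedAt < DATEADD(MINUTE, -10, SYSUTCDATETIME()))
        """

        def claim_next_order(conn):
            """Atomically claims one order to follow up on, or returns None.

            READPAST makes concurrent workers skip rows locked by someone else,
            and the LastCheckedAt "lease" keeps any one order (including dead
            ones) from being re-checked more than once per window.
            """
            cur = conn.cursor()
            row = cur.execute(CLAIM_SQL).fetchone()
            conn.commit()
            return row[0] if row else None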

  • Avoiding duplication in setting properties on the task in Rake tasks

    - by Stray
    I have a bunch of rake building tasks. They each have unique input / output properties, but the majority of the properties I set on the tasks are the same each time. Currently I'm doing that via simple repetition like this: task :buildThisModule => "bin/modules/thisModule.swf" mxmlc "bin/modules/thisModule.swf" do |t| t.input = "src/project/modules/ThisModule.as" t.prop1 = value1 t.prop2 = value2 ... (And many more property=value sets that are the same in each task) end task :buildThatModule => "bin/modules/thatModule.swf" mxmlc "bin/modules/thatModule.swf" do |t| t.input = "src/project/modules/ThatModule.as" t.prop1 = value1 t.prop2 = value2 ... (And many more property=value sets that are the same in each task) end In my usual programming headspace I'd expect to be able to break out the population of the recurring task properties to a re-usable function. Is there a rake analogy for this? Some way I can have a single function where the shared properties are set on any task? Something equivalent to: task :buildThisModule => "bin/modules/thisModule.swf" mxmlc "bin/modules/thisModule.swf" do |t| addCommonTaskParameters(t) t.input = "src/project/modules/ThisModule.as" end task :buildThatModule => "bin/modules/thatModule.swf" mxmlc "bin/modules/thatModule.swf" do |t| addCommonTaskParameters(t) t.input = "src/project/modules/ThatModule.as" end Thanks.

  • How to insert several thousand columns into sqlite3?

    - by user291071
    Similar to my last question, but I ran into a problem. Let's say I have a simple dictionary like the one below, but big. When I try inserting a big dictionary using the methods below, I get an operational error on c.execute(schema) for too many columns. So what should my alternate method be to populate an SQL database's columns? Using the ALTER TABLE command and adding each one individually? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() dic = { 'x1':{'y1':1.0,'y2':0.0}, 'x2':{'y1':0.0,'y2':2.0,'joe bla':1.5}, 'x3':{'y2':2.0,'y3 45 etc':1.5} } # 1. Find the unique column names. columns = set() for _, cols in dic.items(): for key, _ in cols.items(): columns.add(key) # 2. Create the schema. col_defs = [ # Start with the column for our key name '"row_name" VARCHAR(2) NOT NULL PRIMARY KEY' ] for column in columns: col_defs.append('"%s" REAL NULL' % column) schema = "CREATE TABLE simple (%s);" % ",".join(col_defs) c.execute(schema) # 3. Loop through each row for row_name, cols in dic.items(): # Compile the data we have for this row. col_names = cols.keys() col_values = [str(val) for val in cols.values()] # Insert it. sql = 'INSERT INTO simple ("row_name", "%s") VALUES ("%s", "%s");' % ( '","'.join(col_names), row_name, '","'.join(col_values) )
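
    For context, SQLite caps the number of columns per table (2,000 by default), so a column-per-key schema will keep hitting that wall. One alternative, sketched below under the assumption that values only ever need to be fetched per (row, column) pair, is to store the dictionary in "long" form with one database row per cell, which needs no ALTER TABLE at all:

        # Alternative sketch: "long" storage, one row per (row, column, value).
        import sqlite3

        dic = {
            'x1': {'y1': 1.0, 'y2': 0.0},
            'x2': {'y1': 0.0, 'y2': 2.0, 'joe bla': 1.5},
            'x3': {'y2': 2.0, 'y3 45 etc': 1.5},
        }

        con = sqlite3.connect('simple_long.db')
        c = con.cursor()
        c.execute("""CREATE TABLE IF NOT EXISTS simple (
                         row_name TEXT NOT NULL,
                         col_name TEXT NOT NULL,
                         value    REAL,
                         PRIMARY KEY (row_name, col_name))""")

        cells = [(row, col, val)
                 for row, cols in dic.items()
                 for col, val in cols.items()]
        c.executemany("INSERT OR REPLACE INTO simple VALUES (?, ?, ?)", cells)
        con.commit()

        # Fetching one "cell" (or rebuilding a whole row) is a plain query:
        c.execute("SELECT value FROM simple WHERE row_name = ? AND col_name = ?",
                  ('x2', 'joe bla'))
        print(c.fetchone())   # (1.5,)
        con.close()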

  • I'm doing a lot of list and dictionary sorting... and this is causing memory errors in my Python website

    - by alex
    I retrieved data from the log table in my database. Then I started finding unique users, comparing/sorting lists, etc. In the end I got down to this. stats = {'2010-03-19': {'date': '2010-03-19', 'unique_users': 312, 'queries': 1465}, '2010-03-18': {'date': '2010-03-18', 'unique_users': 329, 'queries': 1659}, '2010-03-17': {'date': '2010-03-17', 'unique_users': 379, 'queries': 1845}, '2010-03-16': {'date': '2010-03-16', 'unique_users': 434, 'queries': 2336}, '2010-03-15': {'date': '2010-03-15', 'unique_users': 390, 'queries': 2138}, '2010-03-14': {'date': '2010-03-14', 'unique_users': 460, 'queries': 2221}, '2010-03-13': {'date': '2010-03-13', 'unique_users': 507, 'queries': 2242}, '2010-03-12': {'date': '2010-03-12', 'unique_users': 629, 'queries': 3523}, '2010-03-11': {'date': '2010-03-11', 'unique_users': 811, 'queries': 4274}, '2010-03-10': {'date': '2010-03-10', 'unique_users': 171, 'queries': 1297}, '2010-03-26': {'date': '2010-03-26', 'unique_users': 299, 'queries': 1617}, '2010-03-27': {'date': '2010-03-27', 'unique_users': 323, 'queries': 1310}, '2010-03-24': {'date': '2010-03-24', 'unique_users': 352, 'queries': 2112}, '2010-03-25': {'date': '2010-03-25', 'unique_users': 330, 'queries': 1290}, '2010-03-22': {'date': '2010-03-22', 'unique_users': 329, 'queries': 1798}, '2010-03-23': {'date': '2010-03-23', 'unique_users': 329, 'queries': 1857}, '2010-03-20': {'date': '2010-03-20', 'unique_users': 368, 'queries': 1693}, '2010-03-21': {'date': '2010-03-21', 'unique_users': 329, 'queries': 1511}, '2010-03-29': {'date': '2010-03-29', 'unique_users': 325, 'queries': 1718}, '2010-03-28': {'date': '2010-03-28', 'unique_users': 340, 'queries': 1815}, '2010-03-30': {'date': '2010-03-30', 'unique_users': 329, 'queries': 1891}} It's not a big dictionary. But when I try to do one last thing...it craps out on me. for k, v in stats: mylist.append(v) too many values to unpack What the heck does that mean??? TOO MANY VALUES TO UNPACK.
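
    The error is unrelated to memory: iterating a dict directly yields only its keys, so "for k, v in stats" tries to unpack a date string such as '2010-03-19' into two variables. Iterating over the key/value pairs fixes it; a short sketch using the stats dict shown above:

        # Iterating a dict yields keys only, so each 10-character date string
        # was being unpacked into (k, v) -- hence "too many values to unpack".
        mylist = []
        for k, v in stats.items():      # stats.iteritems() on Python 2
            mylist.append(v)

        # Or, if only the per-day records are needed:
        mylist = list(stats.values())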

  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data Database, and this is the code I deemed suitable for this task: NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity; entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext]; [fetchRequest setEntity:entity]; NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier]; [fetchRequest setPredicate:pred]; NSError *err; NSArray* icd9s = [passedContext executeFetchRequest:fetchRequest error:&err]; [fetchRequest release]; if ([icd9s count] > 0) { for (int i = 0; i < [icd9s count]; i++) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc]init]; NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"]; if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) { [pool release]; return [icd9s objectAtIndex:i]; } [pool release]; } } return nil; After more thorough testing it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent through the 1459 items). I feel like this isn't the most efficient way to do this, any suggestions for a more memory efficient way? Thanks in advance!

  • Issue with transparent texture on 3D primitive, XNA 4.0

    - by Bevin
    I need to draw a large set of cubes, all with (possibly) unique textures on each side. Some of the textures also have parts of transparency. The cubes that are behind ones with transparent textures should show through the transparent texture. However, it seems that the order in which I draw the cubes decides if the transparency works or not, which is something I want to avoid. Look here: cubeEffect.CurrentTechnique = cubeEffect.Techniques["Textured"]; Block[] cubes = new Block[4]; cubes[0] = new Block(BlockType.leaves, new Vector3(0, 0, 3)); cubes[1] = new Block(BlockType.dirt, new Vector3(0, 1, 3)); cubes[2] = new Block(BlockType.log, new Vector3(0, 0, 4)); cubes[3] = new Block(BlockType.gold, new Vector3(0, 1, 4)); foreach(Block b in cubes) { b.shape.RenderShape(GraphicsDevice, cubeEffect); } This is the code in the Draw method. It produces this result: As you can see, the textures behind the leaf cube are not visible on the other side. When i reverse index 3 and 0 on in the array, I get this: It is clear that the order of drawing is affecting the cubes. I suspect it may have to do with the blend mode, but I have no idea where to start with that.

  • What prevents a user from adding controls to an ASP.NET page client side?

    - by Curtis White
    This goes back to my other question, which I thought was sufficiently answered, but upon reflection I am not sure that it was (sorry). Background: I am generating a form dynamically. I am pulling the controls from the database. I must associate each control with a database ID which is not the user's session ID. I do this currently by storing my ID in the ID of the web control, with some other stuff to make it unique/clear what I am doing. On the postback, I iterate through all the controls on my web page checking for my special identifier, i.e., MyGeneratedTextBox_ID_Unique. This process enables two important steps: identifying that the control was one I generated, and getting the ID for this input field. All of this works, but I'm still concerned about the security of it. I do not see a security issue with showing the actual database IDs in this case, although I agree it is not desirable. However, I am concerned about the following possibilities: a user could add a nefarious control to my collection and use that for a SQL injection attack; or, more academically, a user could somehow store data for fields they do not have access to by changing the IDs. I agree this is a "hack" of a way to do it. But my question is, is it a security risk, and is there an 'easy' way to do it in a less hacky way? I assume that only the controls that are created/instantiated on the page are added to the controls list, and thus all controls must be created server side and the security issue is addressed, but I just wanted to validate that. Thanks again. PS: I can see that adding a property for each control and encrypting the viewstate would be a little more secure.

  • FluentNHibernate error -- "Invalid object name"

    - by goober
    I'm attempting to do the most simple of mappings with FluentNHibernate & Sql2005. Basically, I have a database table called "sv_Categories". I'd like to add a category, setting the ID automatically, and adding the userid and title supplied. Database table layout: CategoryID -- int -- not-null, primary key, auto-incrementing UserID -- uniqueidentifier -- not null Title -- varchar(50) -- not null Simple. My SessionFactory code (which works, as far as I can tell): _SessionFactory = Fluently.Configure().Database( MsSqlConfiguration.MsSql2005 .ConnectionString(c => c.FromConnectionStringWithKey("SVTest"))) .Mappings(x => x.FluentMappings.AddFromAssemblyOf<CategoryMap>()) .BuildSessionFactory(); My ClassMap code: public class CategoryMap : ClassMap<Category> { public CategoryMap() { Id(x => x.ID).Column("CategoryID").Unique(); Map(x => x.Title).Column("Title").Not.Nullable(); Map(x => x.UserID).Column("UserID").Not.Nullable(); } } My Class code: public class Category { public virtual int ID { get; private set; } public virtual string Title { get; set; } public virtual Guid UserID { get; set; } public Category() { // do nothing } } And the page where I save the object: public void Add(Category catToAdd) { using (ISession session = SessionProvider.GetSession()) { using (ITransaction Transaction = session.BeginTransaction()) { session.Save(catToAdd); Transaction.Commit(); } } } I receive the error Invalid object name 'Category'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'Category'. I think it might be that I haven't told the CategoryMap class to use the "sv_Categories" table, but I'm not sure how to do that. Any help would be appreciated. Thanks!

  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that dependent on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table CREATE TABLE `PlayerGameRcd` ( `User` SMALLINT UNSIGNED NOT NULL, `Game` MEDIUMINT UNSIGNED NOT NULL, `GameResult` ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin', 'Kicked by System', 'Finished 5th', 'Finished 4th', 'Finished 3rd', 'Finished 2nd', 'Finished 1st', 'Game Aborted', 'Playing', 'Hide' ) NOT NULL DEFAULT 'Playing', `Inherited` TINYINT NOT NULL, `GameCounts` TINYINT NOT NULL, `Colour` TINYINT UNSIGNED NOT NULL, `Score` SMALLINT UNSIGNED NOT NULL DEFAULT 0, `NumLongTurns` TINYINT UNSIGNED NOT NULL DEFAULT 0, `Notes` MEDIUMTEXT, `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0, PRIMARY KEY (`Game`, `User`), UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`), INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`), INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`), CONSTRAINT `Constr_PlayerGameRcd_User_fk` FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `Constr_PlayerGameRcd_Game_fk` FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci The only column that is nullable is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?

  • How to insert and call by row and column into sqlite3 python, great tutorial problem.

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do three things? How to insert or update a value at a specific row and column? How to select a value for each row and column? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() c.execute('''create table simple (links text)''') con.commit() dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}} ucols = {} ## my current thoughts are collect all row values and all column values from dic and populate table row and columns accordingly; how to call by row and column I haven't figured out yet ##populate rows in first column for row in dic: print row c.execute("""insert into simple ('links') values ('%s')"""%row) con.commit() ##unique columns for row in dic: print row for col in dic[row]: print col ucols[col]=dic[row][col] ##populate columns for col in ucols: print col c.execute("alter table simple add column '%s' 'float'" % col) con.commit() #functions needed ##insert values into sql by row x and column y?how to do this e.g. x1 and y2 should put in 0.0 ##I tried as follows didn't work for row in dic: for col in dic[row]: val =dic[row][col] c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'"""%(col,val,row)) con.commit() ##update value at a specific row x and column y? ## select a value at a specific row x and column y?
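
    One detail that explains why the attempted UPDATE "didn't work": in SQLite, 'links' in single quotes inside a WHERE clause is a string literal, not the column, so WHERE 'links'='x1' never matches any row. Column names have to be double-quoted (and, since identifiers cannot be bound as parameters, interpolated), while the values and the row key can be bound normally. A small sketch against the table built above:

        # Sketch: update and read one "cell" of the simple table built above.
        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        def set_cell(row, col, val):
            # col is assumed to come from the trusted set of column names;
            # identifiers are double-quoted, values/row keys are bound.
            c.execute('UPDATE simple SET "%s" = ? WHERE links = ?' % col,
                      (val, row))
            con.commit()

        def get_cell(row, col):
            c.execute('SELECT "%s" FROM simple WHERE links = ?' % col, (row,))
            hit = c.fetchone()
            return hit[0] if hit else None

        set_cell('x1', 'y2', 0.0)
        print(get_cell('x1', 'y2'))   # 0.0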

  • MySQL: SELECT highest column value when WHERE finds similar entries

    - by Ike
    My question is comparable to this one, but not quite the same. I have a database with a huge number of books, with different editions of some of the same book titles. I'm looking for an SQL statement giving me the highest edition number of each of the titles I'm selecting with a WHERE clause (to find specific book series). Here's what the table looks like:
    |id|title                    |edition|year|
    |--|-------------------------|-------|----|
    |01|Serie One Title One      |1      |2007|
    |02|Serie One Title One      |2      |2008|
    |03|Serie One Title One      |3      |2009|
    |04|Serie One Title Two      |1      |2001|
    |05|Serie One Title Three    |1      |2008|
    |06|Serie One Title Three    |2      |2009|
    |07|Serie One Title Three    |3      |2010|
    |08|Serie One Title Three    |4      |2011|
    |--|-------------------------|-------|----|
    The result I'm looking for is this:
    |id|title                    |edition|year|
    |--|-------------------------|-------|----|
    |03|Serie One Title One      |3      |2009|
    |04|Serie One Title Two      |1      |2001|
    |08|Serie One Title Three    |4      |2011|
    |--|-------------------------|-------|----|
    The closest I got was using this statement:
    select id, title, max(edition), max(year) from books where title like "serie one%" group by name;
    but it returns the highest edition and year and includes the first id it finds:
    |--|-----------------------|-------|----|
    |01|Serie One Title One    |3      |2009|
    |04|Serie One Title Two    |1      |2001|
    |05|Serie One Title Three  |4      |2011|
    |--|-----------------------|-------|----|
    This fancy join also comes close, but doesn't give the right result:
    select b.id, b.title, b.edition, b.year from books b inner join (select name, max(edition) as maxedition from books group by title) g on b.edition = g.maxedition where b.title like "serie one%" group by title;
    Using this I'm getting unique titles, but mostly old editions.

  • What is the standard way to bundle OSGi dependent libraries?

    - by Chris
    Hi, I have a project that references a number of open source libraries, some new, some not so new. That said, they are all stable and I wish to stick with my chosen versions until I have time to migrate to the newer versions (I tested hsqldb 2.0 yesterday and it contains many API changes). One of the libraries I wish to embed is Jasper Reports, but as you all surely know, it comes with a mountain of supporting jar files and I only need a known subset of the mountain, therefore I am planning to custom bundle all of my dependent libraries. So: Does everyone custom-make their own OSGi bundles for open-source libraries they are using, or is there a master source of OSGi versions of common libraries? Also, I was thinking that it would be far simpler for each of my bundles simply to embed their dependent jars within the bundle itself. Is this possible? If I choose to embed the 3rd party foc libraries within a bundle, I assume I will need to produce 2 jar files, one without the embedded libraries (for libraries to be loaded via the classpath via the standard classloader), and one OSGi version that includes the embedded library, therefore should I choose a bundle name like this <<myprojectname>>-<<subproject>>-osgi-1.0.0.jar ? If I cannot embed the open source libraries and choose to custom bundle the open source libraries (via bnd), should I choose a unique bundle name to avoid conflict with a possible official bundle? e.g. <<myprojectname>>-<<3rdpartylibname>>-<<3rdpartylibversion>>.jar ? My non-OSGi enabled project currently scans for custom plugins via scanning the META-INF folders in my various plugin jars via Service.providers(...). If I go OSGi, will this mechanism still work?

  • Determining the chances of an event occurring when it hasn't occurred yet

    - by sanity
    A user visits my website at time t, and they may or may not click on a particular link I care about; if they do, I record the fact that they clicked the link, and also the duration since t that they clicked it, call this d. I need an algorithm that allows me to create a class like this: class ClickProbabilityEstimate { public void reportImpression(long id); public void reportClick(long id); public double estimateClickProbability(long id); } Every impression gets a unique id, and this is used when reporting a click to indicate which impression the click belongs to. I need an algorithm that will return a probability, based on how much time has passed since an impression was reported, that the impression will receive a click, based on how long previous clicks required. Clearly one would expect that this probability will decrease over time if there is still no click. If necessary, we can set an upper bound, beyond which we consider the click probability to be 0 (e.g. if it's been an hour since the impression occurred, we can be pretty sure there won't be a click). The algorithm should be both space and time efficient, and hopefully make as few assumptions as possible, while being elegant. Ease of implementation would also be nice. Any ideas?
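
    One simple, assumption-light estimator (not from the post): keep the overall click-through rate p and the empirical distribution of past click delays; then, for an impression that has been pending for d seconds, Bayes' rule gives P(click eventually | no click after d) = p*S(d) / (p*S(d) + 1 - p), where S(d) is the fraction of past clicks that took longer than d. This decreases as d grows and needs only a sorted list of delays. A sketch of that idea in Python, with method names that merely mirror the interface above:

        # Minimal sketch of the Bayes-rule estimator described above; everything
        # beyond the interface names is an illustrative assumption.
        import bisect
        import time

        class ClickProbabilityEstimate(object):
            def __init__(self):
                self.pending = {}        # impression id -> impression timestamp
                self.click_delays = []   # sorted delays (seconds) of past clicks
                self.n_impressions = 0
                self.n_clicks = 0

            def report_impression(self, imp_id):
                self.pending[imp_id] = time.time()
                self.n_impressions += 1

            def report_click(self, imp_id):
                started = self.pending.pop(imp_id, None)
                if started is not None:
                    bisect.insort(self.click_delays, time.time() - started)
                    self.n_clicks += 1

            def estimate_click_probability(self, imp_id):
                started = self.pending.get(imp_id)
                if started is None or not self.click_delays:
                    return 0.0
                d = time.time() - started
                p = self.n_clicks / float(self.n_impressions)
                # S(d): share of observed click delays longer than d
                longer = len(self.click_delays) - bisect.bisect_right(self.click_delays, d)
                s = longer / float(len(self.click_delays))
                return (p * s) / (p * s + (1.0 - p))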

  • Can I have two names for the same variable?

    - by Roman
    The short version of the question: I do: x = y. Then I change x, and y is unchanged. What I want is to "bind" x and y in such a way that I change y whenever I change x. The extended version (with some details): I wrote a class ("first" class) which generates objects of another class ("second" class). In more details, every object of the second class has a name as a unique identifier. I call a static method of the first class with a name of the object from the second class. The first class checks if such an object was already generated (if it is present in the static HashMap of the first class). If it is already there, it is returned. If it is not yet there, it is created, added to the HashMap and returned. And then I have the following problem. At some stage of my program, I take an object with a specific name from the HashMap of the first class. I do something with this object (for example change values of some fields). But the object in the HashMap does not see these changes! So, in fact, I do not "take" an object from the HashMap, I "create a copy" of this object and this is what I would like to avoid.

  • How to store and locate multiple interface types within a Delphi TInterfaceList

    - by Brian Frost
    Hi, I'm storing small interfaces from a range of objects into a single TInterfaceList 'store' with the intention of offering lists of specific interface types to the end user, so each interface will expose a 'GetName' function but all other methods are unique to that interface type. For example, here are two interfaces: IBase = interface //---------------------------------------- function GetName : string; //---------------------------------------- end; IMeasureTemperature = interface(IBase) //------------------------------------ function MeasureTemperature : double; //---------------------------------------- end; IMeasureHumidity = interface(IBase) //---------------------------------------- function MeasureHumidity: double; //---------------------------------------- end; I put several of these interfaces into a single TInterfaceList and then I'd like to scan the list for a specific interface type (e.g. 'IMeasureTemperature'), building another list of pointers to the objects exporting those interfaces. I wish to make no assumptions about the locations of those objects; some may export more than one type of interface. I know I could do this with a class hierarchy using something like: If FList[I] is TMeasureTemperature then .. but I'd like to do something similar with an interface type. Is this possible?

  • Problem validating an XSD file: The content type of a derived type and that of its base must both be mixed or both be element-only

    - by Paulo Tavares
    Hi, I have the following XML schema: <?xml version="1.0" encoding="UTF-8"?> <schema xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0" targetNamespace="urn:ietf:params:xml:ns:netconf:base:1.0" ... <complexType name="dataInlineType"> <xs:complexContent> <xs:extension base="xs:anyType"/> </xs:complexContent> </complexType> <complexType name="get-config_output_type__" > <complexContent> <extension base="netconf:dataInlineType"> <sequence> <element name="data"> <complexType> <sequence> <element name="__.get-config.output.data.A__" minOccurs="0" maxOccurs="unbounded" /> </sequence> </complexType> </element> <element name="__.get-config.A__" minOccurs="0" maxOccurs="unbounded"/> </sequence> </extension> </complexContent> And I get the following error: cos-ct-extends.1.4.3.2.2.1.a: The content type of a derived type and that of its base must both be mixed or both be element-only. Type 'get-config_output_type__' is element only, but its base type is not. If I put both elements mixed="true" I get another error: cos-nonambig: WC[##any] and "urn:ietf:params:xml:ns:netconf:base:1.0":data (or elements from their substitution group) violate "Unique Particle Attribution". During validation against this schema, ambiguity would be created for those two particles. I am using Eclipse to validate my schema, so what can I do?

  • Data structure choices for high-speed and memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing. If the function has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application, just trying to get down to the core concept for the purpose of the question. The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough. The naive implementation would be simply (in C#): Dictionary<String,DateTime> though in the interest of lowering the memory footprint and potentially increasing performance I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use? EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
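
    For what the core structure might look like, here is a sketch written in Python rather than C#: only a fixed-size digest of each string is stored (a 160-bit hash makes accidental collisions negligible, since a true perfect hash is not practical for unknown input), mapped to the time it was last processed, with occasional purging to expire old entries. Locking around the dictionary would still be needed for the multi-threaded case; the class name, TTL parameter, and choice of SHA-1 are all assumptions:

        # Illustrative sketch of the dedup-with-expiry idea, not a C# answer.
        import hashlib
        import time

        class DuplicateFilter(object):
            """Process a string only if it has not been seen in the last ttl seconds."""

            def __init__(self, ttl_seconds):
                self.ttl = ttl_seconds
                self.last_seen = {}          # digest -> last-processed timestamp

            def should_process(self, text):
                now = time.time()
                # A fixed 20-byte digest is kept instead of the (possibly huge) string.
                digest = hashlib.sha1(text.encode("utf-8")).digest()
                seen_at = self.last_seen.get(digest)
                if seen_at is not None and now - seen_at < self.ttl:
                    return False             # duplicate within the window
                self.last_seen[digest] = now
                return True

            def purge(self):
                """Call occasionally to drop expired entries and bound memory use."""
                now = time.time()
                self.last_seen = {d: t for d, t in self.last_seen.items()
                                  if now - t < self.ttl}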

  • Understanding REST through an example

    - by grifaton
    My only real exposure to the ideas of REST has been through Ruby on Rails' RESTful routing. This has suited me well for the kind of CRUD-based applications I have built with Rails, but consequently my understanding of RESTfulness is somewhat limited. Let's say we have a finite collection of Items, each of which has a unique ID, and a number of properties, such as colour, shape, and size (which might be undefined for some Items). Items can be used by a client for a period of time, but each Item can only be used by one client at once. Access to Items is regulated by a server. Clients can request the temporary use of certain items from a server. Usually, clients will only be interested in getting access to a number of Items with particular properties, rather than getting access to specific Items. When a client requests use of a number of Items, the server responds with a list of IDs corresponding to the request, or with a response that says that the requested Items are not currently available or do not exist. A client can make the following kinds of request: Tell me how many green triangle Items there are (in total/available). Give me use of 200 large red Items. I have finished with Items 21, 23, 23. Add 100 new red square Items. Delete 50 small green Items. Modify all big yellow pentagon Items to be blue. The toy example above is like a resource allocation problem I have had to deal with recently. How should I go about thinking about it RESTfully?

  • Why can't Git merge file changes with a modified parent/master?

    - by Andy
    I have a file with one line in it. I create a branch and add a second line to the same file. Save and commit to the branch. I switch back to master and add a different second line to the file. Save and commit to master. So there are now 3 unique lines in total. If I now try to merge the branch back to master, it suffers a merge conflict. Why can't Git simply merge each line, one after the other? My attempt at merging behaves something like this: PS D:\dev\testing\test1> git merge newbranch Auto-merging hello.txt CONFLICT (content): Merge conflict in hello.txt Automatic merge failed; fix conflicts and then commit the result. PS D:\dev\testing\test1> git diff diff --cc hello.txt index 726eeaf,e48d31a..0000000 --- a/hello.txt +++ b/hello.txt @@@ -1,2 -1,2 +1,6 @@@ This is the first line. - New line added by master. -Added a line in newbranch. ++<<<<<<< HEAD ++New line added by master. ++======= ++Added a line in newbranch. ++>>>>>>> newbranch Is there a way to make it slot lines in automatically, one after the other?

  • How to manage feeds with subclassed objects in Django 1.2?

    - by Matteo
    Hi, I'm trying to generate a feed rss from a model like this one, selecting all the Entry objects: from django.db import models from django.contrib.sites.models import Site from django.contrib.auth.models import User from imagekit.models import ImageModel import datetime class Entry(ImageModel): date_pub = models.DateTimeField(default=datetime.datetime.now) author = models.ForeignKey(User) via = models.URLField(blank=True) comments_allowed = models.BooleanField(default=True) icon = models.ImageField(upload_to='icon/',blank=True) class IKOptions: spec_module = 'journal.icon_specs' cache_dir = 'icon/resized' image_field = 'icon' class Post(Entry): title = models.CharField(max_length=200) description = models.TextField() slug = models.SlugField(unique=True) def __unicode__(self): return self.title class Photo(Entry): alt = models.CharField(max_length=200) description = models.TextField(blank=True) original = models.ImageField(upload_to='photo/') class IKOptions: spec_module = 'journal.photo_specs' cache_dir = 'photo/resized' image_field = 'original' def __unicode__(self): return self.alt class Quote(Entry): blockquote = models.TextField() cite = models.TextField(blank=True) def __unicode__(self): return self.blockquote When I use the render_to_response in my views I simply call: def get_journal_entries(request): entries = Entry.objects.all().order_by('-date_pub') return render_to_response('journal/entries.html', {'entries':entries}) And then I use a conditional template to render the right snippets of html: {% extends "base.html" %} {% block main %} <hr> {% for entry in entries %} {% if entry.post %}[...]{% endif %}[...] But I cannot do the same with the Feed Framework in django 1.2... Any suggestion, please?
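
    A possible direction (a sketch, not from the post): with multi-table inheritance, each base Entry row exposes an implicit one-to-one accessor to its concrete subclass (entry.post, entry.photo, entry.quote) that raises that subclass's DoesNotExist when the entry is of another type, so a Django 1.2 Feed can downcast each item and build titles per type. In the sketch below the Feed hooks (items/item_title/item_description/item_link) are Django's; the as_leaf() helper, the link format, and the fallback strings are assumptions:

        # Rough sketch of a Feed that downcasts each Entry to its concrete subclass.
        from django.contrib.syndication.views import Feed
        from django.core.exceptions import ObjectDoesNotExist

        from journal.models import Entry, Post, Photo


        def as_leaf(entry):
            """Return the Post/Photo/Quote instance behind a base Entry row."""
            for accessor in ('post', 'photo', 'quote'):
                try:
                    return getattr(entry, accessor)
                except ObjectDoesNotExist:
                    continue
            return entry


        class JournalFeed(Feed):
            title = "Journal"
            link = "/journal/"
            description = "Latest journal entries."

            def items(self):
                return Entry.objects.order_by('-date_pub')[:20]

            def item_title(self, item):
                leaf = as_leaf(item)
                if isinstance(leaf, Post):
                    return leaf.title
                if isinstance(leaf, Photo):
                    return leaf.alt
                return "Quote"

            def item_description(self, item):
                leaf = as_leaf(item)
                return getattr(leaf, 'description', '') or getattr(leaf, 'blockquote', '')

            def item_link(self, item):
                # The models define no get_absolute_url, so a link must be
                # supplied; this placeholder just anchors into the journal page.
                return "/journal/#%s" % item.pk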
