Search Results

Search found 2667 results on 107 pages for 'peopletools strategy'.

  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster: I load all of these tables into Lists to perform the route planner logic. I would appreciate any ideas on how to improve performance, or suggestions for another way to approach handling a large set of data.

    Coordinate Connections table (INT, INT, INT, DOUBLE), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID  DISTANCE
        1   1            2          0.383657
        2   1            17         0.173201
        3   1            63         0.258781
        4   1            64         0.013726
        5   1            65         0.459829
        6   1            95         0.458769

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        STOPID      ROUTEID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms, coordConnections.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "STOPID")
            private int stopID;

            @Column(name = "ROUTEID", nullable = false)
            private int routeID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "COORDINATEID", nullable = false)
            private Coordinate coordinate;
        }

        @Entity
        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private int CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private int CoordinateID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "FROMCOORDID", nullable = false)
            private Coordinate fromCoordID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "TOCOORDID", nullable = false)
            private Coordinate toCoordID;

            @Column(name = "DISTANCE", nullable = false)
            private double distance;
        }
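
    Since the planner logic repeatedly walks these connections, one generic speed-up (a sketch of an idea, not code from the question; the getters on CoordConnection and Coordinate are assumed to exist) is to index the loaded list into an adjacency map once, so neighbour lookups become O(1) instead of scans over 48,000 elements:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public final class AdjacencyIndex {
            // Group connections by their from-coordinate id once after loading;
            // route planning then asks the map for neighbours instead of
            // scanning the whole connection list on every step.
            public static Map<Integer, List<CoordConnection>> build(List<CoordConnection> connections) {
                Map<Integer, List<CoordConnection>> adjacency =
                        new HashMap<Integer, List<CoordConnection>>();
                for (CoordConnection c : connections) {
                    Integer from = c.getFromCoordID().getCoordinateID(); // assumed getters
                    List<CoordConnection> edges = adjacency.get(from);
                    if (edges == null) {
                        edges = new ArrayList<CoordConnection>();
                        adjacency.put(from, edges);
                    }
                    edges.add(c);
                }
                return adjacency;
            }
        }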

  • Lost with hibernate - OneToMany resulting in the one being pulled back many times..

    - by Andy
    I have this DB design:

        CREATE TABLE report (
            ID MEDIUMINT PRIMARY KEY NOT NULL AUTO_INCREMENT,
            user MEDIUMINT NOT NULL,
            created TIMESTAMP NOT NULL,
            state INT NOT NULL,
            FOREIGN KEY (user) REFERENCES user(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

        CREATE TABLE reportProperties (
            ID MEDIUMINT NOT NULL,
            k VARCHAR(128) NOT NULL,
            v TEXT NOT NULL,
            PRIMARY KEY( ID, k ),
            FOREIGN KEY (ID) REFERENCES report(ID) ON UPDATE CASCADE ON DELETE CASCADE
        );

    and this Hibernate mapping:

        @Table(name="report")
        @Entity(name="ReportEntity")
        public class ReportEntity extends Report {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name="ID")
            private Integer ID;

            @Column(name="user")
            private Integer user;

            @Column(name="created")
            private Timestamp created;

            @Column(name="state")
            private Integer state = ReportState.RUNNING.getLevel();

            @OneToMany(mappedBy="pk.ID", fetch=FetchType.EAGER)
            @JoinColumns(
                @JoinColumn(name="ID", referencedColumnName="ID")
            )
            @MapKey(name="pk.key")
            private Map<String, ReportPropertyEntity> reportProperties =
                    new HashMap<String, ReportPropertyEntity>();
        }

    and:

        @Table(name="reportProperties")
        @Entity(name="ReportPropertyEntity")
        public class ReportPropertyEntity extends ReportProperty {

            @Embeddable
            public static class ReportPropertyEntityPk implements Serializable {
                /**
                 * long#serialVersionUID
                 */
                private static final long serialVersionUID = 2545373078182672152L;

                @Column(name="ID")
                protected int ID;

                @Column(name="k")
                protected String key;
            }

            @EmbeddedId
            protected ReportPropertyEntityPk pk = new ReportPropertyEntityPk();

            @Column(name="v")
            protected String value;
        }

    I have inserted one report and 4 properties for that report. Now when I execute this:

        this.findByCriteria(
            Order.asc("created"),
            Restrictions.eq("user", user.getObject(UserField.ID))
        );

    I get back the report 4 times, instead of just once with a Map holding the 4 properties. I'm not great at Hibernate, to be honest; I prefer straight SQL, but I must learn, and I can't see what it is that's wrong here. Any suggestions?
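
    The duplication is the classic symptom of an eager collection join: the generated SQL returns one row per joined property, and Criteria hands back one root entity per row. Assuming findByCriteria wraps a Hibernate Criteria (not shown in the question), a minimal sketch of the usual remedy is to collapse the duplicates with a result transformer:

        import java.util.List;

        import org.hibernate.Criteria;
        import org.hibernate.criterion.Order;
        import org.hibernate.criterion.Restrictions;

        // Sketch; assumes a Session "session" and the user's id in scope.
        Criteria criteria = session.createCriteria(ReportEntity.class)
            .add(Restrictions.eq("user", userId))
            .addOrder(Order.asc("created"))
            // One SQL row per joined property comes back; fold the rows so
            // each root ReportEntity appears only once in the result list.
            .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY);
        List<ReportEntity> reports = criteria.list();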

  • Handling selection and deselection of objects in a user interface

    - by Dennis
    I'm writing a small audio application (in Silverlight, but that's not really relevant I suppose), and I'm struggling with a problem I've faced before and never solved properly. This time I want to get it right.

    In the application there is one Arrangement control, which contains several Track controls, and every Track can contain AudioObject controls (these are all custom user controls). The user needs to be able to select audio objects, and when these objects are selected they are rendered differently. I can make this happen by hooking into the MouseDown event of the AudioObject control and setting state accordingly.

    So far so good, but when an audio object is selected, all other audio objects need to be deselected (well, unless the user holds the shift key). Audio objects don't know about other audio objects though, so they have no way to tell the rest to deselect themselves.

    Now, if I approached this problem like I did the last time, I would pass a reference to the Arrangement control in the constructor of the AudioObject control and give the Arrangement control a DeselectAll() method or something like that, which would tell all Track controls to deselect all their AudioObject controls. This feels wrong, and if I apply this strategy to similar issues I'm afraid I will soon end up with every object holding a reference to every other object, creating one big tightly coupled mess. It feels like opening the floodgates for poorly designed code. Is there a better way to handle this?
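
    One decoupled alternative is a small mediator that selectable controls register with, so no control ever references another control or the Arrangement. A minimal sketch of the idea, in Java rather than Silverlight C#, with a hypothetical Selectable interface standing in for the AudioObject control:

        import java.util.LinkedHashSet;
        import java.util.Set;

        // Hypothetical: anything that can render a selected/deselected state.
        interface Selectable {
            void setSelected(boolean selected);
        }

        final class SelectionManager {
            private final Set<Selectable> selected = new LinkedHashSet<Selectable>();

            // Called from each control's mouse-down handler; the control only
            // knows the manager, never its sibling controls.
            void select(Selectable item, boolean extendSelection) {
                if (!extendSelection) {            // shift key not held
                    for (Selectable old : selected) {
                        old.setSelected(false);
                    }
                    selected.clear();
                }
                selected.add(item);
                item.setSelected(true);
            }
        }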

  • Fluently setting C# properties and chaining methods

    - by John Feminella
    I'm using .NET 3.5. We have some complex third-party classes which are automatically generated and out of my control, but which we must work with for testing purposes. I see my team doing a lot of deeply-nested property getting/setting in our test code, and it's getting pretty cumbersome. To remedy the problem, I'd like to make a fluent interface for setting properties on the various objects in the hierarchical tree. There are a large number of properties and classes in this third-party library, and it would be too tedious to map everything manually.

    My initial thought was to just use object initializers. Red, Blue, and Green are properties, and Mix() is a method that sets a fourth property, Color, to the closest RGB-safe color with that mixed color. Paints must be homogenized with Stir() before they can be used.

        Bucket b = new Bucket() {
            Paint = new Paint() {
                Red = 0.4,
                Blue = 0.2,
                Green = 0.1
            }
        };

    That works to initialize the Paint, but I need to chain Mix() and other methods to it. Next attempt:

        Create<Bucket>(Create<Paint>()
            .SetRed(0.4)
            .SetBlue(0.2)
            .SetGreen(0.1)
            .Mix().Stir()
        )

    But that doesn't scale well, because I'd have to define a method for each property I want to set, and there are hundreds of different properties in all the classes. Also, C# doesn't have a way to dynamically define methods prior to C# 4, so I don't think I can hook into things to do this automatically in some way. Third attempt:

        Create<Bucket>(Create<Paint>().Set(p => {
            p.Red = 0.4;
            p.Blue = 0.2;
            p.Green = 0.1;
        }).Mix().Stir()
        )

    That doesn't look too bad, and seems like it'd be feasible. Is this an advisable approach? Is it possible to write a Set method that works this way? Or should I be pursuing an alternate strategy?
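
    A Set method of that shape is indeed writable: it only needs to accept a delegate, invoke it on the target, and return the target so chaining can continue. A minimal sketch of the idea in Java (the C# 3.5 version is the same pattern with Action<T> in place of Consumer<T>; the Paint setters and mix()/stir() returning the instance are assumptions for the usage note):

        import java.util.function.Consumer;

        final class Fluent {
            // Apply a block of property assignments, then hand the same
            // instance back so method chaining can continue.
            static <T> T set(T target, Consumer<T> block) {
                block.accept(target);
                return target;
            }
        }

        // Usage sketch (assumes Paint setters, and mix()/stir() return Paint):
        // Paint p = Fluent.set(new Paint(), x -> {
        //     x.setRed(0.4);
        //     x.setBlue(0.2);
        //     x.setGreen(0.1);
        // }).mix().stir();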

  • Saving Abstract and Sub classes to database

    - by bretddog
    Hi, I have an abstract class "StrategyBase" and a set of subclasses, StrategyA/B/C, etc. The subclasses use some of the properties of the base class, and have some individual properties. My question is how to save this to a database.

    I'm currently using SqlCE, and Linq-To-Sql, creating entity classes automatically with SqlMetal.exe. I've seen the three solutions shown in this question, but I'm not able to see how those solutions will or won't work with SqlMetal/entity classes. It seems to me that "concrete table inheritance" would probably work without any manual modification. What about the other two? Would they be problematic? For "single table inheritance", wouldn't all classes get all variables, even though they don't need them? And for the "class table inheritance" solution, I can't really see at all how that maps into the entity classes for a useful purpose. I may note that I extend these partial entity classes to make the classes of my business objects.

    I may also consider moving to Entity Framework instead of SqlMetal/Linq2Sql, so it would be nice to know if that makes any difference to which schema is easy to implement.

    One likely important thing to note is that I will constantly be developing new strategies, which means I have to modify the program code, and probably the database schema, whenever I add a new strategy.

    Sorry the question is a bit "all over the place", but hopefully there are some clear advantages/disadvantages here that you may be able to advise on. Cheers!
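
    For reference, here is what the "concrete table inheritance" option looks like where the mapping layer supports it directly. This is a JPA sketch (matching the other examples on this page) rather than Linq-To-Sql, purely to make the schema option concrete: each subclass gets its own table carrying both the inherited and the subclass-specific columns.

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;
        import javax.persistence.Inheritance;
        import javax.persistence.InheritanceType;

        @Entity
        @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
        public abstract class StrategyBase {
            // TABLE_PER_CLASS cannot use IDENTITY columns, since ids must be
            // unique across all the per-subclass tables.
            @Id
            @GeneratedValue(strategy = GenerationType.TABLE)
            protected long id;

            protected String name;      // shared property, repeated in each table
        }

        @Entity
        public class StrategyA extends StrategyBase {
            private double threshold;   // individual property, illustrative only
        }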

  • [Delphi] How would you refactor this code?

    - by Al C
    This hypothetical example illustrates several problems I can't seem to get past, even though I keep trying! Suppose the original code is a long event handler, coded in the UI, triggered when a user clicks a cell in a grid. Expressed as pseudocode it's:

        if Condition1 = true then
        begin
          // loop through every cell in row;
          // if aCell / headerCellValue > 1 then
          //   color aCell red
        end
        else if Condition2 = true then
        begin
          // do some other calculation adding cell and headerCell values;
          // if some other product > 2 then
          //   color the whole row green
        end
        else
          // show an error message

    I look at this and say "Ah, refactor to the strategy pattern! The code will be easier to understand, easier to debug, and easier to later extend!" I get that, and I can easily break the code into multiple procedures. The problem is ultimately scope-related. Assume the pseudocode makes extensive use of grid properties, values displayed in cells, maybe even built-in grid methods. How do you move all that to another unit without referencing the grid component in the UI, which would break all the "rules" about loose coupling that make OOP valuable?

    I'm really looking forward to responses. Thanks, as always. Al C.
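
    The usual way out is to let the strategies depend on a narrow interface instead of the grid control: the form (or a thin adapter over the grid) implements the interface, so the strategy unit never references the UI component. Sketched in Java rather than Delphi; all names here are illustrative, not from the question:

        // The only view of the grid a strategy ever sees; the UI form or a
        // thin adapter over the grid control implements it.
        interface CellGrid {
            int columnCount();
            double cellValue(int row, int col);
            double headerValue(int col);
            void colorCell(int row, int col, int rgb);
            void colorRow(int row, int rgb);
        }

        interface ClickStrategy {
            void apply(CellGrid grid, int clickedRow);
        }

        final class RatioHighlighter implements ClickStrategy {
            public void apply(CellGrid grid, int clickedRow) {
                for (int col = 0; col < grid.columnCount(); col++) {
                    if (grid.cellValue(clickedRow, col) / grid.headerValue(col) > 1) {
                        grid.colorCell(clickedRow, col, 0xFF0000); // red
                    }
                }
            }
        }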

  • File based caching under PHP

    - by azatoth
    I've been using http://code.google.com/p/phpbrowscap/ for a project, and it usually works nicely. But a few times its cache, which consists of plain PHP files (see http://code.google.com/p/phpbrowscap/source/browse/trunk/browscap/Browscap.php#372 et al.), has been "zeroed", i.e. the whole cache file has become a large blob of NULs. Instead of trying to find out why the files become NULs, I thought it might be better to change the caching strategy to something more resilient.

    So I wonder if anyone has good ideas about what would make a good solution here. I've been looking at http://www.jongales.com/blog/2009/02/18/simple-file-based-php-cache-class/ and http://www.phpclasses.org/package/313-PHP-Cache-arbitrary-data-in-files-.html, and I also thought of just saving a serialized array to the file instead of the pure PHP it saves now. But I'm uncertain which approach I should target here. I'm grateful for any insight into this area of technology, as I know it's complex from a performance point of view.
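
    A zeroed-out file is a typical signature of a crash or power loss in the middle of a non-atomic write, though that is an assumption about the cause here. Whatever format the cache uses, writing to a temp file and renaming it over the live file makes the replacement atomic on a POSIX filesystem, so readers only ever see the old or the new complete cache. A sketch of the idea in Java (a PHP version built on tempnam() and rename() is analogous):

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;

        final class AtomicCacheWriter {
            // Serialize into a sibling temp file, then rename it over the live
            // file; an interrupted write leaves the old cache intact instead
            // of a truncated or zero-filled one.
            static void write(File cacheFile, Serializable data) throws IOException {
                File tmp = new File(cacheFile.getParentFile(), cacheFile.getName() + ".tmp");
                ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(tmp));
                try {
                    out.writeObject(data);
                    out.flush();
                } finally {
                    out.close();
                }
                if (!tmp.renameTo(cacheFile)) {
                    throw new IOException("atomic replace failed for " + cacheFile);
                }
            }
        }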

  • Hibernate MapKeyManyToMany gives composite key where none exists

    - by larsrc
    I have a Hibernate (3.3.1) mapping of a map using a three-way join table:

        @Entity
        public class SiteConfiguration extends ConfigurationSet {
            @ManyToMany
            @MapKeyManyToMany(joinColumns=@JoinColumn(name="SiteTypeInstallationId"))
            @JoinTable(
                name="SiteConfig_InstConfig",
                joinColumns = @JoinColumn(name="SiteConfigId"),
                inverseJoinColumns = @JoinColumn(name="InstallationConfigId")
            )
            Map<SiteTypeInstallation, InstallationConfiguration> installationConfigurations =
                    new HashMap<SiteTypeInstallation, InstallationConfiguration>();
            ...
        }

    The underlying table (in Oracle 11g) is:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITECONFIGID                   NOT NULL NUMBER(19)
        SITETYPEINSTALLATIONID         NOT NULL NUMBER(19)
        INSTALLATIONCONFIGID           NOT NULL NUMBER(19)

    The key entity used to have a three-column primary key in the database, but is now redefined as:

        @Entity
        public class SiteTypeInstallation implements IdResolvable {
            @Id
            @GeneratedValue(generator="SiteTypeInstallationSeq", strategy=GenerationType.SEQUENCE)
            @SequenceGenerator(name = "SiteTypeInstallationSeq",
                               sequenceName = "SEQ_SiteTypeInstallation", allocationSize = 1)
            long id;

            @ManyToOne
            @JoinColumn(name="SiteTypeId")
            SiteType siteType;

            @ManyToOne
            @JoinColumn(name="InstalationRoleId")
            InstallationRole role;

            @ManyToOne
            @JoinColumn(name="InstallationTypeId")
            InstType type;
            ...
        }

    The table for this has a primary key 'Id' and foreign key constraints + indexes for each of the other columns:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITETYPEID                     NOT NULL NUMBER(19)
        INSTALLATIONROLEID             NOT NULL NUMBER(19)
        INSTALLATIONTYPEID             NOT NULL NUMBER(19)
        ID                             NOT NULL NUMBER(19)

    For some reason, Hibernate thinks the key of the map is composite, even though it isn't, and gives me this error:

        org.hibernate.MappingException: Foreign key (FK1A241BE195C69C8:SiteConfig_InstConfig [SiteTypeInstallationId])) must have same number of columns as the referenced primary key (SiteTypeInstallation [SiteTypeId,InstallationRoleId])

    If I remove the annotations on installationConfigurations and make it transient, the error disappears. I am very confused why it thinks SiteTypeInstallation has a composite key at all when @Id clearly defines a simple key, and doubly confused why it picks exactly those two columns. Any idea why this happens? Is it possible that JBoss (5.0 EAP) + Hibernate somehow remembers a mistaken idea of the primary key across server restarts and code redeployments? Thanks in advance, -Lars

  • Hibernate: Querying objects by attributes of inherited classes

    - by MichaelD
    Hi all, I ran into a problem with Hibernate concerning queries on classes which use inheritance. Basically I have the following class hierarchy:

        @Entity
        @Table( name = "recording" )
        class Recording {
            ClassA attributeSet;
            ...
        }

        @Entity
        @Inheritance( strategy = InheritanceType.JOINED )
        @Table( name = "classA" )
        public class ClassA {
            String Id;
            ...
        }

        @Entity
        @Table( name = "ClassB1" )
        @PrimaryKeyJoinColumn( name = "Id" )
        public class ClassB1 extends ClassA {
            private Double P1300;
            private Double P2000;
        }

        @Entity
        @Table( name = "ClassB2" )
        @PrimaryKeyJoinColumn( name = "Id" )
        public class ClassB2 extends ClassA {
            private Double P1300;
            private Double P3000;
        }

    The hierarchy is already given like this and I cannot change it easily. As you can see, ClassB1 and ClassB2 inherit from ClassA. Both classes contain a set of attributes which sometimes even have the same names (but I can't move them to ClassA, since there are possibly more subclasses which do not use them). The Recording class references one instance of one of these classes.

    Now my question: I want to select all Recording objects in my database which refer to an instance of either ClassB1 or ClassB2 with, e.g., the field P1300 == 15.5 (so this could be ClassB1 or ClassB2 instances, since the P1300 attribute is declared in both classes). What I tried is something like this:

        Criteria criteria = session.createCriteria(Recording.class);
        criteria.add( Restrictions.eq( "attributeSet.P1300", new Double(15.5) ) );
        criteria.list();

    But since P1300 is not an attribute of ClassA, Hibernate throws an exception telling me:

        could not resolve property: P1300 of: ClassA

    How can I tell Hibernate that it should search all subclasses to find the attribute I want to filter on? Thanks, MichaelD
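
    Criteria restrictions only see properties of the association's declared type, so one workaround (a sketch, assuming Hibernate 3's DetachedCriteria and the property names shown above) is to restrict the association's id with one subquery per subclass that actually declares P1300:

        import org.hibernate.Criteria;
        import org.hibernate.criterion.DetachedCriteria;
        import org.hibernate.criterion.Property;
        import org.hibernate.criterion.Restrictions;
        import org.hibernate.criterion.Subqueries;

        // Ids of ClassB1 rows with P1300 = 15.5 ...
        DetachedCriteria b1Ids = DetachedCriteria.forClass(ClassB1.class)
            .add(Restrictions.eq("P1300", new Double(15.5)))
            .setProjection(Property.forName("Id"));
        // ... and of ClassB2 rows with P1300 = 15.5.
        DetachedCriteria b2Ids = DetachedCriteria.forClass(ClassB2.class)
            .add(Restrictions.eq("P1300", new Double(15.5)))
            .setProjection(Property.forName("Id"));

        // Recordings whose attributeSet is any of those instances.
        Criteria criteria = session.createCriteria(Recording.class)
            .createAlias("attributeSet", "a")
            .add(Restrictions.or(
                Subqueries.propertyIn("a.Id", b1Ids),
                Subqueries.propertyIn("a.Id", b2Ids)));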

  • What is a NULL value

    - by Adi
    I am wondering what exactly is stored in memory when we set a particular pointer variable to NULL. Suppose I have a structure, say:

        typedef struct MEM_LIST MEM_INSTANCE;
        struct MEM_LIST {
            char *start_addr;
            int size;
            MEM_INSTANCE *next;
        };

        MEM_INSTANCE *front;
        front = (MEM_INSTANCE *)malloc(sizeof(MEM_INSTANCE));

    1) If I set front = NULL, what value actually gets stored in the different fields of front, say front->size and front->start_addr? Is it 0 or something else? I have limited knowledge of this NULL thing.

    2) If I do free(front), it frees the memory pointed to by front. So what exactly does free mean here: does it make the memory NULL, or make it all 0?

    3) What would be a good strategy for dealing with initialization of pointers and freeing them?

    Thanks in advance

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.)

    So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value.

    So here's the debate:

    1. Abandon Entity Framework and use another ORM: either NHibernate (giving up LINQ support) or LINQ to SQL (giving up future support, not to mention getting bound to SQL Server on the DB side).
    2. Abandon GUIDs and go with another PK strategy.
    3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer.

    I'm leaning towards option 1 with LINQ to SQL (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
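
    For option 3, the COMB idea is to keep most of the bits random for uniqueness while putting a timestamp in the bits that sort first, so application-generated ids still arrive in roughly increasing order and index pages fill mostly at the end. A sketch of the concept in Java (byte ordering details differ per database, and SQL Server sorts GUIDs by its own rules, so treat this as the idea rather than a drop-in implementation):

        import java.security.SecureRandom;
        import java.util.UUID;

        final class CombGuid {
            private static final SecureRandom RANDOM = new SecureRandom();

            // Upper bits: milliseconds since the epoch, so ids generated later
            // sort later. Remaining bits: random, for collision resistance
            // across geographically separated nodes.
            static UUID next() {
                long hi = (System.currentTimeMillis() << 16)
                        | (RANDOM.nextInt() & 0xFFFFL);
                return new UUID(hi, RANDOM.nextLong());
            }
        }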

  • Parsing HTTP - Bytes.length != String.length

    - by hotzen
    Hello, I consume HTTP via nio.SocketChannel, so I get chunks of data as Array[Byte]. I want to feed these chunks into a parser and continue parsing after each chunk has been fed in. HTTP itself seems to use an ISO-8859 charset, but the payload/body may be arbitrarily encoded: if the HTTP Content-Length specifies X bytes, the UTF-8-decoded body may have far fewer characters (one character may be represented in UTF-8 by 2 bytes, etc.).

    So what is a good parsing strategy to honor an explicitly specified Content-Length and/or a Transfer-Encoding: chunked, which specifies a chunk length to be honored? See the sketch after this list.

    - Append each data chunk to a mutable.ArrayBuffer[Byte], search for CRLF in the bytes, decode everything from 0 until CRLF to a String, and match it with regular expressions like StatusRegex, HeaderRegex, etc.?
    - Decode each data chunk with the proper charset (e.g. ISO-8859, UTF-8, etc.) and append it to a StringBuilder. With this solution I am not able to honor any Content-Length or chunk size. But do I have to care about that?
    - Any other solution?
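
    The usual approach is a hybrid of the two options above: keep the stream as bytes, decode only the header block (ISO-8859-1), and count Content-Length against raw body bytes, decoding the body to text only after it has been sliced out. A minimal sketch in Java (the Scala equivalent over mutable.ArrayBuffer[Byte] is analogous):

        import java.io.ByteArrayOutputStream;
        import java.nio.charset.Charset;

        final class HttpAccumulator {
            private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

            void append(byte[] chunk) {
                buf.write(chunk, 0, chunk.length);
            }

            // Index just past the CRLFCRLF that terminates the header block,
            // or -1 if the headers have not fully arrived yet.
            int headerEnd() {
                byte[] b = buf.toByteArray();
                for (int i = 0; i + 3 < b.length; i++) {
                    if (b[i] == '\r' && b[i + 1] == '\n'
                            && b[i + 2] == '\r' && b[i + 3] == '\n') {
                        return i + 4;
                    }
                }
                return -1;
            }

            // Headers are safe to decode as ISO-8859-1; the body stays as raw
            // bytes so Content-Length can be checked before any decoding.
            String headerText(int headerEnd) {
                return new String(buf.toByteArray(), 0, headerEnd,
                        Charset.forName("ISO-8859-1"));
            }
        }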

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table with the following columns:

        [Key]
        [Value1]
        ...
        [ValueN]
        [StartDate]
        [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system, so our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] equal to the [StartDate] of the new value: a simple update based on a join.

    So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index on:

        [ExpiryDate] ASC
        [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage.

    However, what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all updated at the same time. Therefore, for a given point in time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point in time.

    In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads needed to retrieve the values for all keys for a given point in time? I realize I can at least maximize IO by partitioning the table by [Key], but this certainly isn't ideal. Alternatively, is there a different type of slowly-changing dimension that solves this problem in a more performant manner?

  • How do I dispatch to a method based on a parameter's runtime type in C# < 4?

    - by Evan Barkley
    I have an object o which is guaranteed at runtime to be one of three types A, B, or C, all of which implement a common interface I. I can control I, but not A, B, or C. (Thus I could use an empty marker interface, or somehow take advantage of the similarities in the types by using the interface, but I can't add new methods or change existing ones in the types.)

    I also have a series of methods MethodA, MethodB, and MethodC. The runtime type of o is looked up and is then used as a parameter to these methods:

        public void MethodA(A a) { ... }
        public void MethodB(B b) { ... }
        public void MethodC(C c) { ... }

    Using this strategy, a check currently has to be performed on the type of o to determine which method should be invoked. Instead, I would like to simply have three overloaded methods:

        public void Method(A a) { ... } // these are all overloads of each other
        public void Method(B b) { ... }
        public void Method(C c) { ... }

    Now I'm letting C# do the dispatch instead of doing it manually myself. Can this be done? The naive straightforward approach doesn't work, of course:

        Cannot resolve method 'Method(object)'. Candidates are:
            void Method(A)
            void Method(B)
            void Method(C)
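
    Overload resolution happens at compile time against the static type of o, so before C# 4's dynamic the usual substitute is a one-time registry keyed by runtime type: the type check still exists, but in exactly one place. A sketch of the pattern in Java (a pre-C#4 version uses a Dictionary<Type, Action<object>> the same way):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Consumer;

        final class TypeDispatcher {
            private final Map<Class<?>, Consumer<Object>> handlers =
                    new HashMap<Class<?>, Consumer<Object>>();

            // Register one handler per concrete type; the cast goes through
            // Class.cast, so registration stays type-safe.
            <T> void register(Class<T> type, Consumer<T> handler) {
                handlers.put(type, obj -> handler.accept(type.cast(obj)));
            }

            void dispatch(Object o) {
                Consumer<Object> h = handlers.get(o.getClass());
                if (h == null) {
                    throw new IllegalArgumentException("no handler for " + o.getClass());
                }
                h.accept(o);
            }
        }

        // Usage sketch (methodA/B/C are the existing per-type methods):
        // TypeDispatcher d = new TypeDispatcher();
        // d.register(A.class, this::methodA);
        // d.register(B.class, this::methodB);
        // d.register(C.class, this::methodC);
        // d.dispatch(o);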

  • Horizontally-centering an absolute position to match a relative position

    - by Chris Vandevelde
    I'm trying to make a div box, containing various elements, become fixed at the top of the page once the page has been scrolled far enough that the box would otherwise be out of view, but scroll normally until that point (like the behaviour at http://perldoc.perl.org/perl.html).

    The conditionally-fixed part is pretty simple to implement (set the position to "fixed" once the user has scrolled past a certain point, and back to "static" once they scroll back up), but I'm having trouble with the positioning and dimensions. It screws up if I don't specify an absolute position (i.e. if I use % or "auto" rather than px, em, cm, etc.), and it confusingly left-aligns if the box is less than the width of the page. I can understand why, more or less; I'm just trying to fix it.

    My strategy right now is to have an invisible div hold the place of the box and use Prototype's clonePosition() function to copy its position, but it doesn't seem to work for some reason. Neither does copying the margin from one element to the other. Any ideas? Bonus points and eternal gratitude if you can come up with an idea that will adjust itself with the browser window (like auto margins) without setting an onresize event.

  • Object equality in context of hibernate / webapp

    - by bert
    How do you handle object equality for Java objects managed by Hibernate? In the 'Hibernate in Action' book they say that one should favor business keys over surrogate keys. Most of the time, I do not have a business key. Think of addresses mapped to a person. The addresses are kept in a Set and displayed in a Wicket RefreshingView (with a ReuseIfEquals strategy).

    I could either use the surrogate id or use all fields in the equals() and hashCode() functions. The problem is that those fields change during the lifetime of the object, either because the user entered some data or because the id changes due to JPA merge() being called inside the OSIV (Open Session in View) filter. My understanding of the equals() and hashCode() contract is that those values should not change during the lifetime of an object.

    What I have tried so far:

    - equals() based on hashCode(), which uses the database id (or super.hashCode() if the id is null). Problem: new addresses start with a null id, but get an id when attached to a person and that person gets merged (re-attached) in the OSIV filter.
    - Lazily compute the hashcode when hashCode() is first called, and mark that hashcode @Transient. Does not work, as merge() returns a new object and the hashcode does not get copied over.

    What I would need is an ID that gets assigned during object creation, I think. What would be my options here? I don't want to introduce some additional persistent property. Is there a way to explicitly tell JPA to assign an ID to an object? Regards
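
    One widely used compromise, sketched below, is to treat two entities as equal only when both have the same non-null id, while keeping hashCode() constant so its value never changes while the object sits in a Set, even if merge()/persist() assigns the id later. Address is a hypothetical entity here; the trade-off is that all instances land in the same hash bucket, which is usually acceptable for the small collections shown in a UI.

        public class Address {
            private Long id;   // surrogate key, assigned by JPA

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof Address)) return false;
                Address other = (Address) o;
                // Equal only when both sides share a persistent identity;
                // two transient (id == null) instances are never equal.
                return id != null && id.equals(other.id);
            }

            @Override
            public int hashCode() {
                // Constant, so the hash does not change when the id is
                // assigned after the object was added to a Set.
                return Address.class.hashCode();
            }
        }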

  • Tail recursion and memoization with C#

    - by Jay
    I'm writing a function that finds the full path of a directory based on a database table of entries. Each record contains a key, the directory's name, and the key of the parent directory (it's the Directory table in an MSI, if you're familiar). I had an iterative solution, but it started looking a little nasty. I thought I could write an elegant tail-recursive solution, but I'm not sure anymore. I'll show you my code and then explain the issues I'm facing.

        Dictionary<string, string> m_directoryKeyToFullPathDictionary = new Dictionary<string, string>();
        ...
        private string ExpandDirectoryKey(Database database, string directoryKey)
        {
            // check for terminating condition
            string fullPath;
            if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath))
            {
                return fullPath;
            }

            // inductive step
            Record record = ExecuteQuery(database,
                "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'",
                directoryKey);

            // null check
            string directoryName = record.GetString("DefaultDir");
            string parentDirectoryKey = record.GetString("Directory_Parent");
            return Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName);
        }

    This is how the code looked when I realized I had a problem (with some minor validation/massaging removed). I want to use memoization to short-circuit whenever possible, but that requires me to make a call to the dictionary to store the output of the recursive ExpandDirectoryKey call. I realize that I also have a Path.Combine() call there, but I think that can be circumvented with a ... + Path.DirectorySeparatorChar + .... I thought about using a helper method that would memoize the directory and return the value, so that I could call it like this at the end of the function above:

        return MemoizeHelper(
            m_directoryKeyToFullPathDictionary,
            Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName));

    But I feel like that's cheating and not going to be optimized as tail recursion. Any ideas? Should I be using a completely different strategy? This doesn't need to be a super efficient algorithm at all, I'm just really curious. I'm using .NET 4.0, btw. Thanks!
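
    For what it's worth, the recursive call here is not in tail position either way: Path.Combine (or string concatenation) runs after the recursion returns, and neither the C# compiler nor the CLR JIT guarantees tail-call elimination. A sketch of the memoized version in Java, with the cache written on the way back up so no wrapper helper is needed (Row and queryDirectoryRow are hypothetical stand-ins for the MSI query):

        import java.util.HashMap;
        import java.util.Map;

        final class DirectoryExpander {
            // Seed the cache with the root key(s) so the recursion terminates.
            private final Map<String, String> cache = new HashMap<String, String>();

            DirectoryExpander(String rootKey, String rootPath) {
                cache.put(rootKey, rootPath);
            }

            String expand(String key) {
                String hit = cache.get(key);
                if (hit != null) {
                    return hit;                      // memoized short-circuit
                }
                Row row = queryDirectoryRow(key);    // hypothetical DB lookup
                String full = expand(row.parentKey()) + "/" + row.name();
                cache.put(key, full);                // memoize before returning
                return full;
            }

            // Hypothetical record type and query, standing in for ExecuteQuery.
            interface Row { String parentKey(); String name(); }
            private Row queryDirectoryRow(String key) {
                throw new UnsupportedOperationException("illustrative stub");
            }
        }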

  • Stored Procedures In Source Control - Automate Build/Deployment Process

    - by Alex
    My company provides a large .NET service-oriented solution. The services layer interacts with a T-SQL back-end consisting of hundreds of tables and stored procedures. Our C# code is in version control (SVN), but our stored procedures and schema are not. After much lobbying of expedient upper management, I was allowed to review our (non-existent) build/deployment process to accomplish the following goals:

    - Place schema and stored procedures under source control.
    - Automate the build/deployment process.

    I would like to proceed per the accepted answer's strategy in this post, but I have additional questions:

    1. I would like to use Hudson as my build server. Is this a reasonable choice for a C#/SQL solution? What better alternatives should I explore?
    2. Assuming I have all triggers, stored procedures, schema, etc. under source control, and that they are scripted to individual files, how do I generate a build script which will take into account dependencies/references between these items? (SQL Server does this automatically, but it generates one giant script.)
    3. What does the workflow of performing an update at the client look like? I.e., I have to keep existing table data. How do I roll back schema changes?
    4. I am the only programmer. Several other pseudo-technical staff like to make changes directly inside SQL Management Studio. Is it realistic to expect others to adhere to this solution, and how can I enforce it?

    Thank you in advance for your help.

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group, for meeting testing, reproducibility, and data-syncing requirements. One of the suggested ideas is to adapt a NoSQL approach using an existing tool rather than try to re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want, you all can help.

    - Most of our files are large, 50+ gig in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination. Essentially a key-value-pair style look-up.
    - When we query for a file, we don't want to have to load all of it into memory. They're really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it.
    - We want to easily add, remove, and export files from storage.
    - We'd like to set up automatic file replication between two servers (we can write a script for this). That is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.
    - We also have other, smaller files that have a tree-type relationship with the big files. One file's content will point to the next, and so on. It's not a "spoked wheel", it's a full-blown tree.
    - We'd prefer a Python, C, or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind, as long as it works, gets the job done, and saves us time.

    What do you think? Is there something out there like this?

  • Linking a template class using another template class (error LNK2001)

    - by Luís Guilherme
    I implemented the "Strategy" design pattern using an Abstract template class, and two subclasses. Goes like this: template <class T> class Neighbourhood { public: virtual void alter(std::vector<T>& array, int i1, int i2) = 0; }; and template <class T> class Swap : public Neighbourhood<T> { public: virtual void alter(std::vector<T>& array, int i1, int i2); }; There's another subclass, just like this one, and alter is implemented in the cpp file. Ok, fine! Now I declare another method, in another class (including neighbourhood header file, of course), like this: void lSearch(/*parameters*/, Neighbourhood<LotSolutionInformation> nhood); It compiles fine and cleanly. When starting to link, I get the following error: 1>SolverFV.obj : error LNK2001: unresolved external symbol "public: virtual void __thiscall lsc::Neighbourhood<class LotSolutionInformation>::alter(class std::vector<class LotSolutionInformation,class std::allocator<class LotSolutionInformation> > &,int,int)" (?alter@?$Neighbourhood@VLotSolutionInformation@@@lsc@@UAEXAAV?$vector@VLotSolutionInformation@@V?$allocator@VLotSolutionInformation@@@std@@@std@@HH@Z)

  • DDSteps date question.

    - by Srini
    DDSteps date question: I am currently trying to pass just a date from Excel, but I get the error below while doing it:

        Failed to convert property value of type [java.lang.String] to required type [java.util.Date] for property ...
        no matching editors or conversion strategy found

    I even tried to add a customEditorConfigurer in the ddsteps-context file and am still getting the error, though in their pet store example it looks like this works fine. Any help is appreciated.

        <entry key="java.util.Date">
            <bean class="org.springframework.beans.propertyeditors.CustomDateEditor">
                <constructor-arg>
                    <bean class="java.text.SimpleDateFormat">
                        <constructor-arg value="yyyy-MM-dd" />
                    </bean>
                </constructor-arg>
                <constructor-arg value="false" />
            </bean>
        </entry>

  • Drawing an image in Java, slow as hell on a netbook.

    - by Norswap
    In follow-up to my previous questions (especially this one: http://stackoverflow.com/questions/2684123/java-volatileimage-slower-than-bufferedimage), I have noticed that simply drawing an Image (it doesn't matter if it's buffered or volatile, since the computer has no accelerated memory*, and tests show it doesn't change anything) tends to take very long.

        * System.out.println(GraphicsEnvironment.getLocalGraphicsEnvironment()
              .getDefaultScreenDevice().getAvailableAcceleratedMemory());
          // --> 0

    How long? For a 500x400 image, about 0.04 seconds. This is only drawing the image on the back buffer (obtained via buffer strategy).

    Now, considering that World of Warcraft runs on this netbook (though it is quite laggy) and that online Java games seem to have no problem whatsoever, this is quite thought-provoking. I'm quite certain I didn't miss something obvious; I've searched the web extensively, but nothing will do. So do any of you Java whizzes have an idea of what obscure problem might be causing this (or maybe it is normal, though I doubt it)?

    PS: As I'm writing this, I realized this might be caused by my Linux installation (Arch Linux), though I have the correct Intel driver. But my computer normally has "Integrated Intel Graphics Media Accelerator 950", which would mean it should have accelerated video memory somehow. Any ideas about this side of things?
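
    One frequently suggested check in this situation (a sketch of a common technique, not a diagnosis of this machine): make sure the image is in the screen's native pixel format before the render loop starts. If it isn't, Java 2D converts the colour model on every drawImage() call, which alone can account for tens of milliseconds per frame in software rendering:

        import java.awt.Graphics2D;
        import java.awt.GraphicsConfiguration;
        import java.awt.GraphicsEnvironment;
        import java.awt.Transparency;
        import java.awt.image.BufferedImage;

        final class Images {
            // Copy a loaded image into the default screen configuration's
            // pixel format once, so later drawImage() calls are plain blits
            // instead of per-frame colour-model conversions.
            static BufferedImage toCompatible(BufferedImage src) {
                GraphicsConfiguration gc = GraphicsEnvironment
                        .getLocalGraphicsEnvironment()
                        .getDefaultScreenDevice()
                        .getDefaultConfiguration();
                BufferedImage dst = gc.createCompatibleImage(
                        src.getWidth(), src.getHeight(), Transparency.OPAQUE);
                Graphics2D g = dst.createGraphics();
                g.drawImage(src, 0, 0, null);
                g.dispose();
                return dst;
            }
        }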

  • c# Sending emails with authentication. standard approach not working

    - by Ready Cent
    I am trying to send an email using the following very standard code. However, I get the error that follows it.

        MailMessage message = new MailMessage();
        message.Sender = new MailAddress("[email protected]");
        message.To.Add("[email protected]");
        message.Subject = "test subject";
        message.Body = "test body";

        SmtpClient client = new SmtpClient();
        client.Host = "mail.myhost.com";
        //client.Port = 587;

        NetworkCredential cred = new NetworkCredential();
        cred.UserName = "[email protected]";
        cred.Password = "correct password";
        cred.Domain = "mail.myhost.com";

        client.Credentials = cred;
        client.UseDefaultCredentials = false;
        client.Send(message);

        Mailbox unavailable. The server response was: No such user here.

    This recipient email address definitely works. To make this account work in Outlook I had to do some special steps: specifically, change account settings -> more settings -> outgoing server -> "my outgoing server requires authentication" and "use same settings". I am wondering if there is some other strategy. I think the key here is that my host is Server Intellect, and I know that some people on here use them, so hopefully someone else has been able to get through this. I did talk to support, but they said that with coding issues I am on my own :o

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing, and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient.

    For example, I'm avoiding multiple trips to the database for the same piece of information, at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code, and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-)

    I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project, and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

  • Mule ESB 3.2 Splitter destroys Enricher results

    - by Eddie
    Here is the snippet of my flow:

        <logger message="PRODUCT_ID = #[header:productID]" level="INFO" doc:name="Logger"/>
        <splitter evaluator="jxpath" expression="//*/BisacHeaderCodes" doc:name="Splitter"/>
        <logger message="PRODUCT_ID_POST_SPLITTER = #[header:productID]" level="INFO" doc:name="Logger"/>

    The productID header was set prior to the Logger call. I tried #[variable:productID] and got the same result. When I run it, this is the output I get:

        INFO  2012-04-05 23:12:47,865 [[bookinista_order_management].connector.http.mule.default.receiver.02] org.mule.api.processor.LoggerMessageProcessor: PRODUCT_ID = 72
        ERROR 2012-04-05 23:12:47,871 [[bookinista_order_management].connector.http.mule.default.receiver.02] org.mule.exception.DefaultSystemExceptionStrategy: Caught exception in Exception Strategy: Expression Evaluator "header" with expression "outbound:productID" returned null but a value was required.
        org.mule.api.expression.RequiredValueException: Expression Evaluator "header" with expression "outbound:productID" returned null but a value was required.

    So, right before the Splitter I have a perfect value in my header, and right after the Splitter that value disappears! I understand that the Splitter propagates only part of the payload, but shouldn't it leave headers and variables alone? Any ideas for a workaround?
