Search Results

Search found 2613 results on 105 pages for 'it strategy'.

Page 89/105 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • In IIS6, how to provide authenticated access to static files on remote server

    - by frankadelic
    We have a library of ZIP files that we would like to make available for download at an ASP.NET site. The files are sitting on a NAS device that is accessible from our web farm. Here is our initial strategy: map an IIS virtual directory to the shared drive at path /zipfiles; users can then download the zip files when given the URL. However, if users share links to the files, anyone can download them. We would instead like to make use of the ASP.NET forms authentication in our site to validate users' requests before initiating the file transfer. A few problems: a request for a zip file is handled by IIS, not ASP.NET, so it is not subject to forms authentication. In addition, we don't want ASP.NET to handle the request, because it uses up an ASP.NET thread and is not scalable for downloads of large files. So, configuring the asp.net dll to handle *.zip requests is not an option. Any ideas on this? One idea we've tossed around is this: the initial request for a download will be for an ashx handler. This handler will, after authentication, generate a download token which is saved to a database. Then, the user is redirected to the file with the token appended in the QueryString (e.g. /files/xyz.zip?token=123456789). An ISAPI plugin will be used to check the token. Also, the token will expire after x amount of time. Any thoughts on this? I have not implemented an ISAPI plugin, so I'm not sure if this will even work. I would like to avoid custom coding, since security is an issue and I'd prefer to use a time-tested solution.
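
    For illustration, here is the token half of that idea sketched in Java servlet terms; a minimal sketch of the issue-then-redeem logic the ashx handler and ISAPI filter would share (the class name is hypothetical, and a real deployment would back the map with the shared database rather than process memory):

        import java.security.SecureRandom;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class DownloadTokens {
            private static final SecureRandom RANDOM = new SecureRandom();
            private static final long TTL_MILLIS = 5 * 60 * 1000; // tokens expire after five minutes

            // token -> expiry time; in-memory stand-in for the token table
            private final Map<String, Long> issued = new ConcurrentHashMap<String, Long>();

            // Called by the authenticated handler before redirecting to /files/xyz.zip?token=...
            public String issue() {
                String token = Long.toHexString(RANDOM.nextLong()) + Long.toHexString(RANDOM.nextLong());
                issued.put(token, System.currentTimeMillis() + TTL_MILLIS);
                return token;
            }

            // Called by the filter guarding the static files; tokens are single-use and time-limited.
            public boolean redeem(String token) {
                Long expiry = (token == null) ? null : issued.remove(token);
                return expiry != null && expiry >= System.currentTimeMillis();
            }
        }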

    Read the article

  • Truncate portions of a string to limit the whole string's length in Ruby

    - by Horace Loeb
    Suppose you want to generate dynamic page titles that look like this: "It was all a dream, I used to read word up magazine" from "Juicy" by The Notorious B.I.G. I.e., "LYRICS" from "SONG_NAME" by ARTIST. However, your title can only be 69 characters total, and this template will sometimes generate titles that are longer. One strategy for solving this problem is to truncate the entire string to 69 characters. However, a better approach is to truncate the less important parts of the string first. I.e., your algorithm might look something like this: truncate the lyrics until the entire string is <= 69 characters; if you still need to truncate, truncate the artist name until the entire string is <= 69 characters; if you still need to truncate, truncate the song name until the entire string is <= 69 characters; if all else fails, truncate the entire string to 69 characters. Ideally the algorithm would also limit the amount each part of the string could be truncated. E.g., step 1 would really be "Truncate the lyrics to a minimum of 10 characters until the entire string is <= 69 characters". Since this is such a common situation, I was wondering if someone has a library or code snippet that can take care of it.
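
    The prioritized truncation is small enough to hand-roll if no library turns up; a sketch of the algorithm (written in Java here, but it ports directly), with the 10-character floor applied per part:

        public class TitleTruncator {
            private static final int MAX = 69;
            private static final int MIN_PART = 10; // floor below which a part is never cut

            private static String format(String lyrics, String song, String artist) {
                return "\"" + lyrics + "\" from \"" + song + "\" by " + artist;
            }

            // Shorten part by up to overflow characters, but never below MIN_PART.
            private static String shave(String part, int overflow) {
                int floor = Math.min(MIN_PART, part.length());
                return part.substring(0, Math.max(floor, part.length() - overflow));
            }

            public static String title(String lyrics, String song, String artist) {
                String t = format(lyrics, song, artist);
                if (t.length() > MAX) { lyrics = shave(lyrics, t.length() - MAX); t = format(lyrics, song, artist); }
                if (t.length() > MAX) { artist = shave(artist, t.length() - MAX); t = format(lyrics, song, artist); }
                if (t.length() > MAX) { song = shave(song, t.length() - MAX); t = format(lyrics, song, artist); }
                return t.length() > MAX ? t.substring(0, MAX) : t; // last resort: hard cut
            }
        }

    Each step recomputes the overflow from the freshly formatted string, so savings made on the lyrics reduce how much the more important parts lose.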

    Read the article

  • Java application design question

    - by ring bearer
    I have a hobby project, which is basically to maintain 'todo' tasks in the way I like. One task can be described as: public class TodoItem { private String subject; private Date dueBy; private Date startBy; private Priority priority; private String category; private Status status; private String notes; } As you can imagine, I would have 1000s of todo items at a given time. What is the best strategy to store a todo item (currently an XML file), such that all the items are loaded quickly on app startup (the application shows a kind of dashboard of all the items at startup)? What is the best way to design its back-end so that it can be ported to Android or a J2ME based phone? Currently this is done using Java Swing. What should I concentrate on so that it works efficiently on a device where memory is limited? The application throws open a form to enter a new todo task. For now, I would like to save the newly added task to my-todos.xml once the user presses the "save" button. What are the common ways to append such a change to an existing XML file? (Note that I don't want to read the whole file again and then persist it.)
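
    On the last question: if the file is kept in a fixed shape that always ends with the document's closing tag, one low-tech way to append without reparsing is to seek back over that tag and write the new element in place. A sketch, assuming a hypothetical <todos> root whose closing tag is always the last thing in the file:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class TodoXmlAppender {
            private static final String CLOSING_TAG = "</todos>\n"; // assumed fixed footer

            // Appends one already-serialized <todoItem>...</todoItem> element
            // just before the closing tag, then rewrites the tag.
            public static void append(String path, String todoItemXml) throws IOException {
                RandomAccessFile raf = new RandomAccessFile(path, "rw");
                try {
                    byte[] tag = CLOSING_TAG.getBytes("UTF-8");
                    raf.seek(raf.length() - tag.length); // position just before </todos>
                    raf.write(todoItemXml.getBytes("UTF-8"));
                    raf.write(tag);                      // restore the footer
                } finally {
                    raf.close();
                }
            }
        }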

    Read the article

  • How can I SETF an element in a tree by an accessor?

    - by Willi Ballenthin
    We've been using Lisp in my AI course. The assignments I've received have involved searching and generating tree-like structures. For each assignment, I've ended up writing something like: (defun initial-state () (list 0 ; score nil ; children 0 ; value 0)) ; something else and building my functions around these "states", which are really just nested lists with some loosely defined structure. To make the structure more rigid, I've tried to write accessors, such as: (defun state-score ( state ) (nth 2 state)) This works for reading the value (which should be all I need in a nicely functional world; however, as time crunches and I start to madly hack, sometimes I want a mutable structure). I don't seem to be able to SETF the returned ...thing (place? value? pointer?). I get an error with something like: (setf (state-score *state*) 10) Sometimes I seem to have a little more luck writing the accessor/mutator as a macro: (defmacro state-score ( state ) `(nth 2 ,state)) However, I don't know why this should be a macro, so I certainly shouldn't write it as a macro (except that sometimes it works; programming by coincidence is bad). What is an appropriate strategy to build up such structures? More importantly, where can I learn about what's going on here (what operations affect the memory in what way)?

    Read the article

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    I came up with this question when playing with the Firefox web cache: how does a browser cache responses in limited disk space (take my configuration as an example: 50MB is the upper bound)? I think two approaches can be employed. One is to cache each response object whole, one by one; but this is inefficient and will introduce external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50MB) as one contiguous file, splitting it into fixed-length slots; incoming response objects are then treated as blocks of data with the same length as the slots. We can fill slots until the whole file is used up, then some replacement algorithm can be used to swap out the old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. But when I look in Firefox's Cache directory, I find it (maybe) uses a different method: a lot of varied-length files reside in that directory, and all those files are filled with undisplayable characters. I don't know, but really want to know, what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.
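
    The slotted design is easy to prototype; here is a minimal sketch in Java, assuming fixed 4KB slots inside one preallocated region and plain FIFO eviction (a real cache would persist the index and pick a smarter replacement policy):

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Fixed-length slots carved out of one region; an object may span several
        // (not necessarily adjacent) slots, so there is no external fragmentation,
        // only internal waste at the tail of each object's last slot.
        public class SlotCache {
            private static final int SLOT = 4096;
            private final byte[] space;
            private final Deque<Integer> freeSlots = new ArrayDeque<Integer>();
            private final Deque<String> fifo = new ArrayDeque<String>(); // eviction order
            private final Map<String, Entry> index = new HashMap<String, Entry>();

            private static class Entry { List<Integer> slots = new ArrayList<Integer>(); int length; }

            public SlotCache(int totalBytes) {
                space = new byte[totalBytes];
                for (int i = 0; i < totalBytes / SLOT; i++) freeSlots.add(i);
            }

            public void put(String url, byte[] body) {
                remove(url); // drop any stale copy first
                int needed = (body.length + SLOT - 1) / SLOT;
                while (freeSlots.size() < needed && !fifo.isEmpty()) remove(fifo.peekFirst());
                if (freeSlots.size() < needed) return; // object bigger than the whole cache
                Entry e = new Entry();
                e.length = body.length;
                for (int off = 0; off < body.length; off += SLOT) {
                    int slot = freeSlots.removeFirst();
                    e.slots.add(slot);
                    System.arraycopy(body, off, space, slot * SLOT, Math.min(SLOT, body.length - off));
                }
                index.put(url, e);
                fifo.addLast(url);
            }

            public byte[] get(String url) {
                Entry e = index.get(url);
                if (e == null) return null;
                byte[] out = new byte[e.length];
                for (int i = 0; i < e.slots.size(); i++) {
                    System.arraycopy(space, e.slots.get(i) * SLOT, out, i * SLOT,
                                     Math.min(SLOT, e.length - i * SLOT));
                }
                return out;
            }

            private void remove(String url) {
                Entry e = index.remove(url);
                if (e != null) { freeSlots.addAll(e.slots); fifo.remove(url); }
            }
        }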

    Read the article

  • Image drawing library for Haskell?

    - by absz
    I'm working on a Haskell program for playing spatial games: I have a graph of a bunch of "individuals" playing the Prisoner's Dilemma, but only with their immediate neighbors, and copying the strategies of the people who do best. I've reached a point where I need to draw an image of the world, and this is where I've hit problems. Two of the possible geometries are easy: if people have four or eight neighbors each, then I represent each one as a filled square (with color corresponding to strategy) and tile the plane with these. However, I also have a situation where people have six neighbors (hexagons) or three neighbors (triangles). My question, then, is: what's a good Haskell library for creating images and drawing shapes on them? I'd prefer that it create PNGs, but I'm not incredibly picky. I was originally using Graphics.GD, but it only exports bindings to functions for drawing points, lines, arcs, ellipses, and non-rotated rectangles, which is not sufficient for my purposes (unless I want to draw hexagons pixel by pixel*). I looked into using foreign import, but it's proving a bit of a hassle (partly because the polygon-drawing function requires an array of gdPoint structs), and given that my requirements may grow, it would be nice to use an in-Haskell solution and not have to muck about with the FFI (though if push comes to shove, I'm willing to do that). Any suggestions? * That is also an option, actually; any tips on how to do that would also be appreciated, though I think a library would be easier.

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like: public double[] runMyAlgorithm(double[] inputData); or alternatively a reference to an output array could be passed in: public void runMyAlgorithm(double[] inputData, double[] outputData); Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are: 1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage. 2. Pre-allocate temporary arrays and store them as final fields in the algorithm object; the big downside is that this would mean only one thread could run the algorithm at any one time. 3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously. 4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space. 5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array, so that different threads will use a different area of the array independently. This will obviously require some code to manage the offsets and the allocation of array ranges. Any thoughts on which approach would be best (and why)?
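
    The third option (ThreadLocal scratch space) tends to work well for exactly this shape of problem; a minimal sketch, with the buffer size purely illustrative:

        // Per-thread reusable scratch buffer: nothing is allocated in steady state,
        // and there is no contention, because each thread sees its own array.
        public class Scratch {
            private static final ThreadLocal<double[]> BUFFER = new ThreadLocal<double[]>() {
                @Override protected double[] initialValue() {
                    return new double[100000]; // assumed worst-case temporary size
                }
            };

            // Returns a buffer of at least n doubles, growing this thread's copy if needed.
            public static double[] get(int n) {
                double[] buf = BUFFER.get();
                if (buf.length < n) {
                    buf = new double[n];
                    BUFFER.set(buf);
                }
                return buf;
            }
        }

    The usual caveats apply: the buffer contains stale data from the previous call, and it must never be handed to another thread or returned as the algorithm's output.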

    Read the article

  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to creating a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model; as far as I can tell, they are all referring to the same concept. I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating both "ivory tower" architects and developers; the work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets. I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much on best practices in terms of CIM development. This is something I'm quite interested in getting feedback on, since having a strong CIM is a prerequisite to SOA. Thanks for your help! KA Update: I've heard another strategy that goes along with an overall SOA implementation: get the business involved, and seek executive sponsorship. This would be part of the "top-down" effort.

    Read the article

  • Is there such a thing as IMAP for podcasts?

    - by Gerrit
    Is there such a thing as IMAP for podcasts? I own a desktop, laptop, iPod, smartphone and a web client, all downloading StackOverflow podcasts (among others). They all tell me which episodes are available and which have already been played. Everything is a horrible mess, of course. My iPod is somewhat in sync with my desktop, but everything else is a random jungle. The same problem with e-mail is solved by IMAP: every device gets content and meta-information from one server, and stays in sync with it. Per device, I can set preferences (do or do not download the complete archive, including junk mail). Can we implement the IMAP approach for podcasts? Or is there a better metaphor/standard to solve this problem? What would the adoption strategy look like? (By the way: except for the Windows smartphone, I own a full Apple stack of products. Even then, I run into this problem.) UPDATE The RSS-to-IMAP link to SourceForge looks promising, but very alpha/experimental. UPDATE 2 The one thing RSS is missing is the command/method/parameter/attribute to delete/unread items. RSS can only add, not remove. If RSS(N+1) (3?) could add a value for unread="true|false", it would be solved. If I cached all my RSS feeds on my own server and added the attribute myself, I would only have to convince iTunes and every other client to respect it.

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB: Clients <= HTTP = [ RESTlet <= HTTP = CouchDB ] I'm using CouchDB also to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes the password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests with them. If authentication fails or there is no entry available, I'll dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users that have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.
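
    The LinkedHashMap LRU itself is only a few lines; a sketch (wrap it with Collections.synchronizedMap or a lock, since RESTlet serves requests from many threads):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Access-ordered LinkedHashMap: removeEldestEntry turns it into a bounded LRU.
        public class AuthCache<K, V> extends LinkedHashMap<K, V> {
            private final int capacity;

            public AuthCache(int capacity) {
                super(16, 0.75f, true); // true = access order, which is what makes it LRU
                this.capacity = capacity;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        }

    For the staleness question, one compromise is to store each entry with its own short time-to-live, so a changed password is honored within, say, a minute on every node without any cross-server invalidation traffic.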

    Read the article

  • How do I create a good evaluation function for a new board game?

    - by A. Rex
    I write programs to play board game variants sometimes. The basic strategy is standard alpha-beta pruning or similar searches, sometimes augmented by the usual approaches to endgames or openings. I've mostly played around with chess variants, so when it comes time to pick my evaluation function, I use a basic chess evaluation function. However, now I am writing a program to play a completely new board game. How do I choose a good, or even decent, evaluation function? The main challenges are that the same pieces are always on the board, so the usual material function won't change based on position, and the game has been played less than a thousand times or so, so humans don't necessarily play it well enough yet to give insight. (PS. I considered a MoGo approach, but random games aren't likely to terminate.) Any ideas? Game details: the game is played on a 10-by-10 board with a fixed six pieces per side. The pieces have certain movement rules and interact in certain ways, but no piece is ever captured. The goal of the game is to have enough of your pieces in certain special squares on the board. The goal of the computer program is to provide a player which is competitive with, or better than, current human players.
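
    Given the stated goal (get pieces onto the special squares), a common starting point is a linear combination of hand-picked positional features, with the weights tuned later by self-play; a sketch, in which the features and weights are placeholders:

        // Linear evaluation from one player's point of view: positive is good for
        // 'me'. The weights are initial guesses, meant to be tuned by self-play.
        interface Board {
            int piecesOnTargets(int player);        // pieces already on special squares
            int totalDistanceToTargets(int player); // sum of distances to nearest special square
            int mobility(int player);               // number of legal moves available
        }

        class Evaluator {
            private static final double W_GOAL = 10.0, W_DISTANCE = -1.0, W_MOBILITY = 0.1;

            double evaluate(Board b, int me, int opponent) {
                return side(b, me) - side(b, opponent);
            }

            private double side(Board b, int p) {
                return W_GOAL * b.piecesOnTargets(p)
                     + W_DISTANCE * b.totalDistanceToTargets(p)
                     + W_MOBILITY * b.mobility(p);
            }
        }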

    Read the article

  • InstallExecuteSequence cache interferes with custom action operation

    - by Dima G
    I need to upgrade a product that could be installed in a per-user context to a new version that is always in a per-machine context. The requirements are: whether the old version was installed in a per-user (no matter who) or per-machine context should be completely seamless to an administrator user who performs the upgrade. The MSI upgrade should succeed without the need to know the password of the user who originally installed the previous version of the product in a per-user context. The installation should be performed from a single .msi file (no setup.exe is allowed). The installation should be able to run in a silent (non-UI) mode. No reboots are allowed during installation. My strategy is to find out at the beginning of the installation whether the product is already installed in a per-user context, and if so, to transform the registry keys manually to the per-machine context (I checked: no additional changes, such as file system changes, are needed except this transform). I figured out how to move all appropriate keys in the registry from the user settings to the machine settings (pre-loading the appropriate user hive in case it didn't appear in HKEY_USERS) and created a custom action that does it, and it does work when I run it as a stand-alone executable before running the MSI. The problem, however, is that when Windows Installer runs the InstallExecuteSequence, it first creates a 'cached product context' for all products. So when my custom action runs in the course of the InstallExecuteSequence, this cache has already been created. Thus the FindRelatedProducts action, which determines whether an older product with the same upgrade code exists, looks at that cache and ignores the changes that my custom action applied. If, before running the MSI, the previous product was in the per-user context, FindRelatedProducts will look at the cache and not apply the upgrade and remove the previous version, because the new product is in the per-machine context, even though the previous product version has by that time already been moved to the per-machine context in the registry by my custom action. What can be done to solve this problem without violating the requirements stated above?

    Read the article

  • Consecutive Tables in Latex

    - by Tim
    Hi, I wonder how to place several tables consecutively in LaTeX. The page with the text right before the first table has a little space, but not enough for the first table, so the first table gets placed at the top of the next page, although I use "\begin{table}[!h]" for it. The second table does not fit into the rest of the page after the first table, so I think I might use longtable for it to span the rest of that page and the top of the next page. Similarly, I use longtable for the third table. The LaTeX code is as follows: ... % some text \begin{table}[!h] \caption{Table 1. \label{tab:1}} \begin{center} \begin{tabular}{c c} ... \end{tabular} \end{center} \end{table} \begin{center} \begin{longtable}{ c c } \caption{Table 2. \label{tab:2}}\\ ... \end{longtable} \end{center} \begin{center} \begin{longtable}{ c c } \caption{Table 3. \label{tab:3}}\\ ... \end{longtable} \end{center} ... % some text In the compiled PDF file it turns out that the order of the tables is messed up: the first table is placed behind the second and third ones, and the second one spans the page with the text before the tables and the next page, with the third one following it. I would like to know how I can make the three tables appear consecutively in order, with no blank space left between them or between the text and the tables. Or, if what I hope for is not possible, what is the best strategy then? Thanks and regards! EDIT: Removing [!h] makes no improvement; the first table is still behind the second and the third.
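
    The reordering happens because longtable is not a float: the longtables are typeset exactly where they occur, while the table float is free to drift past them. One common fix, sketched here under the assumption that the caption (or capt-of) package is loaded, is to stop the first table from floating as well; note also that wrapping longtable in a center environment adds extra vertical space, since longtable centers itself:

        % Table 1 as a non-floating block, so it cannot be reordered past the longtables.
        \noindent
        \begin{minipage}{\linewidth}
          \centering
          \captionof{table}{Table 1.\label{tab:1}}
          \begin{tabular}{c c}
            ...
          \end{tabular}
        \end{minipage}

        % longtable is typeset in place and may break across pages; no center needed.
        \begin{longtable}{c c}
          \caption{Table 2.\label{tab:2}}\\
          ...
        \end{longtable}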

    Read the article

  • Is a GWT app running on Google App Engine protected from CSRF

    - by gerdemb
    I'm developing a GWT app running on the Google App Engine and wondering whether I need to worry about cross-site request forgery, or whether that is automatically taken care of for me. For every RPC request that requires authentication, I have the following code: public class BookServiceImpl extends RemoteServiceServlet implements BookService { public void deleteInventory(Key<Inventory> inventoryKey) throws NotLoggedInException, InvalidStateException, NotFoundException { DAO dao = new DAO(); // This will throw NotLoggedInException if user is not logged in User user = dao.getCurrentUser(); // Do deletion here } } public final class DAO extends DAOBase { public User getCurrentUser() throws NotLoggedInException { currentUser = UserServiceFactory.getUserService().getCurrentUser(); if(currentUser == null) { throw new NotLoggedInException(); } return currentUser; } } I couldn't find any documentation on how the UserService checks authentication. Is it enough to rely on the code above, or do I need to do more? I'm a beginner at this, but from what I understand, to avoid CSRF attacks some of the strategies are: adding an authentication token in the request payload instead of just checking a cookie; checking the HTTP Referer header. I can see that I have cookies set from Google with what look like SID values, but I can't tell from the serialized Java objects in the payloads whether tokens are being passed or not. I also don't know if the Referer header is being used or not. So, am I worrying about a non-issue? If not, what is the best strategy here? This is a common enough problem that there must be standard solutions out there...
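
    For reference, the token strategy reduces to a session-token comparison on the server; a minimal sketch in plain servlet terms, independent of how GWT serializes the payload (the header name is an arbitrary choice, and newer GWT releases ship their own XSRF protection for RPC, which should be preferred where available):

        import java.security.SecureRandom;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpSession;

        // Session-token check: the token lives in the user's session and must be
        // echoed back by the client in a header. A cross-site attacker can make the
        // browser send cookies, but cannot read or forge the session token.
        public class CsrfGuard {
            private static final SecureRandom RANDOM = new SecureRandom();
            private static final String ATTR = "csrfToken";

            // Handed to the client once (e.g. embedded in the host page).
            public static String tokenFor(HttpSession session) {
                String token = (String) session.getAttribute(ATTR);
                if (token == null) {
                    token = Long.toHexString(RANDOM.nextLong()) + Long.toHexString(RANDOM.nextLong());
                    session.setAttribute(ATTR, token);
                }
                return token;
            }

            // Called at the top of every state-changing RPC method.
            public static boolean isValid(HttpServletRequest req) {
                String expected = (String) req.getSession().getAttribute(ATTR);
                return expected != null && expected.equals(req.getHeader("X-CSRF-Token"));
            }
        }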

    Read the article

  • How to implement Horner's scheme for multivariate polynomials?

    - by gsreynolds
    Background: I need to solve polynomials in multiple variables using Horner's scheme in Fortran 90/95. The main reason for doing this is the increased efficiency and accuracy that comes from using Horner's scheme to evaluate polynomials. I currently have an implementation of Horner's scheme for univariate/single-variable polynomials. However, developing a function to evaluate multivariate polynomials using Horner's scheme is proving to be beyond me. An example bivariate polynomial would be: 12x^2y^2+8x^2y+6xy^2+4xy+2x+2y, which would be factorised to x(x(y(12y+8))+y(6y+4)+2)+2y and then evaluated for particular values of x and y. Research: I've done my research and found a number of papers, such as: staff.ustc.edu.cn/~xinmao/ISSAC05/pages/bulletins/articles/147/hornercorrected.pdf citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.8637&rep=rep1&type=pdf www.is.titech.ac.jp/~kojima/articles/B-433.pdf Problem: However, I'm not a mathematician or computer scientist, so I'm having trouble with the mathematics used to convey the algorithms and ideas. As far as I can tell, the basic strategy is to turn a multivariate polynomial into separate univariate polynomials and compute it that way. Can anyone help me? If anyone could help me turn the algorithms into pseudo-code that I can implement into Fortran myself, I would be very grateful.
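
    The nesting can be made concrete: view p(x, y) as a polynomial in x whose coefficients are themselves polynomials in y, and apply the univariate scheme at both levels. A sketch (in Java rather than Fortran, but it translates line for line), with c[i][j] being the coefficient of x^i y^j:

        // Bivariate Horner: p(x, y) = sum over i,j of c[i][j] * x^i * y^j, evaluated
        // as a Horner scheme in x whose "coefficients" are Horner schemes in y.
        public class Horner {
            // Univariate Horner; c[0] is the constant term, c[c.length-1] the leading one.
            static double horner1(double[] c, double y) {
                double acc = 0.0;
                for (int j = c.length - 1; j >= 0; j--) acc = acc * y + c[j];
                return acc;
            }

            static double horner2(double[][] c, double x, double y) {
                double acc = 0.0;
                for (int i = c.length - 1; i >= 0; i--) acc = acc * x + horner1(c[i], y);
                return acc;
            }

            public static void main(String[] args) {
                // 12x^2y^2 + 8x^2y + 6xy^2 + 4xy + 2x + 2y (the example above)
                double[][] c = {
                    { 0, 2, 0 },   // x^0 terms: 2y
                    { 2, 4, 6 },   // x^1 terms: 2 + 4y + 6y^2
                    { 0, 8, 12 },  // x^2 terms: 8y + 12y^2
                };
                System.out.println(horner2(c, 1.0, 1.0)); // prints 34.0
            }
        }

    The same idea recurses for more variables: a polynomial in x1..xn is a univariate polynomial in x1 whose coefficients are polynomials in x2..xn.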

    Read the article

  • WPF, notify a child in the element tree about an event in a parent

    - by jester
    I am developing a WPF app, and I want an event in a parent to be notified to several of its children in the element tree, so that each of them can take action accordingly. I know that a custom RoutedEvent can be used to signal in the other direction, from a child to one of its ancestors, by bubbling the event upwards, so that any of the ancestor elements can handle the event. What I want is for the children to be notified about an event in the parent, so that each can handle it appropriately. What is the best strategy to achieve this? EDIT: Clarifying the comments: say I have a parent UserControl. It has a TabControl, and its contents are several nested child UserControls. Now consider a scenario where I want the TabControl.SelectionChanged() event to cause some changes in each of the child UserControls. How can I achieve this? (The contents of each tab form a UserControl, which may itself contain another few levels of child UserControls. I want the UserControls at the bottom level to know about the SelectionChanged() event and respond accordingly.)

    Read the article

  • Javascript function to add class to a list element based on # in url.

    - by Jason
    I am trying to create a javascript function to add and remove a class on a list element based on the #tag at the end of the url on a page. The page has several different states, each with a different # in the url. I am currently using this script to change the style of a given element based on the # in the url when the user first loads the page. However, if the user navigates to a different section of the page, the style added on page load stays; I would like it to change. <script type="text/javascript"> var hash=location.hash.substring(1); if (hash == 'strategy'){ document.getElementById('strategy_link').style.backgroundPosition ="-50px"; } if (hash == 'branding'){ document.getElementById('branding_link').style.backgroundPosition ="-50px"; } if (hash == 'marketing'){ document.getElementById('marketing_link').style.backgroundPosition ="-50px"; } if (hash == 'media'){ document.getElementById('media_link').style.backgroundPosition ="-50px"; } if (hash == 'management'){ document.getElementById('management_link').style.backgroundPosition ="-50px"; } if (hash == ''){ document.getElementById('shop1').style.display ="block"; } </script> Additionally, I am using a function to change the class of the element onClick, but when a user comes to a specific # on the page directly from another page and then clicks to a different location, two elements appear active. <script type="text/javascript"> function selectInList(obj) { $("#circularMenu").children("li").removeClass("highlight"); $(obj).addClass("highlight"); } </script> You can see this here: http://www.perksconsulting.com/dev/capabilities.php#branding Thanks.

    Read the article

  • @PrePersist with entity inheritance

    - by gerry
    I'm having some problems with inheritance and the @PrePersist annotation. My source code looks like the following. The 'base' class with the annotated updateDates() method: @javax.persistence.Entity @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS) public class Base implements Serializable{ ... @Id @GeneratedValue protected Long id; ... @Column(nullable=false) @Temporal(TemporalType.TIMESTAMP) private Date creationDate; @Column(nullable=false) @Temporal(TemporalType.TIMESTAMP) private Date lastModificationDate; ... public Date getCreationDate() { return creationDate; } public void setCreationDate(Date creationDate) { this.creationDate = creationDate; } public Date getLastModificationDate() { return lastModificationDate; } public void setLastModificationDate(Date lastModificationDate) { this.lastModificationDate = lastModificationDate; } ... @PrePersist protected void updateDates() { if (creationDate == null) { creationDate = new Date(); } lastModificationDate = new Date(); } } Now the 'Child' class that should inherit all methods "and annotations" from the base class: @javax.persistence.Entity @NamedQueries({ @NamedQuery(name=Sensor.QUERY_FIND_ALL, query="SELECT s FROM Sensor s") }) public class Sensor extends Base { ... // additional attributes @Column(nullable=false) protected String value; ... // additional getters, setters ... } If I store/persist instances of the Base class to the database, everything works fine. The dates get updated. But now, if I want to persist a child instance, the database throws the following exception: MySQLIntegrityConstraintViolationException: Column 'CREATIONDATE' cannot be null So, in my opinion, this is caused because in the child class the method "@PrePersist protected void updateDates()" is not called/invoked before persisting the instances to the database. What is wrong with my code?
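
    For reference, JPA lifecycle callbacks declared on a superclass are inherited by entity subclasses when the superclass is itself an @Entity or an @MappedSuperclass; a minimal sketch of the mapped-superclass arrangement (class names are illustrative):

        import java.io.Serializable;
        import java.util.Date;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.MappedSuperclass;
        import javax.persistence.PrePersist;
        import javax.persistence.Temporal;
        import javax.persistence.TemporalType;

        // Timestamp handling lives once in the mapped superclass; every entity that
        // extends it inherits both the columns and the @PrePersist callback.
        @MappedSuperclass
        public abstract class Timestamped implements Serializable {
            @Id @GeneratedValue
            protected Long id;

            @Column(nullable = false)
            @Temporal(TemporalType.TIMESTAMP)
            private Date creationDate;

            @PrePersist
            protected void updateDates() {
                if (creationDate == null) creationDate = new Date();
            }
        }

        @Entity
        class SensorEntity extends Timestamped {
            @Column(nullable = false)
            protected String value;
        }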

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git, and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change the master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.04; resources available are 8 GB RAM, 500 GB of free disk space and a 4-core CPU at 3 GHz. I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.

    Read the article

  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the below tables, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all the tables below into Lists to perform the route planner logic. I would appreciate it if anyone has any ideas on how to improve performance, or any suggestions for another way to approach this problem of handling a large data set. Coordinate Connections Table (INT,INT,INT, DOUBLE)( Containing 50,000 Coordinate Connections) ID, FROMCOORDID, TOCOORDID, DISTANCE 1 1 2 0.383657 2 1 17 0.173201 3 1 63 0.258781 4 1 64 0.013726 5 1 65 0.459829 6 1 95 0.458769 Coordinate Table (INT,DECIMAL, DECIMAL) (Containing 4700 Coordinates) ID , LAT, LNG 0 59.352669 -7.264341 1 59.352669 -7.264341 2 59.350012 -7.260653 3 59.337585 -7.189798 4 59.339221 -7.193582 5 59.341408 -7.205888 Bus Stop Table (INT, INT, INT)(Containing 15000 Stops) StopID RouteID COORDINATEID 1000100001 100 17 1000100002 100 18 1000100003 100 19 1000100004 100 20 1000100005 100 21 1000100006 100 22 1000100007 100 23 This is how long it takes to load all the data from each table: stop.findAll = 148ms, stops.size: 15670 Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_ coord.findAll = 51ms , coordinates.size: 4704 Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_ coordinateConnectionDao.findAll = 238ms ; coordConnectioninates.size:48132 Hibernate Annotations @Entity @Table(name = "STOPS") public class Stop implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "STOPID") private int stopID; @Column(name = "ROUTEID", nullable = false) private int routeID; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "COORDINATEID", nullable = false) private Coordinate coordinate; } @Table(name = "COORDINATES") public class Coordinate { @Id @GeneratedValue @Column(name = "COORDINATEID") private int CoordinateID; @Column(name = "LAT") private double latitude; @Column(name = "LNG") private double longitude; } @Entity @Table(name = "COORDCONNECTIONS") public class CoordConnection { @Id @GeneratedValue @Column(name = "COORDCONNECTIONID") private int CoordinateID; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "FROMCOORDID", nullable = false) private Coordinate fromCoordID; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "TOCOORDID", nullable = false) private Coordinate toCoordID; @Column(name = "DISTANCE", nullable = false) private double distance; }
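
    One cheap win for reference data that never changes during a run is to load it read-only in a single fetch-joined query, then index it into plain Java maps for the planner; a hedged sketch (property names follow the mappings above):

        import java.util.List;
        import org.hibernate.CacheMode;
        import org.hibernate.Session;

        // Reference data is immutable during planning, so skip dirty-checking
        // snapshots, and fetch both coordinate ends in the same SQL statement to
        // avoid n+1 selects while building the in-memory graph.
        public class GraphLoader {
            @SuppressWarnings("unchecked")
            public static List<CoordConnection> loadConnections(Session session) {
                return session
                    .createQuery("from CoordConnection c join fetch c.fromCoordID join fetch c.toCoordID")
                    .setReadOnly(true)
                    .setCacheMode(CacheMode.IGNORE)
                    .list();
            }
        }

    Indexing the result by FROMCOORDID in a HashMap then gives the planner O(1) neighbour lookups instead of scans over the 48,000-element list.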

    Read the article

  • Lost with Hibernate - OneToMany resulting in the one being pulled back many times

    - by Andy
    I have this DB design: CREATE TABLE report ( ID MEDIUMINT PRIMARY KEY NOT NULL AUTO_INCREMENT, user MEDIUMINT NOT NULL, created TIMESTAMP NOT NULL, state INT NOT NULL, FOREIGN KEY (user) REFERENCES user(ID) ON UPDATE CASCADE ON DELETE CASCADE ); CREATE TABLE reportProperties ( ID MEDIUMINT NOT NULL, k VARCHAR(128) NOT NULL, v TEXT NOT NULL, PRIMARY KEY( ID, k ), FOREIGN KEY (ID) REFERENCES report(ID) ON UPDATE CASCADE ON DELETE CASCADE ); and this Hibernate markup: @Table(name="report") @Entity(name="ReportEntity") public class ReportEntity extends Report{ @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name="ID") private Integer ID; @Column(name="user") private Integer user; @Column(name="created") private Timestamp created; @Column(name="state") private Integer state = ReportState.RUNNING.getLevel(); @OneToMany(mappedBy="pk.ID", fetch=FetchType.EAGER) @JoinColumns( @JoinColumn(name="ID", referencedColumnName="ID") ) @MapKey(name="pk.key") private Map<String, ReportPropertyEntity> reportProperties = new HashMap<String, ReportPropertyEntity>(); } and @Table(name="reportProperties") @Entity(name="ReportPropertyEntity") public class ReportPropertyEntity extends ReportProperty{ @Embeddable public static class ReportPropertyEntityPk implements Serializable{ /** * long#serialVersionUID */ private static final long serialVersionUID = 2545373078182672152L; @Column(name="ID") protected int ID; @Column(name="k") protected String key; } @EmbeddedId protected ReportPropertyEntityPk pk = new ReportPropertyEntityPk(); @Column(name="v") protected String value; } And I have inserted one report and 4 properties for that report. Now when I execute this: this.findByCriteria( Order.asc("created"), Restrictions.eq("user", user.getObject(UserField.ID)) ); I get back the report 4 times, instead of just once with a Map containing the 4 properties. I'm not great at Hibernate, to be honest; I prefer straight SQL, but I must learn, and I can't see what is wrong. Any suggestions?
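
    For what it's worth, duplicated roots are the classic symptom of an eager collection fetched via an outer join: the SQL result has one row per property, and Hibernate returns the root entity once per row. On a Criteria query, the usual remedy is a result transformer; a sketch:

        import java.util.List;
        import org.hibernate.Criteria;
        import org.hibernate.Session;
        import org.hibernate.criterion.Order;
        import org.hibernate.criterion.Restrictions;

        // Collapse the joined result set back to unique roots; each ReportEntity
        // then appears once, with its reportProperties map fully populated.
        public class ReportDao {
            @SuppressWarnings("unchecked")
            public List<ReportEntity> findForUser(Session session, int userId) {
                return session.createCriteria(ReportEntity.class)
                              .add(Restrictions.eq("user", userId))
                              .addOrder(Order.asc("created"))
                              .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
                              .list();
            }
        }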

    Read the article

  • Handling selection and deselection of objects in a user interface

    - by Dennis
    I'm writing a small audio application (in Silverlight, but that's not really relevant, I suppose), and I'm struggling with a problem I've faced before and never solved properly. This time I want to get it right. In the application there is one Arrangement control, which contains several Track controls, and every Track can contain AudioObject controls (these are all custom user controls). The user needs to be able to select audio objects, and when these objects are selected they are rendered differently. I can make this happen by hooking into the MouseDown event of the AudioObject control and setting state accordingly. So far so good, but when an audio object is selected, all other audio objects need to be deselected (well, unless the user holds the shift key). Audio objects don't know about other audio objects, though, so they have no way to tell the rest to deselect themselves. Now, if I were to approach this problem like I did the last time, I would pass a reference to the Arrangement control in the constructor of the AudioObject control and give the Arrangement control a DeselectAll() method or something like that, which would tell all Track controls to deselect all their AudioObject controls. This feels wrong, and if I apply this strategy to similar issues, I'm afraid I will soon end up with every object having a reference to every other object, creating one big tightly coupled mess. It feels like opening the floodgates for poorly designed code. Is there a better way to handle this?
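
    One decoupled alternative is a shared selection model (a small mediator): each AudioObject talks only to the model and subscribes to its change notifications, so siblings never hold references to one another. A sketch of the idea, written here in Java terms:

        import java.util.LinkedHashSet;
        import java.util.Set;

        // A mediator holding the current selection. Controls call select and
        // subscribe for changes; no control needs a reference to any other control.
        public class SelectionModel<T> {
            public interface Listener<T> { void selectionChanged(Set<T> selected); }

            private final Set<T> selected = new LinkedHashSet<T>();
            private final Set<Listener<T>> listeners = new LinkedHashSet<Listener<T>>();

            public void addListener(Listener<T> l) { listeners.add(l); }

            // Plain click: replace the selection. Shift-click: extend it.
            public void select(T item, boolean extend) {
                if (!extend) selected.clear();
                selected.add(item);
                fire();
            }

            public boolean isSelected(T item) { return selected.contains(item); }

            private void fire() {
                for (Listener<T> l : listeners) l.selectionChanged(new LinkedHashSet<T>(selected));
            }
        }

    Each AudioObject registers as a listener and re-renders itself from isSelected() on every change, so the "deselect everyone else" behavior falls out of the model rather than out of control-to-control calls.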

    Read the article

  • Fluently setting C# properties and chaining methods

    - by John Feminella
    I'm using .NET 3.5. We have some complex third-party classes which are automatically generated and out of my control, but which we must work with for testing purposes. I see my team doing a lot of deeply-nested property getting/setting in our test code, and it's getting pretty cumbersome. To remedy the problem, I'd like to make a fluent interface for setting properties on the various objects in the hierarchical tree. There are a large number of properties and classes in this third-party library, and it would be too tedious to map everything manually. My initial thought was to just use object initializers. Red, Blue, and Green are properties, and Mix() is a method that sets a fourth property Color to the closest RGB-safe color with that mixed color. Paints must be homogenized with Stir() before they can be used. Bucket b = new Bucket() { Paint = new Paint() { Red = 0.4; Blue = 0.2; Green = 0.1; } }; That works to initialize the Paint, but I need to chain Mix() and other methods to it. Next attempt: Create<Bucket>(Create<Paint>() .SetRed(0.4) .SetBlue(0.2) .SetGreen(0.1) .Mix().Stir() ) But that doesn't scale well, because I'd have to define a method for each property I want to set, and there are hundreds of different properties in all the classes. Also, C# doesn't have a way to dynamically define methods prior to C# 4, so I don't think I can hook into things to do this automatically in some way. Third attempt: Create<Bucket>(Create<Paint>().Set(p => { p.Red = 0.4; p.Blue = 0.2; p.Green = 0.1; }).Mix().Stir() ) That doesn't look too bad, and seems like it'd be feasible. Is this an advisable approach? Is it possible to write a Set method that works this way? Or should I be pursuing an alternate strategy?
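
    The third attempt is workable and keeps the property names type-checked. Here is the shape of it, sketched in Java with Consumer standing in for C#'s Action<T> (the Paint members are placeholders): set applies a block of assignments to the target and returns it, so mix() and stir() can chain on afterwards.

        import java.util.function.Consumer;

        // Fluent setter: set(...) applies a block of property assignments to the
        // object and returns it, so further methods can chain on afterwards.
        class Paint {
            double red, blue, green;
            String color;

            Paint set(Consumer<Paint> block) {
                block.accept(this);
                return this;
            }

            Paint mix()  { color = "#663319"; return this; } // placeholder: nearest RGB-safe mix
            Paint stir() { return this; }                    // homogenize before use
        }

        class Demo {
            public static void main(String[] args) {
                Paint p = new Paint().set(x -> {
                    x.red = 0.4;
                    x.blue = 0.2;
                    x.green = 0.1;
                }).mix().stir();
                System.out.println(p.color);
            }
        }

    Because set is defined once on the base type, it scales across hundreds of properties with no per-property methods, which was the objection to the second attempt.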

    Read the article

  • Saving Abstract and Sub classes to database

    - by bretddog
    Hi, I have an abstract class "StrategyBase" and a set of subclasses, StrategyA/B/C etc. The subclasses use some of the properties of the base class, and have some individual properties. My question is how to save this to a database. I'm currently using SqlCE, and Linq-To-Sql, by creating entity classes automatically with SqlMetal.exe. I've seen the three solutions shown in this question, but I'm not able to see whether these solutions will work with SqlMetal/entity classes. Though it seems to me the "concrete table inheritance" would probably work without any manual modification. What about the other two; would they be problematic? For "single table inheritance", wouldn't all classes get all variables, even though they don't need them? And for the "class table inheritance" solution, I can't really see at all how that will map into the entity classes for a useful purpose (see the sketch below). I may note that I extend these partial entity classes to make the classes of my business objects. I may also consider moving to EntityFramework instead of SqlMetal/Linq2Sql, so it would be nice to know whether that makes any difference to which schema is easy to implement. One likely important thing to note is that I will constantly be developing new strategies, which means I have to modify the program code, and probably the database schema, when adding a new strategy. Sorry the question is a bit "all over the place", but hopefully there are some clear advantages/disadvantages here that you can advise on. Cheers!
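
    For the "class table inheritance" case, the schema itself may make the mapping easier to picture; a sketch in SQL (column names are made up): shared columns live once in the base table, and each subclass table holds only its own fields, sharing the base table's primary key.

        -- Class table inheritance for the strategy hierarchy: loading StrategyA
        -- means joining its table to StrategyBase on the shared key.
        CREATE TABLE StrategyBase (
            ID        INT PRIMARY KEY,
            Name      NVARCHAR(64) NOT NULL,  -- properties common to all strategies
            CreatedOn DATETIME NOT NULL
        );

        CREATE TABLE StrategyA (
            ID        INT PRIMARY KEY REFERENCES StrategyBase(ID),
            Threshold FLOAT NOT NULL          -- fields specific to StrategyA
        );

        CREATE TABLE StrategyB (
            ID        INT PRIMARY KEY REFERENCES StrategyBase(ID),
            WindowLen INT NOT NULL            -- fields specific to StrategyB
        );

    The trade-off for the constantly-growing hierarchy is then visible: each new strategy adds one table here, versus new nullable columns in single table inheritance, versus a whole duplicated table in concrete table inheritance.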

    Read the article

  • File based caching under PHP

    - by azatoth
    I've been using http://code.google.com/p/phpbrowscap/ for a project, and it usually works nicely. But a few times its cache, which consists of plain PHP files (see http://code.google.com/p/phpbrowscap/source/browse/trunk/browscap/Browscap.php#372 et al.), has been "zeroed", i.e. the whole cache file has become a large blob of NULs. Instead of trying to find out why the files become NULs, I thought it might be better to change the caching strategy to something more resilient. So I wonder if you have any good ideas about what would be a good solution; I've been looking at http://www.jongales.com/blog/2009/02/18/simple-file-based-php-cache-class/ and http://www.phpclasses.org/package/313-PHP-Cache-arbitrary-data-in-files-.html and I have also thought of just saving a serialized array to the file instead of the pure PHP it uses now; but I'm uncertain which approach I should target here. I'm grateful for any insight into this area of technology, as I know it's complex from a performance point of view.
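
    Whatever format is chosen, a file full of NULs is the usual signature of a torn write (two writers on the same cache file, or a crash mid-write). The standard defense is to write to a temporary file and rename it into place, since a same-filesystem rename swaps the file atomically; the idea, sketched in Java (PHP's tempnam()/rename() pair works the same way):

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;

        // Write-then-rename: readers only ever see either the old complete file or
        // the new complete file, never a half-written (or zeroed) one.
        public class AtomicCacheWriter {
            public static void write(File target, byte[] contents) throws IOException {
                File tmp = File.createTempFile(target.getName(), ".tmp", target.getParentFile());
                FileOutputStream out = new FileOutputStream(tmp);
                try {
                    out.write(contents);
                    out.getFD().sync();          // flush to disk before the swap
                } finally {
                    out.close();
                }
                if (!tmp.renameTo(target)) {     // atomic on POSIX same-filesystem renames
                    tmp.delete();
                    throw new IOException("could not replace " + target);
                }
            }
        }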

    Read the article
