Search Results

Search found 5987 results on 240 pages for 'ghost recon future intern'.


  • Custom property editors do not work for request parameters in Spring MVC?

    - by dvd
    Hello, I'm trying to create a multi-action web controller using Spring annotations. This controller will be responsible for adding and removing user profiles and preparing reference data for the JSP page.

        @Controller
        public class ManageProfilesController {

            @InitBinder
            public void initBinder(WebDataBinder binder) {
                binder.registerCustomEditor(UserAccount.class, "account",
                        new UserAccountPropertyEditor(userManager));
                binder.registerCustomEditor(Profile.class, "profile",
                        new ProfilePropertyEditor(profileManager));
                logger.info("Editors registered");
            }

            @RequestMapping("remove")
            public void up(@RequestParam("account") UserAccount account,
                           @RequestParam("profile") Profile profile) {
                ...
            }

            @RequestMapping("")
            public ModelAndView defaultView(@RequestParam("account") UserAccount account) {
                logger.info("Default view handling");
                ModelAndView mav = new ModelAndView();
                logger.info(account.getLogin());
                mav.addObject("account", account);
                mav.addObject("profiles", profileManager.getProfiles());
                mav.setViewName(view);
                return mav;
            }
            ...
        }

    Here is the relevant part of my webContext.xml file:

        <context:component-scan base-package="ru.mirea.rea.webapp.controllers" />
        <context:annotation-config/>
        <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
            <property name="mappings">
                <value>
                    ...
                    /home/users/manageProfiles=users.manageProfilesController
                </value>
            </property>
        </bean>
        <bean id="users.manageProfilesController" class="ru.mirea.rea.webapp.controllers.users.ManageProfilesController">
            <property name="view" value="home\users\manageProfiles"/>
        </bean>
        <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" />

    However, when I open the mapped URL, I get an exception:

        java.lang.IllegalArgumentException: Cannot convert value of type [java.lang.String] to required type [ru.mirea.rea.model.UserAccount]: no matching editors or conversion strategy found

    I use Spring 2.5.6 and plan to move to Spring 3.0 in the not very distant future. However, according to this JIRA issue, https://jira.springsource.org/browse/SPR-4182, it should already be possible in Spring 2.5.1. The debugger shows that the @InitBinder method is correctly called. What am I doing wrong?


  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.)

    - by mikep
    Hello, I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.), instead of just one main table which could end up having a lot of rows (e.g. just Books). I'm doing this to try to avoid a potential future performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.)

    Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. I'm going to split the adds across the tables. The goal is to try to keep each table pretty much even -- but that's a different issue.

    One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

        class Books3 < ActiveRecord::Base
          belongs_to :user
        end

    I'm assuming that the main performance hit would come in terms of memory and possibly some method call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_*. But for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste. For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this.

    So, a few questions:

    1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this?

    2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this?

    3) Does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author').

    Thank you!
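
    For question 3, one rough sketch of hiding the table split behind a single call site -- assuming a hypothetical book_table_index column on users that records which table was picked at creation time:

        class User < ActiveRecord::Base
          has_many :books1
          has_many :books2
          has_many :books3

          # Dispatch to whichever BooksN association this user was assigned,
          # so callers can write user.books.find_by_author('Author').
          def books
            send("books#{book_table_index}")
          end
        end

    As far as I know, has_many declarations are lazy: each one defines accessor methods but doesn't load anything until the association is actually used, so the unused declarations mostly cost a few generated methods rather than loaded data.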


  • Automatically generate table of function pointers in C.

    - by jeremytrimble
    I'm looking for a way to automatically (as part of the compilation/build process) generate a "table" of function pointers in C. Specifically, I want to generate an array of structures, something like:

        typedef struct {
            void (*p_func)(void);
            char * funcName;
        } funcRecord;

        /* Automatically generate the lines below: */
        extern void func1(void);
        extern void func2(void);
        /* ... */

        funcRecord funcTable[] = {
            { .p_func = &func1, .funcName = "func1" },
            { .p_func = &func2, .funcName = "func2" }
            /* ... */
        };
        /* End automatically-generated code. */

    ...where func1 and func2 are defined in other source files. So, given a set of source files, each of which contains a single function that takes no arguments and returns void, how would one automatically (as part of the build process) generate an array like the one above that contains each of the functions from the files? I'd like to be able to add new files and have them automatically inserted into the table when I re-compile. I realize that this probably isn't achievable using the C language or preprocessor alone, so consider any common *nix-style tools fair game (e.g. make, perl, shell scripts (if you have to)).

    But Why? You're probably wondering why anyone would want to do this. I'm creating a small test framework for a library of common mathematical routines. Under this framework, there will be many small "test cases," each of which has only a few lines of code that will exercise each math function. I'd like each test case to live in its own source file as a short function. All of the test cases will get built into a single executable, and the test case(s) to be run can be specified on the command line when invoking the executable. The main() function will search through the table and, if it finds a match, jump to the test case function. Automating the process of building up the "catalog" of test cases ensures that test cases don't get left out (for instance, because someone forgets to add one to the table) and makes it very simple for maintainers to add new test cases in the future (just create a new source file in the correct directory, for instance). Hopefully someone out there has done something like this before. Thanks, StackOverflow community!
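
    A rough shell sketch of one build-time approach (untested): assume each test lives in tests/*.c and defines a single void function named after its file; the funcRecord.h header is hypothetical and would hold the typedef above. A make rule can run this before compiling:

        #!/bin/sh
        # gen_table.sh - emit funcTable.c from the test sources
        {
            echo '#include "funcRecord.h"'          # hypothetical header with the typedef
            for f in tests/*.c; do
                name=$(basename "$f" .c)
                echo "extern void $name(void);"      # one declaration per test file
            done
            echo 'funcRecord funcTable[] = {'
            for f in tests/*.c; do
                name=$(basename "$f" .c)
                echo "    { .p_func = &$name, .funcName = \"$name\" },"
            done
            echo '    { .p_func = 0, .funcName = 0 }' # sentinel so main() can stop scanning
            echo '};'
        } > funcTable.c

    Since the table is derived purely from the directory listing, dropping a new file into tests/ and re-running make is enough to get it registered.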


  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.)

    I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view, and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of hundreds of thousands of words.

    In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of:

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz') ON id=textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.:

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited-resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
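
    For comparison, the Core Data version of the prefix query would look roughly like this -- a sketch only, assuming a Word entity with a string attribute named word (marked indexed in the model):

        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        request.entity = [NSEntityDescription entityForName:@"Word"
                                     inManagedObjectContext:context];
        // BEGINSWITH plays the role of the BETWEEN 'foo' AND 'fooz' trick above
        request.predicate = [NSPredicate predicateWithFormat:@"word BEGINSWITH %@",
                                                             searchText];
        request.fetchLimit = 50; // same role as LIMIT 50 in the hand-written SQL
        NSError *error = nil;
        NSArray *matches = [context executeFetchRequest:request error:&error];

    Whether the SQLite store turns BEGINSWITH into the same indexed range scan as the prototype is exactly the open question in the post; measuring with the SQL debug flag enabled would settle it.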


  • How to detect page zoom level in all modern browsers?

    - by understack
    How can I detect the page zoom level in all modern browsers? While this thread tells how to do it in IE7 and IE8, I can't find a good solution for FF, Safari and Chrome.

    For FF, one of the suggested solutions relies on FF storing the page zoom level for the future. So on first page load, would I be able to get the zoom level? Somewhere I read it only works if you change the zoom level on that page after it's loaded.

    Is there a way to trap a 'zoom' event?

    I need this because some of my calculations are based on the number of pixels, and they change on zoom. Thanks.

    Modified sample given by tfl. This alerts different heights based on zoom level:

        <html>
        <head>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js" type="text/javascript"></script>
            <title></title>
        </head>
        <body>
            <div id="xy" style="border:1px solid #f00; width:100px;">
                Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque
                sollicitudin tortor in lacus tincidunt volutpat. Integer dignissim
                imperdiet mollis. Suspendisse quis tortor velit, placerat tempor neque.
                Cum sociis natoque penatibus et magnis dis parturient montes, nascetur
                ridiculus mus. Praesent bibendum auctor lorem vitae tempor. Nullam
                condimentum aliquam elementum. Nullam egestas gravida elementum.
                Maecenas mattis molestie nisl sit amet vehicula. Donec semper tristique
                blandit. Vestibulum adipiscing placerat mollis.
            </div>
            <div>
                <button onclick="alert($('#xy').height());">Show</button>
            </div>
        </body>
        </html>


  • LINQ InsertOnSubmit Required Fields needed for debugging

    - by Derek Hunziker
    Hi All, I've been using the ADO.NET Strongly-Typed DataSet model for about 2 years now for handling CRUD and stored procedure executions. This past year I built my first MVC app and I really enjoyed the ease and flexibility of LINQ. Perhaps the biggest selling point for me was that with LINQ I didn't have to create "Insert" stored procedures that would return the SCOPE_IDENTITY anymore (the auto-generated insert statements in the DataSet model were not capable of this without modification). Currently, I'm using LINQ with ASP.NET 3.5 WebForms. My inserts look like this:

        ProductsDataContext dc = new ProductsDataContext();
        product p = new product
        {
            Title = "New Product",
            Price = 59.99,
            Archived = false
        };
        dc.products.InsertOnSubmit(p);
        dc.SubmitChanges();
        int productId = p.Id;

    So, this product example is pretty basic, right, and in the future, I'll probably be adding more fields to the database such as "InStock", "Quantity", etc. The way I understand it, I will need to add those fields to the database table and then delete and re-add the tables to the LINQ to SQL class design view in order to refresh the DataContext. Does that sound right?

    The problem is that any new fields that are non-null are NOT caught by the ASP.NET build process. For example, if I added a non-null field "Quantity" to the database, the code above would still build. In the DataSet model, the stored procedure method would accept a certain number of parameters and would warn me that my insert would fail if I didn't include a quantity value. The same goes for LINQ stored procedure methods; however, to my knowledge, LINQ doesn't offer a way to auto-generate the insert statements, and that means I'm back to where I started.

    The bottom line is, if I use insert statements like the one above and I add a non-null field to my database, it would break my app in about 10-20 places and there would be no way for me to detect it. Is my only option to do a solution-wide search for the keyword "products.InsertOnSubmit" and make sure the new field is getting assigned? Is there a better way? Thanks!
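
    One hedged idea for making the compiler flag those 10-20 places (a sketch, not the only approach): since LINQ to SQL entity classes are generated as partial classes, you can add your own constructor that demands every non-nullable column. The Quantity parameter below stands in for the hypothetical new field:

        // Lives in your own file; the designer-generated half stays untouched.
        public partial class product
        {
            public product(string title, double price, bool archived, int quantity)
                : this() // chain to the generated parameterless constructor
            {
                Title = title;
                Price = price;
                Archived = archived;
                Quantity = quantity; // hypothetical new non-null column
            }
        }

    If all inserts use new product("New Product", 59.99, false, 10) instead of object initializers, then adding a column means adding a constructor parameter, and every call site that forgets it becomes a compile error rather than a runtime failure.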


  • Soon to be PhD in Computer Science - Which Path to Follow?

    - by mttr
    I am going to submit my PhD thesis within the next six months. My PhD is on managing the availability of large-scale distributed systems, so I have some experience actually building non-trivial systems (plus I have four years' experience working as a programmer). I am now trying to figure out what I should do following the PhD.

    I enjoy research (a quick definition: identify a problem, come up with a solution, ask interesting questions, find ways to answer them, build a system, experiment, contribute some new knowledge, and publish). I also like teaching and supervising students. It would seem that a career in academia is the ideal thing to do (I could work on non-trivial problems and contribute something of use to some or more people). However, a career in academia has two significant drawbacks. First, it can be difficult to gain access to real systems with real users which then display real problems. This creates the danger that you do work that seems important (to you and maybe to some of your colleagues), but is not really relevant to anything or anyone. Second, the pay is pretty sad. Apparently, you have to sacrifice this for the privilege of doing research.

    I enjoy programming, but I don't just want to hack on some web-based system for the rest of my life. That is, working in IT for a bank is not a future I see myself enjoying. I want to work on interesting problems (that's difficult to define clearly): things where you don't know how to start, that take some time to figure out and attack, that require a rigorous approach to demonstrate that the problem has been solved, and problems that need a solution in the real world.

    Given the experience of people on Stack Overflow, what do you think suitable options are, and why (or alternatively, what gaps in my thinking does the above reveal)? Is industrial research (e.g. IBM Research, Microsoft Research) the only alternative avenue to a career in academia? What other areas, companies, occupations, etc. could provide me with stimulating, inspiring work? In which regions or countries am I most likely to find such work? Please share your experience.


  • PHP - static DB class vs DB singleton object

    - by Marco Demaio
    I don't want to start a discussion about singleton vs. static vs. global, etc. I have read dozens of questions about it on SO, but I couldn't come up with an answer to this SPECIFIC question, so I hope someone can now enlighten me by answering this question with one (or more) real, simple EXAMPLES, not theoretical discussions.

    In my app I have the typical DB class needed to perform tasks on the DB without having to write mysql_connect/mysql_select_db/mysql... everywhere in the code (moreover, in the future I might decide to use another type of DB engine in place of MySQL, so obviously I need an abstraction class).

    I could write the class either as a static class:

        class DB {
            private static $connection = FALSE; // connection to be opened

            // DB connection values
            private static $server = NULL;
            private static $usr = NULL;
            private static $psw = NULL;
            private static $name = NULL;

            public static function init($db_server, $db_usr, $db_psw, $db_name) {
                // simply stores connection values, without opening connection
            }

            public static function query($query_string) {
                // performs query over already opened connection; if not open, opens connection first
            }
            ...
        }

    or as a singleton class:

        class DBSingleton {
            private static $inst = NULL;
            private $connection = FALSE; // connection to be opened

            // DB connection values
            private $server = NULL;
            private $usr = NULL;
            private $psw = NULL;
            private $name = NULL;

            public static function getInstance($db_server, $db_usr, $db_psw, $db_name) {
                // simply stores connection values, without opening connection
                if (self::$inst === NULL)
                    self::$inst = new DBSingleton();
                return self::$inst;
            }

            private function __construct() ...

            public function query($query_string) {
                // performs query over already opened connection; if connection is not open, opens it first
            }
            ...
        }

    Then, later in my app, if I want to query the DB I could do:

        // Performing query using static DB object
        DB::init(HOST, USR, PSW, DB_NAME);
        DB::query("SELECT...");

        // Performing query using DB singleton
        $temp = DBSingleton::getInstance(HOST, USR, PSW, DB_NAME);
        $temp->query("SELECT...");

    My simple brain sees only one advantage of the singleton: it avoids declaring each method of the class as 'static'. I'm sure some of you could give me an EXAMPLE of a real advantage of a singleton in this specific case. Thanks in advance.
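
    For what it's worth, one concrete difference, sketched below: a singleton is still an ordinary object, so it can satisfy a type hint or implement an interface and be swapped for a stub in tests, which a static class in PHP 5 cannot. This assumes DBSingleton above is amended with implements QueryRunner:

        interface QueryRunner {
            public function query($query_string);
        }

        // Production code depends only on the interface:
        function run_report(QueryRunner $db) {
            return $db->query("SELECT ...");
        }

        // In tests, a stub can stand in for the real DB:
        class FakeDB implements QueryRunner {
            public function query($query_string) {
                return array(); // canned result, no MySQL needed
            }
        }

        run_report(DBSingleton::getInstance(HOST, USR, PSW, DB_NAME)); // real
        run_report(new FakeDB());                                      // test

    The same trick also keeps the door open for the future DB-engine swap mentioned above: a second class implementing QueryRunner can replace the MySQL one without touching callers.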


  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .NET app is a single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs).

    I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. For example, one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which SharePoint (used?) to use, and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then, I've got a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I wouldn't need to worry about denormalizing the document. I do have concerns about the rest of the data around a document, though. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions, I guess:

    - Is a NoSQL DB a good fit?
    - Is MongoDB the best fit for ASP.NET (I saw Raven and Velocity, but they're still kind of beta)?
    - Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would be for when a person clicks "View History" against a document.
    - How would performance compare between the two (NoSQL DB vs. denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
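
    To make the comparison concrete, an invoice in a document store could look roughly like this (all field names illustrative only):

        {
          "_id": "doc-000123",
          "tenant": "acme",
          "docType": "invoice",
          "fields": {
            "customerId": "C-8842",
            "invoiceNumber": "INV-2010-0415",
            "invoiceDate": "2010-04-15"
          },
          "blobKey": "tiffs/acme/doc-000123.tif"
        }

    Each tenant's document types then carry exactly their own index fields, with no int_field_N columns or mapping table, and the _id could double as the key stored alongside the action history rows kept in MSSQL.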


  • Can Hudson branch promotion get based on project stability?

    - by Wayne
    Hudson CI server displays stability "weather," which is cool. And it allows one project build to kick off based on the successful build of another. However, how can you make that secondary project additionally dependent on the stability of multiple builds of the first project?

    Specifically, project "stable_deploy" needs to kick off to promote a version to "stable" only if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times -- in a row. Until then, it's still in integration mode.

    IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may cover only the 2 most recent builds; the builds prior to that may be an earlier version. We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion.

    Is there a better way to track a version release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except by creating new pipeline projects for each version, which is a real hassle.

    Here's some background detail: The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and auto tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, plus has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch.

    Our Hudson projects look like this:

    - test - Only for testing a build; zero user visibility.
    - integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing.
    - integrate - Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
    - stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
    - stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate". And so on... it continues with "release candidate" and then "release".


  • Register all GUI components as Observers or pass current object to next object as a constructor argument?

    - by Jack
    First, I'd like to say that I think this is a common issue and there may be a simple or common solution that I am unaware of. Many have probably encountered a similar problem. Thanks for reading.

    I am creating a GUI where each component needs to communicate with (or at least be updated by) multiple other components. Currently, I'm using a singleton class to accomplish this goal. Each GUI component gets the instance of the singleton and registers itself. When updates need to be made, the singleton can call public methods in the registered class. I think this is similar to an Observer pattern, but the singleton has more control. Currently, the program is set up something like this:

        class C1 {
            CommClass cc;
            C1() {
                cc = CommClass.getCommClass();
                cc.registerC1(this);
                C2 c2 = new C2();
            }
        }

        class C2 {
            CommClass cc;
            C2() {
                cc = CommClass.getCommClass();
                cc.registerC2(this);
                C3 c3 = new C3();
            }
        }

        class C3 {
            CommClass cc;
            C3() {
                cc = CommClass.getCommClass();
                cc.registerC3(this);
                C4 c4 = new C4();
            }
        }

    etc.

    Unfortunately, the singleton class keeps growing larger as more communication is required between the components. I was wondering if it's a good idea to instead pass the higher-order GUI components as arguments in the constructors of each GUI component:

        class C1 {
            C1() {
                C2 c2 = new C2(this);
            }
        }

        class C2 {
            C1 c1;
            C2(C1 c1) {
                this.c1 = c1;
                C3 c3 = new C3(c1, this);
            }
        }

        class C3 {
            C1 c1;
            C2 c2;
            C3(C1 c1, C2 c2) {
                this.c1 = c1;
                this.c2 = c2;
                C4 c4 = new C4(c1, c2, this);
            }
        }

    etc.

    The second version relies less on CommClass, but it's still very messy, as the private member variables increase in number and the constructors grow in length. Each class contains GUI components that need to communicate through CommClass, but I can't think of a good way to do it. If this seems strange or horribly inefficient, please describe some method of communication between classes that will continue to work as the project grows. Also, if this doesn't make any sense to anyone, I'll try to give actual code snippets in the future and think of a better way to ask the question. Thanks.
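
    A third option worth considering -- a sketch, with all names illustrative: a small event bus, so components subscribe to named events instead of registering themselves with an ever-growing CommClass or holding references to each other:

        import java.util.*;

        class EventBus {
            interface Listener {
                void onEvent(String event, Object payload);
            }

            private final Map<String, List<Listener>> listeners =
                    new HashMap<String, List<Listener>>();

            // A component interested in "profile-updated" registers once here.
            void subscribe(String event, Listener l) {
                List<Listener> list = listeners.get(event);
                if (list == null) {
                    list = new ArrayList<Listener>();
                    listeners.put(event, list);
                }
                list.add(l);
            }

            // The component that changed something fires the event; it never
            // needs to know who is listening.
            void publish(String event, Object payload) {
                List<Listener> list = listeners.get(event);
                if (list == null) return;
                for (Listener l : list) {
                    l.onEvent(event, payload);
                }
            }
        }

    With this shape, adding a new component means one subscribe call rather than another registerCN method, so the central class stops growing with every component.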


  • How to actually use Swing Application Framework?

    - by Joonas Pulakka
    Hello, I'd like to learn how to use the Swing Application Framework effectively. Most of the examples I've found are blog entries that just explain how great it is to extend SingleFrameApplication and override its startup method, but that's about it. Sun's article is almost two years old, as is the project's own introduction, and there has apparently been some evolution since then. Are there any recent and thorough tutorials/HOWTOs available anywhere? There is JavaDoc of course, but it's hard to get the big picture from there. Any pointers are appreciated.

    Update: I realized that there's a mailing list archive at the project's site. While somewhat clumsy (compared to StackOverflow ;) it seems to be quite active. Still, it's a pity that there are no real tutorials anywhere. The information is scattered here and there.

    Update 2: Let me clarify - I'm not having trouble using Swing (the widget toolkit) itself; I'm talking about its Application Framework, which is supposed to ease things like the application lifecycle (startup, exit and whatever happens between them), action management, etc. - that is, things that most Swing applications will need. It's cool to get such a framework as a standard part of Java. The only problem is learning how it's intended to be used.

    Update 3: For the interested, there was just some discussion at the project's forum regarding the current state and future of JSR 296. Shortly: the current version 1.03 is considered quite usable, but the API is not stable and it will change for the final version in Java 7. The package name will also change, so Java 7 will not break current applications made on SAF.

    Update 4: Karsten Lentzsch stated at the above-mentioned forum: "I doubt that it can be included in Java 7; and I'll vote against it." I would rather not question the sincerity of this great guru, and it's certainly wise not to let anything flawed slip into the core JDK, but frankly it's a strange situation - he is the author of the JGoodies Swing Suite, which is partly a commercial competitor of JSR 296, and he is sitting on the committee that will decide whether this JSR will be included in standard Java. It was the same thing with JSR 295 Beans Binding, which I wrote about earlier. Given the current state of SAF, I think the best solution is to wrap the current implementation in a "homebrew" framework, which can then accommodate possible changes to the existing API.


  • github like workflow on private server over ssh

    - by Jesse
    I have a server (available via ssh) on the internet that my friend and I use for working on projects together. We have started using git for source control. Our setup currently is as follows:

    - Friend created a repository on the server with git init, named project.friend.git.
    - I cloned project.friend.git on the server to project.jesse.git.
    - I then cloned project.jesse.git on the server to my local machine using git clone jesse@server:/git_repos/project.jesse.git.
    - I work on my local machine and commit to the local machine. When I want to push my changes to project.jesse.git on the server, I use git push origin master.
    - My friend is working on project.friend.git. When I want to get his changes, I do git pull jesse@server:/git_repos/project.friend.git.

    Everything seems to be working fine; however, I am now getting the following warning when I do git push origin master:

        localpc:project.jesse jesse$ git push origin master
        Counting objects: 100, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (76/76), done.
        Writing objects: 100% (76/76), 15.98 KiB, done.
        Total 76 (delta 50), reused 0 (delta 0)
        warning: updating the current branch
        warning: Updating the currently checked out branch may cause confusion,
        warning: as the index and work tree do not reflect changes that are in HEAD.
        warning: As a result, you may see the changes you just pushed into it
        warning: reverted when you run 'git diff' over there, and you may want
        warning: to run 'git reset --hard' before starting to work to recover.
        warning:
        warning: You can set 'receive.denyCurrentBranch' configuration variable to
        warning: 'refuse' in the remote repository to forbid pushing into its
        warning: current branch.
        warning: To allow pushing into the current branch, you can set it to 'ignore';
        warning: but this is not recommended unless you arranged to update its work
        warning: tree to match what you pushed in some other way.
        warning:
        warning: To squelch this message, you can set it to 'warn'.
        warning:
        warning: Note that the default will change in a future version of git
        warning: to refuse updating the current branch unless you have the
        warning: configuration variable set to either 'ignore' or 'warn'.
        To jesse@server:/git_repos/project.jesse.git
           c455cb7..e9ec677  master -> master

    Is this warning anything I need to be worried about? Like I said, everything seems to be working. My friend is able to pull my changes in from my branch. I have the clone on the server so he can access it, since he does not have access to my local machine. Is there something that could be done better? Thanks!
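
    The warning is about pushing into a repository that has a checked-out working tree. The usual way to avoid it is to push into a bare repository instead (one with no working tree). A possible rearrangement, sketched and untested, with paths following the ones above:

        # On the server: make a bare copy of the repo that receives pushes
        git clone --bare /git_repos/project.jesse.git /git_repos/project.jesse.bare.git

        # On the local machine: point origin at the bare copy
        # (git remote set-url needs git >= 1.7; editing .git/config works too)
        git remote set-url origin jesse@server:/git_repos/project.jesse.bare.git
        git push origin master

    With a bare repo on the receiving end there is no working tree to get out of sync with HEAD, so the warning goes away and nobody risks a confusing git diff on the server.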


  • Which HTTP redirect status code is best for this REST API scenario?

    - by Aseem Kishore
    I'm working on a REST API. The key objects ("nouns") are "items", and each item has a unique ID. E.g. to get info on the item with ID foo:

        GET http://api.example.com/v1/item/foo

    New items can be created, but the client doesn't get to pick the ID. Instead, the client sends some info that represents that item. So to create a new item:

        POST http://api.example.com/v1/item/
        hello=world&hokey=pokey

    With that command, the server checks if we already have an item for the info hello=world&hokey=pokey. So there are two cases here.

    Case 1: the item doesn't exist; it's created. This case is easy:

        201 Created
        Location: http://api.example.com/v1/item/bar

    Case 2: the item already exists. Here's where I'm struggling... I'm not sure which is the best redirect code to use: 301 Moved Permanently? 302 Found? 303 See Other? 307 Temporary Redirect? In each case, the response would carry:

        Location: http://api.example.com/v1/item/foo

    I've studied the Wikipedia descriptions and RFC 2616, and none of these seem perfect. Here are the specific characteristics I'm looking for in this case:

    - The redirect is permanent, as the ID will never change. So for efficiency, the client can and should make all future requests to the ID endpoint directly. This suggests 301, as the other three are meant to be temporary.
    - The redirect should use GET, even though this request is POST. This suggests 303, as all others are technically supposed to re-use the POST method. In practice, browsers will use GET for 301 and 302, but this is a REST API, not a website meant to be used by regular users in browsers.
    - It should be broadly usable and easy to play with. Specifically, 303 is HTTP/1.1, whereas 301 and 302 are HTTP/1.0. I'm not sure how much of an issue this is.

    At this point, I'm leaning towards 303 just to be semantically correct (use GET, don't re-POST) and just suck it up on the "temporary" part. But I'm not sure if 302 would be better, since in practice it's been the same behavior as 303, but without requiring HTTP/1.1. And if I go down that line, I wonder if 301 is even better for the same reason plus the "permanent" part. Thoughts appreciated!
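
    For illustration, the duplicate-POST response under the 303 option would come back roughly as:

        HTTP/1.1 303 See Other
        Location: http://api.example.com/v1/item/foo

    and a well-behaved client would follow up with a plain GET on that Location, which is exactly the "don't re-POST" behavior described above.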


  • Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kinds of problems?

    To clarify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an administrator or not. However, after a while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem. In the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I can not use the seeds file, but rather create the user types in the migration (also considered bad practice).

    Here comes some fictional code representing the situation:

        class CreateUserTypes < ActiveRecord::Migration
          def self.up
            create_table :user_types do |t|
              t.string :name, :null => false, :unique => true
            end

            # Create basic types (can not put in seed, because of future migration dependency)
            UserType.create!(:name => "BASIC")
            UserType.create!(:name => "MODERATOR")
            UserType.create!(:name => "ADMINISTRATOR")
          end

          def self.down
            drop_table :user_types
          end
        end

        class AddTypeIdToUsers < ActiveRecord::Migration
          def self.up
            add_column :users, :type_id, :integer

            # Determine type via the admin flag
            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each { |u| u.update_attribute(:type_id, (u.admin?) ? admin.id : basic.id) }

            # Remove the admin flag
            remove_column :users, :admin

            # Add foreign key
            execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)"
          end

          def self.down
            # Re-add the admin flag
            add_column :users, :admin, :boolean, :default => false

            # Reset the admin flag (this is the problematic update code)
            admin = UserType.find_by_name("ADMINISTRATOR")
            execute "update users set admin=true where type_id=#{admin.id}"

            # Remove foreign key constraint
            execute "alter table users drop foreign key fk_user_type_id"

            # Drop the type_id column
            remove_column :users, :type_id
          end
        end

    As you can see, there are two problematic parts. First, the row creation in the first migration, which is necessary if I would like to run all migrations in a row; second, the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?
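
    One pattern that at least decouples such data migrations from the evolving application classes -- a hedged sketch, not the only answer: define throwaway model classes inside the migration itself, so the mapping code keeps running even after app/models changes shape (or after User gains validations that would reject old rows):

        class AddTypeIdToUsers < ActiveRecord::Migration
          # Migration-local stand-ins; their table names are inferred from the
          # class names, and they see the schema as it exists at this step.
          class UserType < ActiveRecord::Base; end
          class User < ActiveRecord::Base; end

          def self.up
            add_column :users, :type_id, :integer
            User.reset_column_information # pick up the freshly added column

            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each { |u| u.update_attribute(:type_id, u.admin? ? admin.id : basic.id) }
            remove_column :users, :admin
          end
        end

    The seed-row problem in the first migration is sometimes handled the same way, keeping seeds.rb only for data that no later migration depends on.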


  • Rails "NoMethodError" with sub-resources

    - by Tchock
    Hi. I'm a newbie Rails developer who is getting the following error when trying to access the 'new' action on my CityController:

        undefined method `cities_path' for #<#<Class:0x104608c18>:0x104606f08>

    Extracted source (around line #2):

        1: <h1>New City</h1>
        2: <%= form_for(@city) do |f| %>
        3:   <%= f.error_messages %>
        4:
        5:   <div class="field">

    As some background, I have a State model with many Cities. I'm getting this error after clicking on the following link coming from a State show page:

        <p>Add a city: <%= link_to "Add city", new_state_city_path(@state) %></p>

    When I run 'rake routes' it says this is a legit route...

    For more background, here is the CityController 'new' action:

        def new
          @city = City.new

          respond_to do |format|
            format.html # new.html.erb
            format.xml  { render :xml => @city }
          end
        end

    Here is the (complete) form in the view:

        <%= form_for(@city) do |f| %>
          <%= f.error_messages %>

          <div class="field">
            <%= f.label :name %><br />
            <%= f.text_field :name %>
          </div>
          <div class="actions">
            <%= f.submit %>
          </div>
        <% end %>

    This initially made me think that it's a resources/routes issue, since it came back with a mention of 'cities_path' (in fact, that's what another person posting to Stack Overflow had wrong: http://stackoverflow.com/questions/845315/rails-error-nomethoderror-my-first-ruby-app). However, that doesn't seem to be the case from what I can see. Here is how my resources look in my routes file:

        resources :states do
          resources :cities
        end

    I can get it working when they are not sub-resources, but I really need to keep them as sub-resources for my future plans with the app. Any help would be very much appreciated, since I've been racking my brains on this for more hours than I would care to admit... Thanks! (Not sure this matters at all, but I'm running the very latest version of Rails 3 beta2.)
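
    In case it helps a future reader: with nested resources like these, form_for generally needs both objects so that it resolves the nested helper (state_cities_path) instead of the non-existent cities_path. A hedged sketch -- the @state lookup is assumed, since it isn't shown in the original action:

        # CityController#new - load the parent state first
        def new
          @state = State.find(params[:state_id])
          @city = @state.cities.build

          respond_to do |format|
            format.html # new.html.erb
            format.xml  { render :xml => @city }
          end
        end

    and in the view, <%= form_for([@state, @city]) do |f| %> so the form posts to the nested route.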


  • BASH, multiple arrays and a loop.

    - by S1syphus
    At work, we have 7 or 8 hard drives we dispatch over the country, each with unique labels which are not sequential. Ideally, drives are plugged into our desktop, and each then gets the folders from the server that correspond to the drive name. Sometimes only one hard drive gets plugged in, sometimes multiple, and possibly in the future more will be added. Each mounts to /Volumes/ under its identifier; so for example /Volumes/f00, where f00 is the identifier.

    What I want to happen: scan the volumes to see if any of the drives are plugged in, then check the server to see if the corresponding folder exists, and if it does, copy the folder and its subfolders recursively. Here is what I have so far; it checks if the drive exists in /Volumes:

        #!/bin/sh
        #Declare drives in the array
        ARRAY=( foo bar long )

        #Get the drives from the array
        DRIVES=${#ARRAY[@]}

        #Define base dir to check
        BaseDir="/Volumes"

        #Define shared server fold on local mount points
        #I plan to use AFP eventually, but for the sake of ease
        #using a local mount.
        ServerMount="BigBlue"

        #Define folder name for where files are to come from
        Dispatch="File-Dispatch"

        dir="$BaseDir/${ARRAY[${i}]}"

        #Loop through each item in the array and check if exists on /Volumes
        for (( i=0;i<$DRIVES;i++)); do
            dir="$BaseDir/${ARRAY[${i}]}"
            if [ -d "$dir" ]; then
                echo "$dir exists, you win."
            else
                echo "$dir is not attached."
            fi
        done

    What I can't figure out is how to check the volumes for the server while looping through the hard drive mount points. So I could do something like:

        #!/bin/sh
        #Declare drives, and folder location in arrays
        ARRAY=( foo bar long )
        ARRAY1=($(ls ""$BaseDir"/"$ServerMount"/"$Dispatch""))

        #Get the drives from the array
        DRIVES=${#ARRAY[@]}
        SERVERFOLDER=${#ARRAY1[@]}

        #Define base dir to check
        BaseDir="/Volumes"

        #Define shared server fold on local mount points
        ServerMount="BigBlue

        #Define folder name for where files are to come from
        Dispatch="File-Dispatch"

        dir="$BaseDir/${ARRAY[${i}]}"

        #List the contents from server directory into array
        ARRAY1=($(ls ""$BaseDir"/"$ServerMount"/"$Dispatch""))
        echo ${list[@]}

        for (( i=0;i<$DRIVES;i++)); (( i=0;i<$SERVERFOLDER;i++)); do
            dir="$BaseDir/${ARRAY[${i}]}"
            ser="${ARRAY1[${i}]}"
            if [ "$dir" =~ "$sir" ]; then
                cp "$sir" "$dir"
            else
                echo "$dir is not attached."
            fi
        done

    I know that is pretty wrong... well, very, but I hope it gives you the idea of what I am trying to achieve. Any ideas or suggestions? (A corrected sketch follows below.)
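
    A corrected sketch of the second attempt (untested; note bash rather than plain sh, since arrays are a bash feature). Instead of two parallel loops, loop over the drive names once and test both the mount point and the matching server folder:

        #!/bin/bash
        BaseDir="/Volumes"
        ServerMount="BigBlue"
        Dispatch="File-Dispatch"
        DRIVES=( foo bar long )

        for drive in "${DRIVES[@]}"; do
            dir="$BaseDir/$drive"
            src="$BaseDir/$ServerMount/$Dispatch/$drive"
            if [ -d "$dir" ] && [ -d "$src" ]; then
                # copy the server folder and everything under it onto the drive
                cp -R "$src" "$dir"
            else
                echo "$drive: drive not attached or no matching server folder."
            fi
        done

    This sidesteps the broken double for-loop and the $ser/$sir variable mismatch entirely: since the server folder is named after the drive, its path can be built directly rather than matched against an ls listing.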


  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example:

    - Dependency Injection (Spring, etc.)
    - MVC (Struts, ASP.NET MVC)
    - ORMs (LINQ to SQL, Hibernate)
    - Agile Software Development

    These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not?

    EDIT: It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective on what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers.

    To answer a few questions: the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news.

    In regards to the "is this a real question" debate, I can see that it obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that they don't. On the other hand, I have seen a lot of comments saying what a great question it is.

    Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems, in many SOers' opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!


  • MySQL table organization and optimization (Rails)

    - by aguynamedloren
    I've been learning Ruby on Rails over the past few months with no prior programming experience. Lately, I've been thinking about database optimization and table organization. I know there are great books on the subject, but I typically learn by example / as I go.

    Here's a hypothetical situation: Let's say I am building a social network for a niche community with 250,000 members (users). The users have the ability to attend events. Let's say there are 50,000 past/present/future events. Much like Facebook events, a user can attend any number of events, and an event can have any number of attendees.

    In the database, there would be a table for users and a table for events. Somehow I would have to create an association between the users and events. I could create an "events" column in the users table such that each user row would contain a hash of event IDs, or I could create an "attendees" column in the events table such that each event row would contain a hash of user IDs. Neither of these solutions seems ideal, however. On a user's profile page, I want to display the list of events they are associated with, which would require scanning the 50,000 event rows for the user ID of said user if I include an "attendees" column in the events table. Likewise, on an event page, I want to display a list of attendees for the event, which would require scanning the 250,000 user rows for the event ID of said event if I include an "events" column in the users table. Option 3 would be to create a third table that contains the attendee information for each and every event - but I don't see how this would solve any problems.

    Are these non-issues? Rails makes accessing all of this information easy, but I guess I'm worried about scale. It is entirely possible that I am under-estimating the speed and processing power of modern databases / servers / etc. How long would it take to scan 250,000 user rows for specific event IDs - 10ms? 100ms? 1,000ms? I guess that's not that bad. Am I just over-thinking this?
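
    For what it's worth, the "third table" is in fact the conventional relational answer - a join model - and it addresses exactly the two scans described above, because both lookups become indexed queries against that table instead of scans of users or events. A sketch:

        # attendances table holds just user_id and event_id,
        # with an index on each column.
        class Attendance < ActiveRecord::Base
          belongs_to :user
          belongs_to :event
        end

        class User < ActiveRecord::Base
          has_many :attendances
          has_many :events, :through => :attendances
        end

        class Event < ActiveRecord::Base
          has_many :attendances
          has_many :attendees, :through => :attendances, :source => :user
        end

    With that in place, user.events and event.attendees read straight off the join table's indexes rather than scanning 50,000 or 250,000 rows, which is the part the hash-in-a-column designs can't do.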


  • Permutations of Varying Size

    - by waiwai933
    I'm trying to write a function in PHP that gets all permutations of all possible sizes. I think an example would be the best way to start off:

        $my_array = array(1,1,2,3);

    Possible permutations of varying size:

        1
        1 // * See Note
        2
        3
        1,1
        1,2
        1,3
        // And so forth, for all the sets of size 2
        1,1,2
        1,1,3
        1,2,1
        // And so forth, for all the sets of size 3
        1,1,2,3
        1,1,3,2
        // And so forth, for all the sets of size 4

    Note: I don't care if there's a duplicate or not. For the purposes of this example, all further duplicates have been omitted.

    What I have so far in PHP:

        function getPermutations($my_array){
            $permutation_length = 1;
            $keep_going = true;
            while($keep_going){
                while($there_are_still_permutations_with_this_length){
                    // Generate the next permutation and return it into an array.
                    // Of course, the actual important part of the code is what
                    // I'm having trouble with.
                }
                $permutation_length++;
                if($permutation_length > count($my_array)){
                    $keep_going = false;
                }
                else{
                    $keep_going = true;
                }
            }
            return $return_array;
        }

    The closest thing I can think of is shuffling the array, picking the first n elements, seeing if the result is already in the results array, adding it if it's not, and stopping when there are mathematically no more possible permutations for that length. But it's ugly and resource-inefficient. Any pseudocode algorithms would be greatly appreciated.

    Also, for super-duper (worthless) bonus points, is there a way to get just 1 permutation with the function, but make it so that it doesn't have to recalculate all previous permutations to get the next? For example, I pass it a parameter 3, which means it's already done 3 permutations, and it just generates number 4 without redoing the previous 3? (Passing it the parameter is not necessary; it could keep track in a global or static.) The reason I ask is that as the array grows, so does the number of possible combinations. Suffice it to say that one small data set with only a dozen elements grows quickly into the trillions of possible combinations, and I don't want to task PHP with holding trillions of permutations in its memory at once.
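
    A rough recursive sketch of the brute-force version (duplicates not filtered, matching the note above; it still holds all results in memory, so it doesn't answer the bonus question):

        function permutationsOfSize($arr, $size) {
            if ($size == 0) return array(array());
            $result = array();
            foreach ($arr as $i => $item) {
                $rest = $arr;                 // PHP copies arrays on assignment
                array_splice($rest, $i, 1);   // remove the chosen element
                foreach (permutationsOfSize($rest, $size - 1) as $perm) {
                    array_unshift($perm, $item);
                    $result[] = $perm;
                }
            }
            return $result;
        }

        function getPermutations($my_array) {
            $all = array();
            for ($len = 1; $len <= count($my_array); $len++) {
                $all = array_merge($all, permutationsOfSize($my_array, $len));
            }
            return $all;
        }

    For the lazy "give me just permutation N" variant, the usual trick is to index permutations in lexicographic order and compute the Nth one directly (the factorial number system), so nothing before it has to be generated or stored.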


  • get_or_create generic relations in Django & python debugging in general

    - by rabidpebble
    I ran the code to create the generically related objects from this demo: http://www.djangoproject.com/documentation/models/generic_relations/

    Everything is good initially:

        >>> bacon.tags.create(tag="fatty")
        <TaggedItem: fatty>
        >>> tag, newtag = bacon.tags.get_or_create(tag="fatty")
        >>> tag
        <TaggedItem: fatty>
        >>> newtag
        False

    But then the use case that I'm interested in for my app:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome")
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 123, in get_or_create
            return self.get_query_set().get_or_create(**kwargs)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 343, in get_or_create
            raise e
        IntegrityError: app_taggeditem.content_type_id may not be NULL

    I tried a bunch of random things after looking at other code:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem)
        ValueError: Cannot assign "<class 'generics.app.models.TaggedItem'>": "TaggedItem.content_type" must be a "ContentType" instance.

    or:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem.content_type)
        InterfaceError: Error binding parameter 3 - probably unsupported type.

    etc. I'm sure somebody can give me the correct syntax, but the real problem here is that I have no idea what is going on. I have developed in strongly typed languages for over ten years (x86 assembly, C++ and C#) but am new to Python. I find it really difficult to follow what is going on in Python when things like this break. In the languages I mentioned previously it's fairly straightforward to figure things like this out -- check the method signature and check your parameters. Looking at the Django documentation for half an hour left me just as lost. Looking at the source for get_or_create(self, **kwargs) didn't help either, since there is no method signature and the code appears very generic. A next step would be to debug the method and try to figure out what is happening, but this seems a bit extreme... I seem to be missing some fundamental operating principle here... what is it? How do I resolve issues like this on my own in the future?
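
    For the record, the workaround often suggested for Django of this era (hedged; the reverse generic manager apparently fails to fill in content_type for the create half of get_or_create): call get_or_create on the model's own manager and pass the generic fields explicitly:

        from django.contrib.contenttypes.models import ContentType

        # Resolve the ContentType row for bacon's model, then be explicit:
        ct = ContentType.objects.get_for_model(bacon)
        tag, created = TaggedItem.objects.get_or_create(
            content_type=ct, object_id=bacon.pk, tag="wholesome")

    This also explains the two failed attempts above: content_type must be a ContentType instance (a row in the content types table), not the TaggedItem class itself or its field descriptor.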


  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why Start Now?

    - The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs.
    - I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM.
    - Competitive advantage - some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today.
    - It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why Wait?

    - Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA?
    - Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations

    Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.


  • Complex relationship between tables in NHibernate

    - by Ilya Kogan
    Hi all, I'm writing a Fluent NHibernate mapping for a legacy Oracle database. The challenge is that the tables have composite primary keys. If I were at total freedom, I would redesign the relationships and auto-generate primary keys, but other applications must write to the same database and read from it, so I cannot do that.

    These are the two tables I'll focus on. Example data:

    Trips table:

        1, 10:00, 11:00 ...
        1, 12:00, 15:00 ...
        1, 16:00, 19:00 ...
        2, 12:00, 13:00 ...
        3, 9:00, 18:00 ...

    Faults table:

        1, 13:00 ...
        1, 23:00 ...
        2, 12:30 ...

    In this case, vehicle 1 made three trips and has two faults. The first fault happened during the second trip, and the second fault happened while the vehicle was resting. Vehicle 2 had one trip, during which a fault happened.

    Constraints: trips of the same vehicle never overlap. So the tables have an optional one-to-many relationship, because every fault either happens during a trip or it doesn't. If I wanted to join them in SQL, I would write:

        select ... from Faults
        left outer join Trips
          on Faults.VehicleId = Trips.VehicleId
         and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime

    and then I'd get a dataset where every fault appears exactly once (one-to-many, as I said). Note that there is no Vehicles table, and I don't need one. But I did create a view that contains all VehicleIds from both tables, so I can use it as a junction table.

    What am I actually looking for? The tables are huge because they cover years of data, and each time I only need to fetch a range of a few hours. So I need a mapping and a criteria that will run something like the following SQL underneath:

        select ... from Faults
        left outer join Trips
          on Faults.VehicleId = Trips.VehicleId
         and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime
        where Faults.FaultTime between :p0 and :p1

    Do you have any ideas how to achieve it?

    Note 1: Currently the application shouldn't write to the database, so persistence is not a must, although if the mapping supports persistence, it may help at some point in the future.

    Note 2: I know it's a tough one, so if you give me a great answer, you will be properly rewarded :)

    Thank you for reading this long question, and now I only hope for the best :)


  • Silverlight with MVVM Inheritance: ModelView and View matching the Model

    - by moonground.de
    Hello Stackoverflowers! :) Today I have a special question on Silverlight (4 RC) MVVM and inheritance concepts, and I'm looking for a best-practice solution... I think that I understand the basic idea and concepts behind MVVM. My Model doesn't know anything about the ViewModel, just as the ViewModel itself doesn't know about the View. The ViewModel knows the Model, and the Views know the ViewModels.

    Imagine the following basic (example) scenario (I'm trying to keep everything short and simple):

    My Model contains a ProductBase class with a few basic properties, a SimpleProduct : ProductBase adding a few more properties, and an ExtendedProduct : ProductBase adding other properties. According to this Model, I have several ViewModels, most essentially SimpleProductViewModel : ViewModelBase and ExtendedProductViewModel : ViewModelBase. Last but not least, matching Views: SimpleProductView and ExtendedProductView. In the future, I might add many product types (and matching Views + VMs).

    1. How do I know which ViewModel to create when receiving a Model collection?

    After calling my data provider method, I end up with a List<ProductBase>. It contains, for example, one SimpleProduct and two ExtendedProducts. How can I transform the results to an ObservableCollection<ViewModelBase> holding the proper ViewModel types (one SimpleProductViewModel and two ExtendedProductViewModels)? I might check for the Model type and construct the ViewModel accordingly, i.e.:

        foreach (ProductBase currentProductBase in resultList)
        {
            if (currentProductBase is SimpleProduct)
                viewModels.Add(
                    new SimpleProductViewModel((SimpleProduct)currentProductBase));
            else if (currentProductBase is ExtendedProduct)
                viewModels.Add(
                    new ExtendedProductViewModel((ExtendedProduct)currentProductBase));
            ...
        }

    ...but I consider this very bad practice, as this code doesn't follow object-oriented design. Providing abstract factory methods would reduce the code to:

        foreach (ProductBase currentProductBase in resultList)
            viewModels.Add(currentProductBase.CreateViewModel());

    and would be perfectly extensible, but since the Model doesn't know the ViewModels, that's not possible. I might bring interfaces into the game here, but I haven't seen such an approach proven yet.

    2. How do I know which View to display when selecting a ViewModel?

    This is pretty much the same problem, but on a higher level. Ending up with the desired ObservableCollection<ViewModelBase> would require the main View to choose a matching View for each ViewModel. In WPF, there is a DataTemplate concept which can supply a View based on a DataType. Unfortunately, this doesn't work in Silverlight, and the only replacement I've found is the ResourceSelector of the SLExtensions toolkit, which is buggy and not satisfying. Beside that, all the problems from question 1 apply as well.

    Do you have some hints or even a solution for the problems I describe, which you hopefully can understand from my explanation? Thank you in advance!
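
    One workable middle ground for question 1 -- a sketch, with names following the post: keep the type-to-ViewModel mapping out of both the Model and the ViewModels by registering factory delegates per model type, so new products mean one new dictionary entry rather than another is-branch:

        var factories = new Dictionary<Type, Func<ProductBase, ViewModelBase>>
        {
            { typeof(SimpleProduct),   m => new SimpleProductViewModel((SimpleProduct)m) },
            { typeof(ExtendedProduct), m => new ExtendedProductViewModel((ExtendedProduct)m) },
        };

        var viewModels = new ObservableCollection<ViewModelBase>();
        foreach (ProductBase p in resultList)
            viewModels.Add(factories[p.GetType()](p));

    The same Type-keyed idea extends to question 2: a dictionary from ViewModel type to a Func producing the matching View can play the role of WPF's typed DataTemplates on Silverlight.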


  • What Can I Do About One Of My Team Members (A Good Friend As Well) Who Lost His Passion?

    - by skyflyer
    It seems this question is not programming related, but there are lots of similar questions here, so please bear with me! By the way, I am a programmer, and my team is in charge of a software project. And SO is the only place which has solved lots of thorny troubles for me! THANK YOU GUYS!

    I joined my company with him years ago. At that time he was quite passionate about his job, which is front-end development. He gave us lots of useful suggestions concerning his work, like design, and I believed he was a smart guy. I believe he still is smart, too, by the way.

    One year later, however, he seemed to lose his passion and started fooling around every day; he did not care about his work any more and produced poor work. Even worse, he literally stopped learning new skills and honing his work-related skills. For me this is horrible: we have to keep abreast of new technology developments, otherwise we will be thrown out.

    Since we were just coworkers, I did not care about it too much, apart from mentioning my thoughts several times. But last month we assembled a new group and were assigned a very important project. And I am the team leader, sadly! My boss gave me a lot of support, and expectations as well. I did a pretty good job before, and I am very optimistic about our future. But as a team, if my team does not work hard, we will be doomed to failure no matter how hard I work and push.

    In order to revitalize his passion, I tried a couple of approaches, like talking to him about my concerns and my boss's anger. I offered him a task which is quite new to him. I even persuaded my boss to give him a new incentive package. But all of them hit a wall. His reaction was that he just did not care. Even worse, he did not want to talk about his situation.

    I want to be hard on him, but since we are friends and coworkers, I really cannot see that working. Even if it would work, I cannot so quickly change myself from friend and coworker into manager. As a novice in management, I am really overwhelmed!

    I do not want to get him fired; we are friends, and I do not want to see him fired as my team member. What can I do? Thank you guys!

