Search Results

Search found 12397 results on 496 pages for 'maybe'.


  • derby + hibernate ConstraintViolationException using manytomany relationships

    - by user364470
    Hi, I'm new to Hibernate + Derby. I've seen this issue mentioned all over Google, but have not seen a proper resolution. The following code works fine with MySQL, but when I try it on Derby I get exceptions. (Each Tag has two sets of files and vice versa - many-to-many.)

    Tags.java

        @Entity @Table(name="TAGS")
        public class Tags implements Serializable {
            @Id @GeneratedValue(strategy=GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(targetEntity=Files.class)
            @ForeignKey(name="USER_TAGS_FILES", inverseName="USER_FILES_TAGS")
            @JoinTable(name="USERTAGS_FILES",
                joinColumns=@JoinColumn(name="TAGS_ID"),
                inverseJoinColumns=@JoinColumn(name="FILES_ID"))
            public Set<data.Files> getUserFiles() { return userFiles; }

            @ManyToMany(mappedBy="autoTags", targetEntity=data.Files.class)
            public Set<data.Files> getAutoFiles() { return autoFiles; }

    Files.java

        @Entity @Table(name="FILES")
        public class Files implements Serializable {
            @Id @GeneratedValue(strategy=GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(mappedBy="userFiles", targetEntity=data.Tags.class)
            public Set getUserTags() { return userTags; }

            @ManyToMany(targetEntity=Tags.class)
            @ForeignKey(name="AUTO_FILES_TAGS", inverseName="AUTO_TAGS_FILES")
            @JoinTable(name="AUTOTAGS_FILES",
                joinColumns=@JoinColumn(name="FILES_ID"),
                inverseJoinColumns=@JoinColumn(name="TAGS_ID"))
            public Set getAutoTags() { return autoTags; }

    I add some data to the DB, but when running over Derby the exceptions below turn up (they don't appear with MySQL).

    Exceptions:

        SEVERE: DELETE on table 'FILES' caused a violation of foreign key constraint 'USER_FILES_TAGS' for key (3). The statement has been rolled back.
        Jun 10, 2010 9:49:52 AM org.hibernate.event.def.AbstractFlushingEventListener performExecutions
        SEVERE: Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: could not delete: [data.Files#3]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2712)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2895)
            at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:97)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:613)
            at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344)
            at $Proxy13.flush(Unknown Source)
            at data.HibernateORM.removeFile(HibernateORM.java:285)
            at data.DataImp.removeFile(DataImp.java:195)
            at booting.DemoBootForTestUntilTestClassesExist.main(DemoBootForTestUntilTestClassesExist.java:62)

    I have never used Derby before, so maybe there is something crucial that I'm missing. 1) What am I doing wrong? 2) Is there any way of cascading properly when I have two many-to-many relationships between the same two classes? Thanks!

    Read the article

  • What's the standard algorithm for syncing two lists of objects?

    - by Oliver Giesen
    I'm pretty sure this must be in some kind of textbook (or more likely in all of them) but I seem to be using the wrong keywords to search for it... :( A common task I'm facing while programming is that I am dealing with lists of objects from different sources which I need to keep in sync somehow. Typically there's some sort of "master list", e.g. returned by some external API, and then a list of objects I create myself, each of which corresponds to an object in the master list. Sometimes the nature of the external API will not allow me to do a live sync: for instance the external list might not implement notifications about items being added or removed, or it might notify me but not give me a reference to the actual item that was added or removed. Furthermore, refreshing the external list might return a completely new set of instances even though they still represent the same information, so simply storing references to the external objects might also not always be feasible. Another characteristic of the problem is that both lists cannot be sorted in any meaningful way. You should also assume that initializing new objects in the "slave list" is expensive, i.e. simply clearing and rebuilding it from scratch is not an option.

    So how would I typically tackle this? What's the name of the algorithm I should google for? In the past I have implemented this in various ways (see below for an example) but it always felt like there should be a cleaner and more efficient way. Here's an example approach:

    1. Iterate over the master list
    2. Look up each item in the "slave list"
    3. Add items that do not yet exist
    4. Somehow keep track of items that already exist in both lists (e.g. by tagging them or keeping yet another list)
    5. When done, iterate once more over the slave list
    6. Remove all objects that have not been tagged (see 4.)

    Update: Thanks for all your responses so far! I will need some time to look at the links. Maybe one more thing worthy of note: in many of the situations where I needed this, the implementation of the "master list" is completely hidden from me. In the most extreme cases the only access I might have to the master list might be a COM interface that exposes nothing but GetFirst-/GetNext-style methods. I'm mentioning this because of the suggestions to either sort the list or to subclass it, both of which are unfortunately not practical in these cases unless I copy the elements into a list of my own, and I don't think that would be very efficient. I also might not have made it clear enough that the elements in the two lists are of different types, i.e. not assignment-compatible: especially, the elements in the master list might be available as interface references only.
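
    One way to code the tag-and-sweep idea from the question, sketched in C# with caller-supplied key selectors and a factory; the generic type and parameter names are illustrative and not from the original post:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ListSynchronizer
        {
            // Brings 'slaves' in line with 'masters' without rebuilding it from scratch.
            public static void Sync<TMaster, TSlave, TKey>(
                IEnumerable<TMaster> masters,
                IList<TSlave> slaves,
                Func<TMaster, TKey> masterKey,
                Func<TSlave, TKey> slaveKey,
                Func<TMaster, TSlave> createSlave)
            {
                // Index the slave list once so lookups are O(1) instead of a linear scan.
                var slavesByKey = slaves.ToDictionary(slaveKey);
                var seen = new HashSet<TKey>();

                foreach (var master in masters)
                {
                    var key = masterKey(master);
                    seen.Add(key);
                    if (!slavesByKey.ContainsKey(key))
                    {
                        var created = createSlave(master);   // item is new in the master list
                        slaves.Add(created);
                        slavesByKey[key] = created;
                    }
                }

                // Sweep: remove slaves whose master no longer exists ("untagged" items).
                for (int i = slaves.Count - 1; i >= 0; i--)
                {
                    if (!seen.Contains(slaveKey(slaves[i])))
                        slaves.RemoveAt(i);
                }
            }
        }

    The dictionary replaces the repeated per-item lookup in step 2; if the only access to the master list is a GetFirst/GetNext-style cursor, the foreach simply becomes a while loop over that cursor.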

    Read the article

  • Which MarkDown (WMD) javascript editor should I use?

    - by Edan Maor
    Background: I'm working on an application which requires user-entered content, and I've decided to use a StackOverflow-style MarkDown editor. After researching this topic for the last few days, I realize there are numerous forks of the base WMD editor, some with a few basic enhancements and some with serious differences from the StackOverflow one. Since this will be the heart of the application, I'd like to start with the best code base I can. I'd be happy if anyone can recommend which one of the many solutions out there best fits my needs. Below are my requirements, plus what I've managed to find already. I'm hoping this question will help me decide which version to go with, and maybe help me discover a port out there that's an even better fit for my needs.

    The requirements for my project:
    - Live preview
    - Multiple editors on the same page (the number is not known in advance, since the user can dynamically add another editing box)
    - Ability to extend with extra buttons (I'd like a button to upload a picture, instead of just adding an img url)
    - Ability to dynamically show/hide the edit box (and only see the preview box)
    - Not an absolute must, but I'd prefer to stick as close to StackOverflow's look and feel as possible, since it's well known
    - Don't know if this matters, but the backend is written in Django

    Editors I've looked at. Here are a few of the code bases I've looked at, with thoughts. Obviously, I might be missing another solution out there.
    - The derobins version. From what I can tell, this is the official StackOverflow version. Seems like it doesn't support multiple editors on one page.
    - JQuery.MarkEdit. Looks very good, but is pretty different from the StackOverflow version.
    - MooWMD. Looks like the winner right now, but I'm a little concerned since it looks less active/hackable than MarkEdit.
    - The wmd-new version. Not sure, looks like an old codebase without much use.
    - The SocialSite branch. Seems like it's not for public use.

    Read the article

  • a question about rails general practice with REST, json, and ajax

    - by Nik
    Hi all, I have this question concerning REST, I think: I have read a few REST tutorials and the feeling I get from them is that each action in a RESTful controller tends to be lean and almost single purpose: index gives off a collection of a model, show gives off one model, edit/new are a prep place for changing/creating a model, update/create change and make a new model, delete removes one model. After reading all these tutorials, REST seems to be a means to create an interface for a model, much like an active-resource type of thing. The mantra seems to be "the controller provides data and data only, and is also pretty convention over configuration, so expect projects_path to return a bunch of projects". I can understand that, and I like the cleanliness. But here's when I run into some trouble in reality in applying these guidelines.

    Say there are three models: Project with attribute title, User with attribute name, and Location with attribute address. Say in views/users/index.html.erb, I want to use Ajax to fetch and display a project in a div#project_display when the user clicks on a project element. I know that I can use views/projects/show.js.rjs like this:

        page.replace_html 'project_display', "#{@project.name}"

    where in the projects_controller.rb:

        def show
          @project = Project.find(params[:id])
          respond_to do |format|
            format.js
            and other formats...
          end
        end

    I have had no problem doing that for a couple of years now. BUT doesn't that mean that my JS response for the project#show action is LOCKED into presenting data to the div#project_display element, showing only whatever that RJS template says it should show? That's very limiting and doesn't sound very "interface" like. I have never used JSON before or much XML, so I thought, maybe the JS response should send back raw stuff, like JSON, and somehow the page on which the Ajax request was called has the instructions on what to do with these raw data. That sounds a lot more flexible, doesn't it? Because look back at that example: what if in views/locations/index.html.erb I want to do the exact same thing, except I want to put the response in div#project_goes_here and the response should be #{project.name}? I know this is a trivial change, but that's the point: the RJS only allows one template at a time. So I think the JSON route is the way to go, but how does the already loaded page, the one that the Ajax call came from, know when or how to "look forward" to incoming data? I read that PrototypeJS has this template thing; I wouldn't mind using it with JSON, but if you can demonstrate this or other means for displaying received-from-ajax data, I am all attention. Thank You

    Read the article

  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation: "To be a component, a class must implement the IComponent interface and provide a basic constructor that requires no parameters or a single parameter of type IContainer." This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering:

    - Service Locator: Don't use dependency injection; instead use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context?

    - Dependency injection via properties: Inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected.

    - Inject dependencies with an Initialize method: This is much like injection via properties, but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assist you with errors when the list changes.

    Any idea what the best practice is here? How do you do it?

    edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification), so I don't think that "you shouldn't inject any services" is a good answer.

    edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.
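
    To make the third alternative concrete, here is a rough C# sketch of a component that stays designable (parameterless constructor) but receives its dependencies through a single Initialize method. INewsService and NewsTicker are made-up names used only for illustration:

        using System;
        using System.ComponentModel;

        // Hypothetical dependency; at design time the component simply runs without it.
        public interface INewsService
        {
            string GetLatestHeadline();
        }

        public class NewsTicker : Component
        {
            private INewsService _newsService;

            // Parameterless constructor keeps the component usable on a designer surface.
            public NewsTicker() { }

            // All runtime dependencies are handed over in one documented place.
            public void Initialize(INewsService newsService)
            {
                _newsService = newsService ?? throw new ArgumentNullException(nameof(newsService));
            }

            public string CurrentHeadline =>
                _newsService == null
                    ? "(design mode - no service injected)"
                    : _newsService.GetLatestHeadline();
        }

    The trade-off is that the compiler enforces the Initialize signature, but nothing stops a caller from forgetting to call it, which is why the property falls back to a harmless default.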

    Read the article

  • Given a trace of packets, how would you group them into flows?

    - by zxcvbnm
    I've tried it these ways so far:

    1) Make a hash with the source IP/port and destination IP/port as keys. Each position in the hash is a list of packets. The hash is then saved in a file, with each flow separated by some special characters/line. Problem: not enough memory for large traces.

    2) Make a hash with the same key as above, but only keep in memory the file handles. Each packet is then put into the hash[key] that points to the right file. Problems: too many flows/files (~200k) and it might run out of memory as well.

    3) Hash the source IP/port and destination IP/port, then put the info inside a file. The difference between 2 and 3 is that here the files are opened and closed for each operation, so I don't have to worry about running out of memory because I opened too many at the same time. Problems: WAY too slow, same number of files as 2, so also impractical.

    4) Make a hash of the source IP/port pairs and then iterate over the whole trace for each flow. Take the packets that are part of that flow and place them into the output file. Problem: suppose I have a 60 MB trace that has 200k flows. This way, I would process, say, a 60 MB file 200k times. Maybe removing the packets as I iterate would make it not so painful, but so far I'm not sure this would be a good solution.

    5) Split them by IP source/destination and then create a single file for each one, separating the flows by special characters. Still too many files (+50k).

    Right now I'm using Ruby to do it, which might've been a bad idea, I guess. Currently I've filtered the traces with tshark so that they only have relevant info, so I can't really make them any smaller. I thought about loading everything in memory as described in 1) using C#/Java/C++, but I was wondering if there wouldn't be a better approach here, especially since I might also run out of memory later on even with a more efficient language if I have to use larger traces. In summary, the problem I'm facing is that I either have too many files or that I run out of memory. I've also tried searching for some tool to filter the info, but I don't think there is one. The ones I've found only return some statistics and wouldn't scan for every flow as I need.
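
    One possible compromise between approaches 2) and 4), sketched in C#: hash each packet's flow key into a fixed, small number of bucket files in one pass, then group each bucket fully in memory in a second pass. The flow-key extraction function and the line-based trace format are assumptions supplied by the caller, not details from the original post:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        static class FlowBucketer
        {
            const int BucketCount = 256;   // bounded number of open file handles

            // Pass 1: append each packet line to one of a fixed number of bucket files,
            // chosen by hashing the flow key (src ip/port + dst ip/port).
            public static void Bucket(IEnumerable<string> packetLines,
                                      Func<string, string> flowKeyOf, string dir)
            {
                var writers = Enumerable.Range(0, BucketCount)
                    .Select(i => new StreamWriter(Path.Combine(dir, $"bucket{i}.txt")))
                    .ToArray();
                try
                {
                    foreach (var line in packetLines)
                    {
                        int bucket = (flowKeyOf(line).GetHashCode() & 0x7fffffff) % BucketCount;
                        writers[bucket].WriteLine(line);
                    }
                }
                finally
                {
                    foreach (var w in writers) w.Dispose();
                }
            }

            // Pass 2: each bucket is small enough to group into flows entirely in memory.
            public static IEnumerable<IGrouping<string, string>> FlowsIn(
                string bucketFile, Func<string, string> flowKeyOf)
            {
                return File.ReadLines(bucketFile).GroupBy(flowKeyOf);
            }
        }

    With a 60 MB trace and ~200k flows, each of the 256 buckets holds only a few hundred flows, so both memory use and the number of simultaneously open files stay bounded.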

    Read the article

  • Zaypay alternatives for payments using call or sms

    - by JohannesH
    We are currently trying to implement a payment provider in Zaypay for paying for services using SMS or by calling a number. We already have Google Checkout and PayPal working for regular payments, but Zaypay is rather inflexible, poorly documented and a pain to set up when you have hundreds of products with varying prices. So my question is, do you know of any other European payment providers that take SMS and call payments?

    As a response to Robert's answer/question: Hi Robert, I must say that the Zaypay solution is the best and only one I've seen thus far regarding phoned payments. However, since it's now 2 months since I finished the implementation of our custom Zaypay UI, I can't remember many of the details of the problems we were having. I'll try to give a brief account of them anyway, as best I can. First of all, I would like to see a redirection-type scenario for payalogues. From what I remember you guys are using the JS framework Prototype, which doesn't play nice with jQuery, which we are using, so we weren't able to use the popup-type scenario supported by payalogues. Furthermore, when implementing our custom interface I remember a lot of missing translations, like words that were codes instead of a word or a phrase. This meant we ended up writing/translating all the messages we needed ourselves. Also, another point of annoyance was the setup of prices and items. I wish we could just send in the order items/prices as a part of the interface like you can in Google Checkout or PayPal (not that they're flawless either), instead of having to define ALL the items you will ever sell through your admin interface beforehand. As far as I can remember it is virtually impossible to use Zaypay for a multi-item order in its current form. Finally there are, as far as I can tell, some security issues that you have to think about when you implement a custom solution... especially an ajax-driven one. As I said in my original post, you do mention this in the documentation, but I believe the documentation wasn't that comprehensive regarding security issues. Again I wish I could give more details, but the code & client are long since gone, so I can't look up the comments I wrote. Sorry! Oh yeah, the general API documentation wasn't exactly comprehensive or 100% correct either. Again, I don't want to advise people against using Zaypay, I just want to advise that they should try it out first on a realistic prototype and think about their implementation before releasing to production. Maybe it's just me who misunderstood a lot of things, but I generally had a difficult time using your framework and I was left with a feeling that the API was very new and not thought through from the beginning.

    Read the article

  • Choosing the right .NET architecture. WCF? WPF/Forms, ASP.NET (MVC)?

    - by Tommy Jakobsen
    I'm in the situation that I have to design and implement a rather big system from the bottom up. I have some (a lot, actually) questions about the architecture that I would like your comments and thoughts on. I hope I haven't written too much here, but I wanted to give you all an idea of what the system is.

    Quick info about the applications, read if you want: I can't share much detail about the project, but basically it's a system where we offer our customers a service to manage their users. We have a hotline where the users call, and our hotline uses a (Windows) application (intranet) to manage the user's data, etc. The customer also has a web application where they can see reports and information about their business and users, and the ability to modify their data. Modifying data is not just user data like address and so on, but also information about the products/services the user has, which can be complicated. The applications will be built on Microsoft .NET Framework 4, with a MS SQL Server 2008 database. There will be a few applications that have to access this database, such as:

    - Intranet application (used by us and our hotline)
    - Customer web application type 1
    - Customer web application type 2
    - Customer web application type n (several different applications)
    - ...

    Now my big problem is what .NET parts I should use for such a system. For the "backend" I've considered using Windows Communication Foundation: would WCF be a good choice? The intranet application will be an application that has to edit lots of records in the database. It has to be easy to navigate using the keyboard (fast to work with), with features such as "find customer, find that, look up this, choose this and update that". What would be the best choice to develop this application in? Would it be WPF or good old Windows Forms? I don't need all of the fancy graphics features in WPF, like 3D, but the application has to look nice (maybe something like the new Visual Studio/Office tools). And the same question goes for the web pages. They have much the same work to do, but not as many features as the intranet application, and not the same amount of data (much less). Those are my questions for now. I'm hoping to get a discussion going that will open my eyes to some of these technologies, helping me decide on an architecture to go with. I would like to say thanks in advance, and let you all know that any thoughts will be much appreciated.
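
    If the WCF route is taken, a minimal sketch of the kind of contract both the intranet client and the customer web applications could share might look like the following; the DTO and operation names are placeholders, not a recommendation of the final design:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class CustomerUserDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
            [DataMember] public string Address { get; set; }
        }

        [ServiceContract]
        public interface ICustomerUserService
        {
            [OperationContract]
            CustomerUserDto GetUser(int userId);

            [OperationContract]
            void UpdateUser(CustomerUserDto user);
        }

    Keeping the SQL Server access behind one service layer like this means the choice between WinForms, WPF and ASP.NET front ends can be made per client without duplicating the data logic.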

    Read the article

  • DirectShow EVR resizing window problem

    - by Daniel
    So I've been looking into the world of media playback for Windows and I've started making a C# media player using DirectShow. I started off using the VMR-7 windowed video renderer and it was brilliant, except it had a couple of small problems (multi monitors, fullscreen). But after some research I found that it's deprecated and I should be using VMR-9. So I changed it to use VMR-9 windowless, then found out that was an old post rofl _<, so finally I'm using the Vista/Win7 (or XP .NET 3) Enhanced Video Renderer (EVR), which is apparently the most up-to-date Microsoft video renderer and has all the flashy performance/quality things added to it. (tbh I haven't noticed any difference, but maybe I need a Blu-ray or HQ video to notice it.)

    With EVR everything is working fine except resizing the video. It's really laggy/choppy/teary, and probably something to do with its frame queueing mechanism. To demonstrate my problem, open up Windows Media Player Classic: View - Options - Playback - Output, choose the "EVR" DirectShow video renderer, then restart WMP Classic and play a video; while it's playing, click and drag a corner to resize it. You'll notice it's horribly laggy. This is the exact same problem I am having. But if you choose "EVR Custom Pres. *" or "EVR Sync *", resizing works beautifully!

    So I tried googling around for anything about EVR resizing issues and how to fix them, but I couldn't believe how little I could find. I'm guessing "Custom Pres." stands for "Custom Presenter", which sounds like they made their own. Also you'll notice on the right hand side, when you swap between EVR and the other EVRs, that the Resizer drop down on the right greys out. So basically I want to know how I can fix this retarded resizing problem, and is there any decent documentation out there? There is a fair bit for VMR7/9 but not much for EVR. I downloaded the DirectX SDK which apparently has samples, but it was a waste of 500mb of bandwidth as it had nothing relevant. Perhaps there is some way to force it not to queue up frames, if that is the problem? If you want code, say the word and I'll paste some in. But it's really quite simple and nothing much happens; I'm convinced it's a problem with the EVR renderer.

    EDIT: Oh and one other thing, what does VLC use? If you go into VLC options and change the renderer to anything but default, they all suck. So is it using VMR7? Or its own?

    Read the article

  • c++ exceptions and program execution logic

    - by Andrew
    Hello everyone, I have been through a few questions but did not find an answer. I wonder how exception handling should be implemented in a C++ program so that it is centralized and tracks the program's progress. For example, I want to process exceptions at four stages of the program and know that an exception happened at that specific stage: 1. Initialization, 2. Script processing, 3. Computation, 4. Wrap-up. At this point, I tried this:

        int main (...)
        {
            ...
            // start logging system
            try {
                ...
            }
            catch (exception &e) {
                cerr << "Error: " << e.what() << endl;
                cerr << "Could not start the logging system. Application terminated!\n";
                return -1;
            }
            catch (...) {
                cerr << "Unknown error occured.\n";
                cerr << "Could not start the logging system. Application terminated!\n";
                return -2;
            }

            // open script file and acquire data
            try {
                ...
            }
            catch (exception &e) {
                cerr << "Error: " << e.what() << endl;
                cerr << "Could not acquire input parameters. Application terminated!\n";
                return -1;
            }
            catch (...) {
                cerr << "Unknown error occured.\n";
                cerr << "Could not acquire input parameters. Application terminated!\n";
                return -2;
            }

            // computation
            try {
                ...
            }
            ...

    This is definitely not centralized and seems stupid. Or maybe it is not a good concept at all?

    Read the article

  • Job queue manager with RPC interface

    - by admr
    I need a job queue manager that I can control over the Internet. It should be able to execute and stop processes, check on their status (ideally notice and execute some code when a process exits), respond to commands and also be able to report back to a server. Background: I have a GWT application that allows to create jobs to execute on a cloud instance (currently EC2). I want to push a "job packet" (data for a process to operate on etc) to S3, start a Linux EC2 instance (or use one that's already running), and tell a job manager on the instance to execute that job (possibly parallel to other jobs). It should then pull the "job packet" from S3, run a process that operates on that data and report back to the server that is running the server part of my GWT application with some information (e.g. exit code, stdout, stderr). If I have to write e.g. stdour/err to a file from the process and read that file, that's OK too. I would really like the manager to be "close" to the processes it runs, meaning I want to avoid using something like Runtime.exec from the JDK. It seems like I would have to do that if I used Quartz for example. I'm fine with the calls in both directions being asynchronous. I'm fine with any reasonable technology for the calls as long as I can easily build an interface for that in my GWT server side (e.g. HTTP requests to a servlet over SSL would be nice and trivial). The job manager does not need to have a very sophisticated queueing system. Running several processes either sequentially or in parallel should be fine. Determining how much compute time a process received during its lifetime would be nice (AFAIK, this might be challenging). I did not yet find any existing software that does this, including http://java-source.net/open-source/job-schedulers. I suspect I might have to build an RPC interface (with authentication etc, of course) around a job manager; maybe use something like Apache Commons Exec. In that case, I would prefer Java or Python for the job manager part. I would be happy to hear suggestions for either the former or latter scenario!

    Read the article

  • Error trying to run rails server

    - by David87
    I am trying to get a basic Rails application to run on my Mac OS X 10.6.5. I created a new app called demo (rails new demo), then went into the demo directory and tried to start the app with rails server. Here is the error message I received:

        /Users/dpetrovi/.gem/ruby/1.8/gems/sqlite3-ruby-1.3.2/lib/sqlite3/sqlite3_native.bundle: [BUG] Segmentation fault
        ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10]
        Abort trap

    I checked bundle install in the demo folder:

        Using rake (0.8.7)
        Using abstract (1.0.0)
        Using activesupport (3.0.3)
        Using builder (2.1.2)
        Using i18n (0.5.0)
        Using activemodel (3.0.3)
        Using erubis (2.6.6)
        Using rack (1.2.1)
        Using rack-mount (0.6.13)
        Using rack-test (0.5.6)
        Using tzinfo (0.3.23)
        Using actionpack (3.0.3)
        Using mime-types (1.16)
        Using polyglot (0.3.1)
        Using treetop (1.4.9)
        Using mail (2.2.13)
        Using actionmailer (3.0.3)
        Using arel (2.0.6)
        Using activerecord (3.0.3)
        Using activeresource (3.0.3)
        Using bundler (1.0.7)
        Using thor (0.14.6)
        Using railties (3.0.3)
        Using rails (3.0.3)
        Using sqlite3-ruby (1.3.2)
        Your bundle is complete! Use bundle show [gemname] to see where a bundled gem is installed.

    Ruby, RubyGems, and sqlite3 were installed using MacPorts. Then I used gem to try to install the sqlite3-ruby interface (sudo gem install sqlite3-ruby). Here is where I first noticed something could be off:

        Successfully installed sqlite3-ruby-1.3.2
        1 gem installed
        Installing ri documentation for sqlite3-ruby-1.3.2...
        No definition for libversion
        Enclosing class/module 'mSqlite3' for class Statement not known
        Installing RDoc documentation for sqlite3-ruby-1.3.2...
        No definition for libversion
        Enclosing class/module 'mSqlite3' for class Statement not known

    I had Rails running well on my system a few months ago, so I figured maybe I had some duplicates and it was trying to use the wrong one. I ran:

        for cmd in ruby irb gem rake; do which $cmd; done

    and got:

        /opt/local/bin/ruby
        /opt/local/bin/irb
        /opt/local/bin/gem
        /opt/local/bin/rake

    Checking where sqlite3 is also gets me /opt/local/bin/sqlite3, so they all seem to be in the right place. Obviously /opt/local/bin is in my system path. If I check the gems server, it shows that I have installed the sqlite3-ruby 1.3.2 gem. Not sure what the problem could be. I am using ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10]; MacPorts claims this is the latest (although I've seen 1.9.1). One more thing -- in irb, I tried to check which version of sqlite3 my sqlite3-ruby is bound to, but I can only get this far:

        irb(main):001:0> require 'rubygems'
        => true
        irb(main):002:0> require 'sqlite3'
        /Users/dpetrovi/.gem/ruby/1.8/gems/sqlite3-ruby-1.3.2/lib/sqlite3/sqlite3_native.bundle: [BUG] Segmentation fault
        ruby 1.8.7 (2010-12-23 patchlevel 330) [i686-darwin10]
        Abort trap

    Any suggestions? I'm hoping I overlooked something obvious. Thanks

    Read the article

  • Looking for Fiddler2 help. connection to gateway refused? Just got rid of a virus

    - by John Mackey
    I use Fiddler2 for facebook game items, and it's been a great success. I accessed a website to download some dat files I needed. I think it was eshare, ziddu or megaupload, one of those. Anyway, even before the rar file had downloaded, I got this weird green shield in the bottom right hand corner of my computer. It said a Trojan was trying to access my computer, or something to that extent. It prompted me to click the shield to begin anti-virus scanning. It turns out this rogue program is called Antivirus System Pro and is pretty hard to get rid of. After discovering the rogue program, I tried using Fiddler and got the following error:

        [Fiddler] Connection to Gateway failed.
        Exception Text: No connection could be made because the target machine actively refused it 127.0.0.1:5555

    I ended up purchasing SpyDoctor + Antivirus, which I'm told is designed specifically for getting rid of these types of programs. Anyway, I did a quick scan last night with SpyDoctor and Malwarebytes. Malwarebytes picked up 2 files, and SpyDoctor found 4. Most were insignificant, but it did find a worm called Worm.Alcra.F, which was labeled high-priority. I don't know if that's the Antivirus System Pro or not, but SpyDoctor said it got rid of all of those successfully. I tried to run Fiddler again before leaving home, but was still getting the "gateway failed" error. I'm using the newest version of Firefox. When I initially set up Fiddler 2.2.8.6, I couldn't get it to run at first, so I found this faq on the internet that said I needed to go through Tools > Options > Settings and set up an HTTP Proxy to 127.0.0.1 and my Port to 8888. Once I set that up and downloaded the fiddler helper as a Firefox add-on, it worked fine. When I turn on Fiddler, it automatically takes my proxy setting from no proxy (default) to the 127.0.0.1 with Port 8888 set up. It worked fine until my computer detected this virus. Anyway, hopefully I've given you sufficient information to offer me your best advice here. Like I said, SpyDoctor says the bad stuff is gone, so maybe the rogue program made some type of change in my Fiddler that I could just reset or uncheck or something like that? Or will I need to completely remove Fiddler and those dat files and rar files I downloaded? Any help would be greatly appreciated. Thanks for your time.

    Read the article

  • C# Extend array type to overload operators

    - by Episodex
    I'd like to create my own class extending an array of ints. Is that possible? What I need is an array of ints that can be added by the "+" operator to another array (element-wise), and compared by "==", so it could (hopefully) be used as a key in a dictionary. The thing is, I don't want to implement the whole IList interface in my new class, but only add those two operators to the existing array class. I'm trying to do something like this:

        class MyArray : Array<int>

    But it's not working that way, obviously ;). Sorry if I'm unclear, but I've been searching for a solution for hours now...

    UPDATE: I tried something like this:

        class Zmienne : IEquatable<Zmienne>
        {
            public int[] x;

            public Zmienne(int ilosc)
            {
                x = new int[ilosc];
            }

            public override bool Equals(object obj)
            {
                if (obj == null || GetType() != obj.GetType())
                {
                    return false;
                }
                return base.Equals((Zmienne)obj);
            }

            public bool Equals(Zmienne drugie)
            {
                if (x.Length != drugie.x.Length)
                    return false;
                else
                {
                    for (int i = 0; i < x.Length; i++)
                    {
                        if (x[i] != drugie.x[i])
                            return false;
                    }
                }
                return true;
            }

            public override int GetHashCode()
            {
                int hash = x[0].GetHashCode();
                for (int i = 1; i < x.Length; i++)
                    hash = hash ^ x[i].GetHashCode();
                return hash;
            }
        }

    Then use it like this:

        Zmienne tab1 = new Zmienne(2);
        Zmienne tab2 = new Zmienne(2);
        tab1.x[0] = 1;
        tab1.x[1] = 1;
        tab2.x[0] = 1;
        tab2.x[1] = 1;
        if (tab1 == tab2)
            Console.WriteLine("Works!");

    And no effect. I'm not good with interfaces and overriding methods, unfortunately :(. As for the reason I'm trying to do it: I have some equations like:

        x1 + x2 = 0.45
        x1 + x4 = 0.2
        x2 + x4 = 0.11

    There are a lot more of them, and I need to, for example, add the first equation to the second and search all the others to find out if there is any that matches the combination of x'es resulting from that addition. Maybe I'm going in a totally wrong direction?
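
    For comparison, here is a sketch of a wrapper type that overloads + and == in the way the question describes. Note that "==" only uses your comparison once the operator itself is overloaded; overriding Equals alone is not enough. The class name IntVector is made up for the example:

        using System;
        using System.Linq;

        public sealed class IntVector : IEquatable<IntVector>
        {
            private readonly int[] values;

            public IntVector(params int[] values) { this.values = values; }

            // Element-wise addition of two equally long vectors.
            public static IntVector operator +(IntVector a, IntVector b)
            {
                if (a.values.Length != b.values.Length)
                    throw new ArgumentException("Lengths differ.");
                return new IntVector(a.values.Zip(b.values, (x, y) => x + y).ToArray());
            }

            public static bool operator ==(IntVector a, IntVector b) =>
                ReferenceEquals(a, b) || (!ReferenceEquals(a, null) && a.Equals(b));

            public static bool operator !=(IntVector a, IntVector b) => !(a == b);

            public bool Equals(IntVector other) =>
                !ReferenceEquals(other, null) && values.SequenceEqual(other.values);

            public override bool Equals(object obj) => Equals(obj as IntVector);

            // Order-sensitive hash so equal vectors land in the same dictionary bucket.
            public override int GetHashCode() =>
                values.Aggregate(17, (hash, v) => hash * 31 + v);
        }

    With Equals and GetHashCode consistent like this, instances can also serve as dictionary keys, which is the other requirement in the question.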

    Read the article

  • Good object/DB set-up for CMS-esque app for managing content and user permissions?

    - by sah302
    Hi all, so I am writing a big CMS-esque app to allow users to manage web content through web applications, I've got a pretty good db-driven user permission system going, but am having trouble coming up with a good way to handle content groups and pages, I've got a couple options and not sure which one to take. Furthermore, I am not sure how to handle static page updates that have no 'widgets' in them. My current set-up for permissions is this: Objects: User, UserGroup, UserUserGroup, UserGroupType Standard many to many relationship User -> UserUserGroup <- UserGroup each Usergroup has a UserGroupType, which could be anything from Title, Department, to PermissionGroup. PermissionGroup manages the permissions. Right now on a per page basis I check permissions based on their PermissionsGroups. So for a page which has CMS features for a news widget, I check for permission groups of "Site Admin" and "News Admin". Now the issue I am coming to is, the site has many different departments involved. No problem I think, I can just have a EntityContentGroup so any widget app can be used for any departments. So my HR department, each of their news items would be in the EntityContentGroup with the news item ID, and content group of "HR" or "HR News". But maybe this isn't the most efficient way to go about it? I don't want to put the content group simply as a NewsItemType because some news items could apply to multiple areas, so I want to be able to assign them to as many areas as I want. Likewise, all of my widget apps have this, so that's why I decided to choose EntityContentGroup and not just NewsItemContentGroup. I was also thinking well instead of doing a contentGroup do a Page object that says which page some entity should be on. It seems almost like the same thing, but would I want to use Page for something else? I was thinking Page would be used for static pages with no widgets, a simple Rich Text Editor can edit the content of that page and I save that item to a page?? And then instead of doing a page level check for UserGroup permissions, would it be better to associate a usergroup to a contentgroup, and then just depending on what contentGroup content on the page is displayed, determine the permissions through that relationship? Is that better? I am not sure at this point. I guess I am just getting a tad overwhelmed at this is the largest app in scope and size that I have ever written. What is the best approach for this based on my current user permission set-up?

    Read the article

  • Adding NavigationControl to a TabBar Application containing UITableViews

    - by kungfuslippers
    Hi, I'm new to iPhone dev and wanted to get advice on the general design pattern / guide for putting a certain kind of app together. I'm trying to build a TabBar type application. One of the tabs needs to display a TableView and selecting a cell from within the table view will do something else - maybe show another table view or a web page. I need a Navigation Bar to be able to take me back from the table view/web page. The approach I've taken so far is to: Create an app based around UITabBarController as the rootcontroller i.e. @interface MyAppDelegate : NSObject <UIApplicationDelegate> { IBOutlet UIWindow *window; IBOutlet UITabBarController *rootController; } Create a load of UIViewController derived classes and associated NIBs and wire everything up in IB so when I run the app I get the basic tabs working. I then take the UIViewController derived class and modify it to the following: @interface MyViewController : UIViewController<UITableViewDataSource, UITableViewDelegate> { } and I add the delegate methods to the implementation of MyViewController - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return 2; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } if (indexPath.row == 0) { cell.textLabel.text = @"Mummy"; } else { cell.textLabel.text = @"Daddy"; } return cell; } Go back to IB , open MyViewController.xib and drop a UITableView onto it. Set the Files Owner to be MyViewController and then set the delegate and datasource of the UITableView to be MyViewController. If I run the app now, I get the table view appearing with mummy and daddy working nicely. So far so good. The question is how do I go about incorporating a Navigation Bar into my current code for when I implement: - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath() { // get row selected NSUInteger row = [indexPath row]; if (row == 0) { // Show another table } else if (row == 1) { // Show a web view } } Do I drop a NavigationBar UI control onto MyControllerView.xib ? Should I create it programmatically? Should I be using a UINavigationController somewhere ? I've tried dropping a NavigationBar onto my MyControllerView.xib in IB but its not shown when I run the app, only the TableView is displayed.

    Read the article

  • NSMutableURLRequest and a Password with special Characters won't work

    - by twickl
    I'm writing a small program (using Cocoa Touch), which communicates with a webservice. The code for calling the webservice, is the following: - (IBAction)send:(id)sender { if ([number.text length] > 0) { [[UIApplication sharedApplication] beginIgnoringInteractionEvents]; [activityIndicator startAnimating]; NSString *modded; modded = [self computeNumber]; NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL: [NSURL URLWithString:@"https://tester:%=&[email protected]/RPC2"]]; [theRequest setHTTPMethod:@"POST"]; [theRequest addValue:@"text/xml" forHTTPHeaderField:@"content-type"]; [theRequest setCachePolicy:NSURLCacheStorageNotAllowed]; [theRequest setTimeoutInterval:5.0]; NSString* pStr = [[NSString alloc] initWithFormat:@"<?xml version=\"1.0\" encoding=\"UTF-8\"?><methodCall><methodName>samurai.SessionInitiate</methodName><params><param><value><struct><member><name>LocalUri</name><value><string></string></value></member><member><name>RemoteUri</name><value><string>sip:%@@sipgate.net</string></value></member><member><name>TOS</name><value><string>text</string></value></member><member><name>Content</name><value><string>%@</string></value></member><member><name>Schedule</name><value><string></string></value></member></struct></value></param></params></methodCall>", modded, TextView.text]; NSData* pBody = [pStr dataUsingEncoding:NSUTF8StringEncoding]; [theRequest setHTTPBody:pBody]; NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; if (!theConnection) { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:@"A Connection could not be established!" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles: nil]; [alert show]; [alert release]; sendButton.enabled = TRUE; return; } [pStr release]; [pBody release]; } } The Username and Password have to be in the URL, and it works in most cases, but when the password consists of special characters like in the example "%=&=-Test2009", the Webservice does not respond. If it is something like "Test2009" it works fine. Does anyone have an idea why, and maybe a solution for that?

    Read the article

  • What is the best way to read and write cXML documents in C# ?

    - by tetranz
    I know this is a vague, open ended question. I'm hoping to get some general direction. I need to add cXML punchout to an ASP.NET C# site/application. This is replacing something that I wrote years ago in ColdFusion. I'm a reasonably experienced C# developer but I haven't done much with XML. There seem to be lots of different options for processing XML in .NET. Here's the open ended question: assuming that I have an XML document in some form, e.g. a file or a string, what is the best way to read it into my code? I want to get the data and then query databases etc. The cXML document size and our traffic volumes are easily small enough that loading a cXML document into memory is not a problem. Should I:

    1) Manually build classes based on the DTD and use the XML serializer?

    2) Use a tool to generate classes? There are sample cXML files downloadable from Ariba.com. I tried xsd.exe to generate an xsd and then xsd.exe /c to generate classes. When I try to deserialize I get errors because there seems to be "confusion" around whether some elements should be single values or arrays. I tried the CodeXS online tool, but that gives errors in its log and errors if I try to deserialize a sample document.

    3) Create a dataset and ReadXml()?

    4) Create a typed dataset and ReadXml()?

    5) Use LINQ to XML? I often use LINQ to Objects so I'm familiar with LINQ in general, but I'm struggling to see what it gives me in this situation.

    6) Some other means?

    I guess I need to improve my understanding of XML in general, but even so... am I missing some obvious way of doing this? In the old ColdFusion site I found a free component ("tag") which basically ignored any schema and read the XML into a "structure", which is essentially a series of nested hash tables, and which was then easy to read in code. That was probably quite sloppy but it worked. I also need to generate XML files from my C# objects. Maybe LINQ to XML will be good for that. I could start with a default "template" document and manipulate it before saving. Thanks for any pointers ...
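
    If option 5 (LINQ to XML) is explored, a minimal sketch of pulling a couple of values out of a punchout request could look like the following. The element names follow the usual cXML PunchOutSetupRequest layout, but treat them as illustrative and check them against the actual documents you receive:

        using System;
        using System.Xml.Linq;

        class CxmlReader
        {
            public static void ReadPunchOutSetup(string path)
            {
                XDocument doc = XDocument.Load(path);

                // Element() returns null when a node is missing, so a production
                // version should check each step before dereferencing.
                XElement setup = doc.Root
                    .Element("Request")
                    .Element("PunchOutSetupRequest");

                string buyerCookie = (string)setup.Element("BuyerCookie");
                string returnUrl = (string)setup.Element("BrowserFormPost")?.Element("URL");

                Console.WriteLine($"BuyerCookie: {buyerCookie}, return URL: {returnUrl}");
            }
        }

    Generating the outgoing documents can work the same way in reverse: build an XElement tree and call Save(), which avoids hand-maintaining serializer classes for a DTD you don't control.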

    Read the article

  • JPanel components paint-time problem

    - by Tom Brito
    I'm having a problem where, when my frame is shown (after a login dialog), the buttons are not in the correct position; then after a few milliseconds they move to the right position (the center of the panel with BorderLayout). When I make an SSCCE, it works correctly, but when I run my whole code I have this brief, milliseconds-long delay before the buttons go to the correct place. Unfortunately, I can't post the whole code, but the method that shows the frame is:

        public void login(JComponent userView) {
            centerPanel.removeAll();
            centerPanel.add(userView);
            centerPanel.revalidate();
            centerPanel.repaint();
            frame.setVisible(true);
        }

    What would cause this delay in the panel layout? (I'm running everything in the EDT.)

    -- update

    On my machine, this SSCCE shows the layout problem in 2 of 10 times I run it:

        import java.awt.BorderLayout;
        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.JPanel;
        import javax.swing.SwingUtilities;

        public class TEST {
            public static void main(String[] args) throws Exception {
                SwingUtilities.invokeAndWait(new Runnable() {
                    @Override
                    public void run() {
                        System.out.println("Debug test...");
                        JPanel btnPnl = new JPanel();
                        btnPnl.add(new JButton("TEST"));
                        JFrame f = new JFrame("TEST");
                        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        f.getContentPane().setLayout(new BorderLayout());
                        f.getContentPane().add(btnPnl);
                        f.pack();
                        f.setSize(800, 600);
                        f.setVisible(true);
                        System.out.println("End debug test!");
                    }
                });
            }
        }

    The button first appears in the upper-left, and then it goes to the center. Please note that I want to understand this, not just correct it. Is it a Java bug?

    -- update

    OK, so the SSCCE doesn't show the problem for those of you who have tried it so far. Maybe it's my computer's performance. But this doesn't answer the question; I still think Java Swing is creating new threads to do the layout behind the scenes.

    Read the article

  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .NET Amazon S3 libraries: Three Sharp and LitS3. I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries so they can give an objective point of view? Below are some sample calls using LitS3.

    On the demo controller:

        private S3Service s3 = new S3Service()
        {
            AccessKeyID = "Thekey",
            SecretAccessKey = "testing"
        };

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View("Index", s3.GetAllBuckets());
        }

    On the demo view:

        <% foreach (var item in Model) { %>
            <p>
                <%= Html.Encode(item.Name) %>
            </p>
        <% } %>

    EDIT 1: Since I have to keep moving and there is no clear indication of which library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change the library if I need to in the future. Below is a section of the S3Repository that I have created and that will let me change libraries in case I need to:

        using LitS3;

        namespace S3Helper.Models
        {
            public class S3Repository : IS3Repository
            {
                private S3Service _repository;

                #region IS3Repository Members

                public IQueryable<Bucket> FindAllBuckets()
                {
                    return _repository.GetAllBuckets().AsQueryable();
                }

                public IQueryable<ListEntry> FindAllObjects(string BucketName)
                {
                    return _repository.ListAllObjects(BucketName).AsQueryable();
                }

                #endregion

    If you have any information about this question please let me know in a comment, and I will get back and edit the question.

    EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design. I know this is probably a waste of time, but I really want a good long-run solution. Below you will see two samples of the same action with the two libraries; maybe this will motivate some of you to let me know your thoughts.

    WITH THREE SHARP LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            List<string> list = new List<string>();
            using (BucketListRequest request = new BucketListRequest(null))
            using (BucketListResponse response = service.BucketList(request))
            {
                XmlDocument bucketXml = response.StreamResponseToXmlDocument();
                XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']");
                foreach (XmlNode bucket in buckets)
                {
                    list.Add(bucket.InnerXml);
                }
            }
            return list.Cast<T>().AsQueryable();
        }

    WITH LITS3 LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            return _repository.GetAllBuckets()
                .Cast<T>()
                .AsQueryable();
        }

    Read the article

  • Special scheduling Algorithm (pattern expansion)

    - by tovare
    Question: Do you think genetic algorithms are worth trying out for the problem below, or will I hit local-minima issues? I think maybe aspects of the problem would be great for a generator / fitness-function style setup. (If you've botched a similar project I would love to hear from you, so I don't do something similar.) Thank you for any tips on how to structure things and nail this right.

    The problem: I'm searching for a good scheduling algorithm to use for the following real-world problem. I have a sequence with 15 slots like this (the digits may vary from 0 to 20):

        2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

    (And there are in total 10 different sequences of this type.) Each sequence needs to expand into a matrix, where each slot can take 1 position:

        1 1 0 0 1 1 1 0 0 0 1 1 1 0 0
        1 1 0 0 1 1 1 0 0 0 1 1 1 0 0
        0 0 1 1 0 0 0 1 1 1 0 0 0 1 1
        0 0 1 1 0 0 0 1 1 1 0 0 0 1 1

    The constraints on the matrix are that:

    - [row-wise, i.e. horizontally] The number of ones placed must either be 11 or 111
    - [row-wise] The distance between two sequences of 1 needs to be a minimum of 00
    - The sum of each column should match the original array
    - The number of rows in the matrix should be optimized

    The array then needs to be allocated to one of 4 different matrices, which may have different numbers of rows: A, B, C, D. A, B, C and D are real-world departments. The load needs to be placed reasonably fairly over the course of a 10-day period, so as not to interfere with other department goals. Each of the matrices is compared with the expansion of the 10 different original sequences, so you have:

        A1, A2, A3, A4, A5, A6, A7, A8, A9, A10
        B1, B2, B3, B4, B5, B6, B7, B8, B9, B10
        C1, C2, C3, C4, C5, C6, C7, C8, C9, C10
        D1, D2, D3, D4, D5, D6, D7, D8, D9, D10

    Certain spots on these may be reserved (not sure if I should make it just reserved/not reserved or function-based). The reserved spots might be meetings and other events. The sum of each row (for instance all the A's) should be approximately the same within 2%, i.e. sum(A1 through A10) should be approximately the same as sum(B1 through B10), etc. The number of rows can vary, so you have for instance:

        A1: 5 rows
        A2: 5 rows
        A3: 1 row, where that single row could for instance be: 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0

    etc.

    Sub-problem: I'd be very happy to solve only part of the problem. For instance, being able to input:

        1 1 2 3 4 2 2 3 4 2 2 3 3 2 3

    and get an appropriate set of row sequences with 1's and 0's, minimized on the number of rows, following the constraints above.
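
    Whatever search strategy is used (genetic or otherwise), the fitness side needs something that scores a candidate expansion against the constraints. Below is a small, illustrative C# sketch of two of the checks, runs of exactly 11/111 separated by at least 00, and column sums matching the original sequence, assuming rows are plain int arrays; it is a starting point, not a complete validator:

        using System.Linq;

        static class ExpansionChecks
        {
            // A row is valid if every run of 1s has length 2 or 3 and
            // consecutive runs are separated by at least two 0s.
            public static bool RowIsValid(int[] row)
            {
                int run = 0, gap = 2;   // start as if preceded by a large gap
                for (int i = 0; i < row.Length; i++)
                {
                    if (row[i] == 1)
                    {
                        if (run == 0 && gap < 2) return false;  // runs too close together
                        run++;
                        gap = 0;
                        if (run > 3) return false;              // run longer than 111
                    }
                    else
                    {
                        if (run == 1) return false;             // an isolated single 1
                        run = 0;
                        gap++;
                    }
                }
                return run != 1;                                 // row may not end on a lone 1
            }

            // Column sums of the candidate matrix must reproduce the original sequence.
            public static bool ColumnsMatch(int[][] matrix, int[] target)
            {
                return Enumerable.Range(0, target.Length)
                    .All(c => matrix.Sum(row => row[c]) == target[c]);
            }
        }

    A fitness function could then count how many of these checks a candidate passes (plus penalties for row count and for missing the 2% fairness target), which is usually better for a GA than a hard pass/fail.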

    Read the article

  • How can I read/write data from a file?

    - by samy
    I'm writing a simple chrome extension. I need to create the ability to add sites URLs to a list, or read from the list. I use the list to open the sites in the new tabs. I'm looking for a way to have a data file I can write to, and read from. I was thinking on XML. I read there is a problem changing the content of files with Javascript. Is XML the right choice for this kinda thing? I should add that there is no web server, and the app will run locally, so maybe the problem websites are having are not same as this. Before I wrote this question, I tried one thing, and started to feel insecure because it didn't work. I made a XML file called Site.xml: <?xml version="1.0" encoding="utf-8" ?> <Sites> <site> <url> http://www.sulamacademy.com/AddMsgForum.asp?FType=273171&SBLang=0&WSUAccess=0&LocSBID=20375 </url> </site> <site> <url> http://www.wow.co.il </url> </site> <site> <url> http://www.Google.co.il </url> </site> I made this script to read the data from him, and put in on the html file. function LoadXML() { var ajaxObj = new XMLHttpRequest(); ajaxObj.open('GET', 'Sites.xml', false); ajaxObj.send(); var myXML = ajaxObj.responseXML; document.write('<table border="2">'); var prs = myXML.getElementsByTagName("site"); for (i = 0; i < prs.length; i++) { document.write("<tr><td>"); document.write(prs[i].getElementsByName("url")[0].childNode[0].nodeValue); document.write("</td></tr>"); } document.write("</table"); }

    Read the article

  • c#: Design advice. Using DataTable or List<MyObject> for a generic rule checker

    - by Andrew White
    Hi, I have about 100,000 lines of generic data. Columns/Properties of this data are user definable and are of the usual data types (string, int, double, date). There will be about 50 columns/properties. I have 2 needs: To be able to calculate new columns/properties using an expression e.g. Column3 = Column1 * Column2. Ultimately I would like to be able to use external data using a callback, e.g. Column3 = Column1 * GetTemperature The expression is relatively simple, maths operations, sum, count & IF are the only necessary functions. To be able to filter/group the data and perform aggregations e.g. Sum(Data.Column1) Where(Data.Column2 == "blah") As far as I can see I have two options: 1. Using a DataTable. = Point 1 above is achieved by using DataColumn.Expression = Point 2 above is acheived by using DataTable.DefaultView.RowFilter & C# code 2. Using a List of generic Objects each with a Dictionary< string, object to store the values. = Point 1 could be achieved by something like NCalc = Point 2 is achieved using LINQ DataTable: Pros: DataColumn.Expression is inbuilt Cons: RowFilter & coding c# is not as "nice" as LINQ, DataColumn.Expression does not support callbacks(?) = workaround could be to get & replace external value when creating the calculated column GenericList: Pros: LINQ syntax, NCalc supports callbacks Cons: Implementing NCalc/generic calc engine Based on the above I would think a GenericList approach would win, but something I have not factored in is the performance which for some reason I think would be better with a datatable. Does anyone have a gut feeling / experience with LINQ vs. DataTable performance? How about NCalc? As I said there are about 100,000 rows of data, with 50 columns, of which maybe 20 are calculated. In total about 50 rules will be run against the data, so in total there will be 5 million row/object scans. Would really appreciate any insights. Thx. ps. Of course using a database + SQL & Views etc. would be the easiest solution, but for various reasons can't be implemented.

    Read the article

  • bundler/capistrano is not installing gems with correct ruby version

    - by Douglas
    I'm trying to deploy my first app on a server with Capistrano, and I'm a bit lost with managing gemsets and ruby version. These are my (server and workstation) versions : Rails 3.2.8 RVM 1.16.17 Gem 1.8.24 Bundler 1.2.1 pg gem 0.14.1 My gemset are : Gemsets for ruby-1.9.3-p194 (found in /usr/local/rvm/gems/ruby-1.9.3-p194) (default) global = rail3dev20120606 I set the default gemset with : rvm use 1.9.3-p194@rail3dev20120606 --default --passenger When I run a : cap bundle:install The task end with success, but when I do a : gem list There are many missing gems though they are present in my Gemfile. When I go to check my gems in /var/www/opf/shared/bundle/ruby/ I find a folder called 1.9.1 and in /var/www/opf/shared/bundle/ruby/1.9.1/gems/ I can fond all of my needed gems (specified in Gemfile). I'm sure there is a problem with ruby version, but how do I solve this ? At the moment, if I do any rake command, I got a ruby crash [Bug] Segmentation fault, as it try to access the db and using postgresql_adapter. I think as many gems are missing there must have some gem dependencies not verified, and maybe a gem is using an incompatible ruby version 1.9.1 though it expect a 1.9.3. I think the issue is around managing ruby versions and gems. I'm certainly doing some mix with gemset and my capistrano deployement. I'm missing experience and info. Could anybody advise me how to handle this on the server ? What are the best practices ? How am I suppose to update my ruby version ? with Capistrano deploy.rb ? manually ? with/without rvm ? I saw a new version of ruby 1.9.3-p327 has just released. Should I use gemset or not ? What about the :rvm_ruby_string in my deploy.rb. Is it correctly spelled or should I remove the p194 part ? Should I Remove the :rvm_ruby_string ? Keep it ? Use a .rvmrc file ??? I'm really lost and some kind help would be welcome. This is my config/deploy.rb in any case : require 'bundler/capistrano' require File.join(File.dirname(__FILE__), 'deploy') + '/capistrano_database' set :rvm_type, :system set :rvm_ruby_string, 'ruby-1.9.3-p194@rail3dev20120606' require 'rvm/capistrano' set :application, 'opf' set :deploy_to, '/var/www/opf' set :rails_env, 'production' set :user, 'the_user' set :use_sudo, false set :group_writable, false set :scm, :git set :repository, '[email protected]:user/opf.git' set :branch, 'master' default_run_options[:pty] = true set :deploy_via, :remote_cache server '192.168.5.200', :web, :app, :db, :primary => true # If you are using Passenger mod_rails uncomment this: namespace :deploy do task :start do ; end task :stop do ; end task :restart, :roles => :app, :except => { :no_release => true } do run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}" end end Thanks for any help

    Read the article
