Search Results

Search found 7568 results on 303 pages for 'rails i18n'.

Page 284/303 | < Previous Page | 280 281 282 283 284 285 286 287 288 289 290 291  | Next Page >

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long JOIN queries, which forces me to learn about tables outside of my module, but I think I should not be concerned with tables not directly related to my module and should instead use data access functions (written by those responsible for other modules) when I need data from them.

    Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate the details (for example i18n, activation, product availability etc.). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables), OR I may pass the relevant product_id to the get_product_info(int) function.

    The first approach is obviously demanding and introduces many bad practices and things I normally consider faults in programming. The problem with the second approach seems to be the countless mini queries these access functions cause; performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things?

    UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way they are doing things? Or for some reason modify the schema? It means some other modules would break or malfunction until the change is integrated into them. The usual ripple effect problem.
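
    A middle ground between the two approaches is a batch accessor on the Products module: the ContactVendor side still goes through an interface, but the lookup is one query instead of one per product. A minimal Ruby sketch, assuming ActiveRecord and a hypothetical get_product_info_batch function (neither appears in the original post):

        # Hypothetical batch accessor the Products module could expose.
        # One query for the whole id list instead of one query per id.
        def get_product_info_batch(product_ids)
          Product.where(:id => product_ids).index_by(&:id)
        end

        # ContactVendor side; `conversations` is assumed to be an already-loaded
        # collection of records that carry a product_id.
        product_ids = conversations.map(&:product_id).uniq
        products    = get_product_info_batch(product_ids)
        conversations.each do |conversation|
          product = products[conversation.product_id]
          title   = product ? product.title : nil
        end

    This keeps the schema knowledge inside the Products module while avoiding the 100-query loop described above.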

    Read the article

  • Firefox handles xxx.submit(), Safari doesn't ... what can be done?

    - by Prairiedogg
    I'm trying to make a pull down menu post a form when the user selects (releases the mouse) on one of the options from the menu. This code works fine in FF but Safari, for some reason, doesn't submit the form. I re-wrote the code using jQuery to see if jQuery's .submit() implementation handled the browser quirks better. Same result: works in FF, doesn't work in Safari. The following snippets are from the same page, which has some Django template language mixed in.

    Here's the vanilla JS attempt:

        function formSubmit(lang) {
          if (lang != '{{ LANGUAGE_CODE }}') {
            document.getElementById("setlang_form").submit();
          }
        }

    Here's the jQuery attempt:

        $(document).ready(function() {
          $('#lang_submit').hide();
          $('#setlang_form option').mouseup(function () {
            if ($(this).attr('value') != '{{ LANGUAGE_CODE }}') {
              $('#setlang_form').submit();
            }
          });
        });

    And here's the form:

        <form id="setlang_form" method="post" action="{% url django.views.i18n.set_language %}">
          <fieldset>
            <select name="language">
              {% for lang in interface_languages %}
              <option value="{{ lang.code }}" onmouseup="formSubmit('{{ lang.name }}')" {% ifequal lang.code LANGUAGE_CODE %}selected="selected"{% endifequal %}>{{ lang.name }}</option>
              {% endfor %}
            </select>
          </fieldset>
        </form>

    My question is: how can I get this working in Safari?

    Read the article

  • Where to translate message strings - in the view or in the model?

    - by GrGr
    We have a multilingual (PHP) application and use gettext for i18n. There are a few classes in the backend/model that return messages or message formats for printf(). We use xgettext to extract the strings that we want to translate. We apply the gettext function T_() in the frontend/view - this seems to be where it belongs. So far we have kept the backend clean of T_() calls; this way we can also unit-test messages. So in the frontend we have something like

        echo T_($mymodel->getMessage());

    or

        printf(T_($mymodel->getMessageFormat()), $mymodel->getValue());

    This makes it impossible to apply xgettext to extract the strings, unless we put some dummy T_("my message %s to translate") call in the MyModel class. So this leads to the more general question: do you apply translation in the backend classes? Where do you apply translation, and how do you keep track of the strings which you have to translate? (I am aware of the question: poedit workaround for dynamic gettext.)

    Read the article

  • Ubuntu 12.04: apt-get "failed to fetch"; apt is trying to fetch via old static IP

    - by gabe
    Sample error:

        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/universe/i18n/Translation-en  Unable to connect to 192.168.1.70:8118:

    Now this was working just fine until I changed the IP this morning. I have the server set to a static IP of 10.0.1.70, and for years it has been 192.168.1.70 - the IP apt-get is trying to use right now. I use Privoxy and Tor, thus the 8118 port. Like I said, it all worked until I changed the static IP from 192.168.1.70 to 10.0.1.70. I was forced to do so because of router issues. (Long and involved story; I didn't really want to change the IP because I knew something like this would happen.)

    The setup for Tor/Privoxy requires that you point Privoxy at Tor via 127.0.0.1:9050, then point curl, etc. at Privoxy via $HOME/.bashrc. Typically you would set the listen IP for Privoxy to 127.0.0.1, but if you want it accessible to the rest of the LAN you set the IP to the server's LAN IP. Which I did a long time ago, and it was working fine until this morning. I have changed all instances of 192.168.1.70 to 10.0.1.70 in both /etc/privoxy/config and $HOME/.bashrc.

    What makes this really strange for me is that curl is working fine. I curl icanhazip.com and voila, I get a new IP every 10 minutes or so. I curl CNN.com and I get the short but sweet "permanently moved to www.cnn.com" message I expect. Firefox works fine. Ping works fine. And I've tested all of this via Remote Desktop over my LAN. So the connection appears to be fine for everything except apt. I've also rebooted, hoping that would clear 192.168.1.70 from apt.

    So the connection to the internet and DNS aren't an issue for these programs, and they are, as far as I can tell, using Privoxy/Tor just fine. The real irony here is that I've tried to open up Privoxy to go to Ubuntu's servers directly without going through Tor, to speed up the downloads from Ubuntu (did this months ago). So somewhere that I have not been able to find, apt has stored the IP 192.168.1.70. And 192.168.1.70 is no longer valid. Thanks for the help.
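
    For what it's worth, apt does not pick up proxy settings from $HOME/.bashrc; it takes them from its own configuration (or from the environment, which sudo typically resets). A hedged sketch of where a stale proxy entry usually hides - the exact file name under apt.conf.d is a guess:

        // /etc/apt/apt.conf or a file under /etc/apt/apt.conf.d/ (exact file name varies)
        // A leftover entry like this would keep apt pointed at the retired address:
        Acquire::http::Proxy "http://192.168.1.70:8118/";
        // Updated to the new static IP (or delete the line to stop proxying apt at all):
        Acquire::http::Proxy "http://10.0.1.70:8118/";

    A quick grep -r 192.168.1.70 /etc/apt /etc/environment should confirm whether that's where the old address is stored.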

    Read the article

  • DNS and DHCP die after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so any info specific to it should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing 2 days in and a host of other performance issues (the box was a little underpowered), I changed back to the original server. However, 2 days in, DHCP and DNS are failing again, and I'm out of ideas on why. In both cases, to my knowledge, no network or server changes occurred after installation.

    Right after installing (and at least a day in) DNS and DHCP were working just fine. However, later (day 2) I get a call saying their internet is down (translation: nobody can get to websites because DNS is down). I've tried to fix the problem by checking if dnsmasq is even running (it is), restarting the service, and restarting the server, to no effect. I do have two internal servers that have static DHCP leases, but one's lease must have expired as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the last server as I'll not be able to connect to it anymore.

    Is there anything anyone can think of on why DNS and DHCP would fail 2 days into running perfectly?

    More info: Running dnsmasq in debug mode, this is all that's displayed, even when running nslookup quackwall. I'm not sure though if nslookup commands should show up in the log.

        [root@quackwall ~]# /usr/sbin/dnsmasq -dq
        dnsmasq: started, version 2.49 cachesize 150
        dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP
        dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h
        dnsmasq: reading /etc/resolv.conf
        dnsmasq: using nameserver 74.128.17.114#53
        dnsmasq: using nameserver 74.128.19.102#53
        dnsmasq: read /etc/hosts - 5 addresses
        dnsmasq-dhcp: read /etc/ethers - 2 addresses

    On the other server, DNS and the gateway are all configured correctly (10.0.0.2 is quackwall):

        lordquackstar@quackgame:~$ netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        10.0.0.0        0.0.0.0         255.255.240.0   U         0 0          0 eth0
        0.0.0.0         10.0.0.2        0.0.0.0         UG        0 0          0 eth0

        lordquackstar@quackgame:~$ cat /etc/resolv.conf
        nameserver 10.0.0.2
        domain highwow.lan
        search highwow.lan

    Read the article

  • Very large database, very small portion most being retrieved in real time

    - by mingyeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size. My memory buffer is 8GB.

    Most of my data is rarely retrieved, or mainly retrieved by backend processes. I would very much prefer to keep it around because some features require it. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner.

    How can I make sure that the latter is always kept in memory? (There is more than enough space for it.)

    More info: We are on Ruby on Rails. The database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.
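
    As far as I know, InnoDB has no per-table "pin this in memory" switch; the buffer pool is an LRU cache, so the practical lever is sizing it so the hot tables and indexes comfortably fit and therefore stay resident. A hedged my.cnf sketch - the value is illustrative, not a recommendation for this particular box:

        # /etc/my.cnf (value is illustrative)
        [mysqld]
        # Frequently accessed pages stay cached when the pool covers the hot working set
        innodb_buffer_pool_size = 6G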

    Read the article

  • Determine nginx reverse-proxy load limits

    - by Aaron
    Hi all: I have an nginx server (CentOS 5.3, Linux) that I'm using as a reverse-proxy load balancer in front of 8 Ruby on Rails application servers. As our load on these servers increases, I'm beginning to wonder at what point the nginx server will become a bottleneck. The CPUs are hardly used, but that's to be expected. The memory seems to be fine. No IO to speak of. So is my only limitation bandwidth on the NICs? Currently, according to some cacti graphs, the server is hitting around 700Kbps (5 min average) on each NIC during high load. I would think this is still pretty low. Or will the limit be in sockets or some other resource in the operating system? Thanks for any thoughts and insights. Aaron
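
    If nginx itself does become the choke point, the usual ceilings are the configured worker connections and the per-process file-descriptor limit rather than CPU or NIC bandwidth at these traffic levels. A hedged nginx.conf sketch of the relevant knobs (numbers are illustrative):

        # nginx.conf (illustrative values)
        worker_processes     2;
        # each proxied request holds a client connection plus an upstream connection
        worker_rlimit_nofile 8192;

        events {
            worker_connections 4096;
        }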

    Read the article

  • General purpose ticketing/tech support system [closed]

    - by crazybyte
    Possible Duplicate: What’s your favorite ticketing system?

    I was wondering if somebody could recommend a very user-friendly or simple general-purpose ticketing/tech support system. I need something that is web based, preferably open-source/free software, implemented using PHP, Ruby, Ruby on Rails or Java (as back end) with MySQL or PostgreSQL as the database engine. I need something that is not development-management or project-management oriented like Eventum or similar (random example) - something to which the user can connect, open a tech support request, and be able to follow it until it is solved or dropped. I need it to be open source so that I can modify or extend it if there is a need. I tried a number of such systems available and found that osTicket or eTicket is close to what I need, but the code is somewhat flaky and some of the features work badly or behave strangely. Any thoughts/advice on where to find something similar? Thanks!

    Read the article

  • Sendmail background process sometimes processes queue, but sendmail -q always works

    - by markmcb
    I'm using sendmail version 8.14.4 on Fedora 15 to send email. My Rails app uses delayed_job to queue up emails. Messages will queue up in /var/spool/mqueue as expected, but don't always get processed. I can see the messages, and sendmail is definitely running in the background. Restarting the process does nothing. However, when I issue the sendmail -q command, sendmail gets to work and starts sending. The really odd thing is that this behavior only occurs sometimes; other times messages queue up and are delivered as expected. I've tried tweaking various sendmail configs to reduce the time between queue processing runs (for example, adding define('confMIN_QUEUE_AGE', '0')dnl to /etc/mail/sendmail.mc), but nothing seems to do the trick. Any ideas what might be the root cause?
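
    One thing worth checking is how the background daemon was started: sendmail only sweeps /var/spool/mqueue on the interval given by its -q flag, so a daemon running without one will sit on deferred messages until sendmail -q is run by hand. A hedged sketch of what to look at (the sysconfig values shown are the typical Fedora/RHEL defaults, not taken from this machine):

        # See which flags the running daemon was started with
        ps ax | grep '[s]endmail'
        # A typical invocation looks like: sendmail -bd -q1h

        # /etc/sysconfig/sendmail (Fedora/RHEL); QUEUE sets the -q interval
        DAEMON=yes
        QUEUE=1h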

    Read the article

  • Can I shorten my directory commands in Ubuntu?

    - by Spencer Cooley
    When working on a Rails app I like to open all of my files through the command line, like so:

        cd my_app
        gedit app/views/user/show.html.erb

    Is there a way that I could shorten this so that I could just write something like gedit user_views/show.html.erb? I would like the console to stay in the main directory; I just don't like having to type out app/controllers/user_controller.rb every time I want to open the user controller. I know that I could just open the file with my mouse, but I feel like moving from keyboard to mouse breaks my focus a little bit. When I can just tap away at the keyboard it seems like I have a smoother workflow.
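
    A couple of small shell functions in ~/.bashrc can do the path expansion while the prompt stays in the app root. A minimal sketch - the function names are made up for illustration:

        # ~/.bashrc - hypothetical helpers, pick any names you like
        gv() { gedit "app/views/$1"; }        # gv user/show.html.erb
        gc() { gedit "app/controllers/$1"; }  # gc user_controller.rb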

    Read the article

  • Install Ruby 1.8.7 on Fedora 11/12

    - by tadman
    Is there a simple way to install Ruby 1.8.7 on Fedora 11 or 12 without side-stepping the yum/RPM package management system too severely? Building from source is always an option, but it tends to deploy things in irregular places and proves to be more fuss to maintain in the long run. A self-built RPM is okay, but I'm presuming there's a .rpm out there somewhere already. Rails is not especially happy with 1.8.6 and the Fedora community, for various reasons, considers 1.8.7 to be toxic and best avoided.

    Read the article

  • Thinking Sphinx index rebuild error on windows xp: searchd is already running

    - by Voldy
    I have Sphinx installed on a Windows XP system, and I use the Thinking Sphinx plug-in within my Rails application. I can't rebuild the index with the Thinking Sphinx rake task after the application server has started, even if I stop it:

        Stopped search daemon (pid 4492).
        ... bla bla bla ...
        total 3 reads, 0.000 sec, 1.3 kb/call avg, 0.0 msec/call avg
        total 9 writes, 0.000 sec, 1.2 kb/call avg, 0.0 msec/call avg
        WARNING: could not open pipe (GetLastError()=2)
        rake aborted!
        searchd is already running.

    If I reboot the system, I can rebuild the index. What do you think about this?

    P.S.: Is this question suited for serverfault.com?

    Read the article

  • Apache/passenger/ree doesn't interpret .rb files

    - by Sergey
    I'm trying to get Apache + Passenger + REE to work. I think I did everything (except for setting up the Rails env - for now I want to run just pure Ruby) described here: http://rvm.beginrescueend.com/integration/passenger/

    But when I try to go to localhost/test.rb it doesn't interpret that file and just downloads it. I don't know where I should look for mistakes, so here are a few files I think could be relevant.

    /var/log/apache2/error.log (these 2 lines are repeating):

        [Mon May 31 23:12:47 2010] [notice] Graceful restart requested, doing restart
        [Mon May 31 23:12:48 2010] [notice] Apache/2.2.14 (Ubuntu) PHP/5.3.2-1ubuntu4.2 with Suhosin-Patch Phusion_Passenger/2.2.11 configured -- resuming normal operations

    /etc/apache2/httpd.conf:

        LoadModule passenger_module /home/sergey/.rvm/gems/ree-1.8.7-2010.01/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
        PassengerRoot /home/sergey/.rvm/gems/ree-1.8.7-2010.01/gems/passenger-2.2.11
        PassengerRuby /home/sergey/.rvm/bin/passenger_ruby

    /var/www/test.rb:

        puts "test"

    Read the article

  • Apache virtual host proxy to nginx for ruby

    - by Kevin Brown
    I'm running a few PHP sites off Apache and want to start Rails dev. I've installed rvm/nginx and can get to my Ruby site by going to websiteroot.com:8000. How do I pass ruby.websiteroot.com to websiteroot.com:8000? What's the best way for me to route a subdomain for Ruby dev? I'd switch to nginx completely if it weren't for all my PHP sites - seems like it's easier to just proxy for Ruby. Advice?

    My nginx config looks like this:

        server {
            listen 8000;
            server_name website.com;
            root /home/me/sites/ruby_folder/public;
            ...
        }

    My apache config looks like this:

        <VirtualHost>
            ServerName ruby.website.com
            ProxyPreserveHost on
            ProxyPass / http://127.0.0.1:8000
            ProxyPassReverse / http://127.0.0.1:8000
        </VirtualHost>

    Read the article

  • centos: nginx + thin webserver, incoming connections not allowed

    - by cbrulak
    I set up a fresh CentOS 5 install, compiled nginx from scratch, and am using thin as the Rails server. If I visit the IP address on the LAN (for example 1.2.3.4), I get a "website not found" error. However, I can ssh into the machine. If I use links to visit the IP address, I get the landing page. Any suggestions? Thanks

    EDIT: I ran system-config-securitylevel and was then able to change the security settings to allow incoming connections.

    Read the article

  • rvmsudo foreman export upstart without asking for password

    - by Millisami
    My Capistrano deploy.rb has a foreman export command for a Rails app on Ubuntu 10.04. So, while deploying, I want to export foreman to an upstart script. But doing that, the command rvmsudo foreman export ... asks for the root password and I cannot do anything. Googled a lot and tried various tweaks, but nothing worked.

        * executing `foreman:export'
        * executing "cd /home/deploy/zappy/releases/20111019175422 && rvmsudo foreman export upstart /etc/init -a zappy -u deploy -f ./Procfile.production -c worker=1 redis=1 -l /home/deploy/zappy/releases/20111019175422/log/foreman"
        servers: ["173.255.205.237"]
        [173.255.205.237] executing command
        ** [out :: 173.255.205.237] [sudo] password for deploy:

    What could be the solution to do this in a password-less way?
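
    Since the deploy runs non-interactively, the usual fix is a NOPASSWD sudoers rule for the deploy user (rvmsudo ends up calling sudo). A hedged sketch - the narrow form's command path is a guess and needs to match what `which foreman` reports on the server:

        # Edit with visudo, e.g. in /etc/sudoers.d/deploy-foreman
        # Broad: everything password-less for deploy (simple but permissive)
        deploy ALL=(ALL) NOPASSWD: ALL
        # Narrower alternative: only the foreman binary (path is a guess)
        # deploy ALL=(ALL) NOPASSWD: /home/deploy/.rvm/bin/foreman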

    Read the article

  • Low-cost, Flexible Log Aggregation [closed]

    - by Dan McClain
    I'm starting to have quite the collection of Ubuntu VMs that I must manage. I'm starting to investigate Puppet for managing the configuration of all of them, and apticron to let me know what's out of date. But the issue I feel I should deal with sooner rather than later is log aggregation. I'd like to stay in the free/open-source realm for now, seeing that we don't have much budget for something like Splunk yet. In addition to syslog, I would like to collect application-specific logs (we are running different apps on different machines, from nginx+Passenger for Rails, to Apache+Tomcat for Java, to PHP for ExpressionEngine, and MySQL/PostgreSQL database servers), so that we can analyze the relevant data. For now, I'm just looking to get all the logs in one place.
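
    For plain "all the logs in one place" before committing to a heavier stack, rsyslog (the default syslog daemon on recent Ubuntu) can forward everything to a single collector. A hedged sketch - host name and port are placeholders:

        # On each VM, e.g. /etc/rsyslog.d/90-forward.conf
        # @@ forwards over TCP; a single @ would use UDP
        *.* @@logs.example.com:514

        # On the collector, in /etc/rsyslog.conf
        $ModLoad imtcp
        $InputTCPServerRun 514

    Application logs that don't go through syslog (Rails, Tomcat) would still need their own shipping, which is where tools such as logstash or Graylog2 usually come in.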

    Read the article

  • Strange requests coming from Korean Site

    - by Jim Jeffers
    Lately I've been finding a lot of strange requests like this coming to my Rails app:

        Processing ApplicationController#index (for 189.30.242.61 at 2009-12-14 07:38:24) [GET]
          Parameters: {"_SERVER"=>{"DOCUMENT_ROOT"=>"http://www.usher.co.kr/bbs/id1.txt???"}}

        ActionController::RoutingError (No route matches "/browse/brand/nike ///" with {:method=>:get}):

    It looks like it's automated, as I get a lot of them, and notice the strange parameters they're trying to send:

        "_SERVER"=>{"DOCUMENT_ROOT"=>"http://www.usher.co.kr/bbs/id1.txt???

    Is this something malicious, and if so, what should I do about it?

    Read the article

  • I think I have multiple postgresql servers installed, how do I identify and delete the 'extra' ones?

    - by Guided33
    I seem to have a few installations of PostgreSQL on my machine somehow. I'm not sure if this is a mistake or if Ubuntu, for some odd reason, duplicates directories and keeps them elsewhere. I have a postgresql directory in /etc, one in /usr/lib, and one in /opt. I'm properly confused at this point. How do I go about deleting the extra ones? Which ones are the extra ones? I also need to make sure that the 'pg' gem in my Rails env is pointing at the correct PostgreSQL DB server. Any thoughts on my issue would be huge.
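
    A note in passing: /etc/postgresql and /usr/lib/postgresql are normally two halves of the same Ubuntu-packaged install (config vs binaries), while a copy under /opt usually means something was installed outside the package manager. A hedged sketch of the usual ways to see what is really there:

        # Which PostgreSQL packages (and versions) the package manager knows about
        dpkg -l 'postgresql*'
        # Which clusters exist, their versions, ports and data directories
        pg_lsclusters
        # Which binary and port any running server is actually using
        ps ax | grep '[p]ostgres'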

    Read the article

  • Is there a way to launch a command within a proper zsh shell?

    - by Wam
    I'm not really clear with my question here, let me rephrase it: I've set up a launch_workspace.sh to launch tmux directly with 5 different commands loaded. Here is my current content:

        #!/bin/sh
        tmux new-session -d -s scube -n 'vim' "vim"
        tmux new-window -t scube:2 -n 'server' "$SHELL -c 'script/rails server'"
        tmux new-window -t scube:3 -n 'yard' "$SHELL -c 'bundle exec yard server --gems'"
        tmux new-window -t scube:4 -n 'spork' "$SHELL -c 'bundle exec guard'"
        tmux new-window -t scube:5 -n 'autotest' "$SHELL -c 'bundle exec autotest'"
        tmux new-window -t scube:5 -n 'shell' "$SHELL"
        tmux select-window -t scube:1
        tmux -2 attach-session -t scube

    The problem is: my zsh ($SHELL being zsh) launches said commands, but when I Ctrl+C any of these, it closes the full zsh (hence my tmux window) and does not just return to a proper zsh prompt. Is there a way to get that behavior - to launch zsh with a command and return to a proper zsh prompt when the command fails? Cheers

    Read the article

  • Static error page served by nginx when my application is down

    - by dreeves
    If my (Rails) application is down or undergoing database maintenance or whatever, I'd like to specify at the nginx level to serve a static page. So every URL like http://example.com/* should serve a static html file, like /var/www/example/foo.html. Trying to specify that in my nginx config is giving me fits and infinite loops and whatnot. I'm trying things like

        location / {
            root /var/www/example;
            index foo.html;
            rewrite ^/.+$ foo.html;
        }

    How would you get every URL on your domain to serve a single static file?
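
    One sketch that avoids the rewrite loop is to rewrite everything to the one file with the break flag, so the rewritten URI is served without being re-matched against the location again (paths as in the question):

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            location / {
                # "break" stops rewrite processing here, so /foo.html is served directly
                rewrite ^ /foo.html break;
            }
        }

    An error_page-based variant (return 503 plus error_page 503 pointing at /foo.html) is another common way to do the same during maintenance.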

    Read the article

  • XAMPP MySQL and Ruby

    - by user115079
    I've installed Ruby and the XAMPP server, and now I am trying to use XAMPP's MySQL for a Ruby application. I copied the XAMPP MySQL lib (libmysql) from C:\xampp\mysql\lib to C:\Ruby192\bin (as told in some post on this forum). Now, after that, when I try to create a resource using the following command, I get an error.

    Command:

        rails generate scaffold ShortUrl url:string

    Error:

        C:/Ruby192/lib/ruby/gems/1.9.1/gems/mysql2-0.3.11-x86-mingw32/lib/mysql2/mysql2.rb:2:in `require': Incorrect MySQL client library version! This gem was compiled for 6.0.0 but the client library is 5.5.16. (RuntimeError)

    I know that there is a version issue between the Ruby MySQL client and XAMPP's MySQL. Now I need advice on the better solution: upgrade XAMPP's MySQL or downgrade the Ruby MySQL client? Personally I want to upgrade XAMPP's MySQL, but I read in some post that XAMPP's MySQL can't be upgraded. Please advise.

    Read the article

  • Apache2 Doesn't Serve Subdomain Alias

    - by Cyle Hunter
    I'm trying to prefix an existing Rails application with a sub-domain, essentially I want the sub-domain to serve the same application. Right now apache2 serves my application with "www.example.com" or "example.com". I adjusted my sites-available virtualhost in hopes of allowing for "foo.example.com" or "www.foo.example.com" however both instances are met with a domain not found error. Here is my current VirtualHost in /etc/apache2/sites-available/example.com:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias foo.example.com *.example.com www.foo.example.com www.example.com
            DocumentRoot /home/user/my_app/public
            <Directory /home/user/my_app/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    Any ideas? Note, I realized I probably don't need a wild card sub-domain for what I'm trying to do, I simply added that in as a last-ditch effort.

    Edit: The actual domain is virtualrobotgames.com with the desired subdomain being roboteer.virtualrobotgames.com

    Read the article

  • Relaying requests between third party server and Heroku for static IP

    - by Gady
    I have a Rails application hosted on Heroku that I need to integrate with a 3rd-party payments provider. The payment provider requires that my application have a static IP for incoming and outgoing HTTPS requests. I want to deploy a proxy on a Linode VPS so it can relay the information. Relaying the requests to the service provider seems easy; I just use their IP. Can I relay requests coming from the service provider to the Heroku application? Can I relay the requests using a URL (https://myapp.herokuapp.com)? What is the recommended proxy server to use?
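
    Relaying by hostname does work as long as the proxy sends the Host header Heroku expects, since Heroku's routing layer dispatches on it. A hedged nginx sketch for the Linode box - the server name and certificate paths are placeholders:

        server {
            listen 443 ssl;
            server_name static-ip.example.com;            # placeholder
            ssl_certificate     /etc/nginx/ssl/proxy.crt; # placeholder paths
            ssl_certificate_key /etc/nginx/ssl/proxy.key;

            location / {
                proxy_pass https://myapp.herokuapp.com;
                # Heroku routes on Host, so it must name the Heroku app
                proxy_set_header Host myapp.herokuapp.com;
            }
        }

    One caveat: nginx resolves the herokuapp.com name once at startup unless a resolver is configured, so a reload may be needed if Heroku's endpoint IPs change.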

    Read the article

  • Cannot upgrade to Lion using DVD

    - by James
    When I first set up my MBP on 10.6 I made a lot of newb mistakes, and was unable to get Ruby on Rails to install, amongst a lot of other things. Because of this, I decided to back everything up onto a Time Machine HDD and re-install with Lion. I had a bunch of problems but managed to sort them out, requiring me to go back to 10.6 from the original disc that came with the MBP. Before I did anything, however, I burnt the Lion installer that I got from the App Store to a DVD. Now, though, when I try to run it, I get the following: "Cannot download additional components." I've managed to download software updates, so I know it can connect to the Apple servers.

    Read the article
