Search Results

Search found 13929 results on 558 pages for 'ruby on rails plugins'.

  • Sublime Text: A package/quick and easy way to share code, make it accessible?

    - by Vennsoh
    I am just starting to use Sublime Text. So far it has been great! I am looking for a tutorial/method for a particular problem. I am working on a project, let's say on PC 1, and then once I am done with the project I want to upload the whole thing to GitHub, so that when I get home I can download the latest code and work on it again on PC 2. Is there a package available for this? Any thoughts, or any better ways of managing your projects while still keeping them accessible?
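    For illustration, the round trip described above is what a plain Git workflow already gives you, no Sublime Text package required; a minimal sketch (the repository URL is a placeholder, not from the question):

        # on PC 1, inside the project directory
        git init
        git add .
        git commit -m "work in progress"
        git remote add origin https://github.com/youruser/yourproject.git  # hypothetical URL
        git push -u origin master

        # on PC 2
        git clone https://github.com/youruser/yourproject.git
        # ...edit, commit, push... then back on PC 1:
        git pull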

    Read the article

  • certain Smarty tags don't work in OpenX templates

    - by mikez302
    I am on a team that is developing an OpenX plugin, and I am responsible for the UI. I noticed that if I use certain Smarty tags in my template, the app doesn't work and I see an error message similar to this:

        Plugin by name 'Html_select_date' was not found in the registry; used paths:
        default_views_helpers_: /openx/www/admin/plugins/myApp/application/modules/default/views/helpers/
        OX_OXP_UI_View_Helper_: /openx/www/admin/plugins/myApp/application/../library/OX/OXP/UI/View/Helper/
        OX_UI_View_Helper_: /openx/www/admin/plugins/myApp/application/../library/OX/UI/View/Helper/
        Zend_View_Helper_: Zend/View/Helper/

    The stack trace looks like this:

        #0 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(1117): Zend_Loader_PluginLoader->load('Html_select_dat...')
        #1 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(568): Zend_View_Abstract->_getPlugin('helper', 'html_select_dat...')
        #2 /openx/www/admin/plugins/myApp/library/OX/UI/Smarty/SmartyWithViewHelper.php(25): Zend_View_Abstract->getHelper('html_select_dat...')
        #3 /openx/var/templates_compiled/%2Fdefault%2Fviews%2Fscripts%2Findex%2Fview-reports.html^%%E8^E80^E80B56F2%%view-reports.html.php(38): OX_UI_Smarty_SmartyWithViewHelper->callViewHelper('html_select_dat...', Array)
        #4 /openx/lib/smarty/Smarty.class.php(1274): include('/openx...')
        #5 /openx/www/admin/plugins/myApp/library/OX/UI/View/SmartyView.php(103): Smarty->fetch('/openx...')
        #6 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(832): OX_UI_View_SmartyView->_run('/openx...')
        #7 /openx/www/admin/plugins/myApp/library/OX/UI/View/SmartyView.php(151): Zend_View_Abstract->render('index/view-repo...')
        #8 /openx/www/admin/plugins/myApp/library/OX/UI/View/Helper/WithViewScript.php(23): OX_UI_View_SmartyView->render('index/view-repo...')
        #9 /openx/www/admin/plugins/myApp/application/modules/default/views/helpers/ViewReports.php(5): OX_UI_View_Helper_WithViewScript::renderViewScript('index/view-repo...', Array)
        #10 /openx/www/admin/plugins/myApp/application/modules/default/controllers/IndexController.php(98): Default_Views_Helpers_ViewReports->renderPage()
        #11 /openx/www/admin/plugins/myApp/library/Zend/Controller/Action.php(512): IndexController->viewReportsAction()
        #12 /openx/www/admin/plugins/myApp/library/Zend/Controller/Dispatcher/Standard.php(288): Zend_Controller_Action->dispatch('viewReportsActi...')
        #13 /openx/www/admin/plugins/myApp/library/Zend/Controller/Front.php(945): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http))
        #14 /openx/www/admin/plugins/myApp/application/bootstrap.php(117): Zend_Controller_Front->dispatch()
        #15 /openx/www/admin/plugins/myApp/public/index.php(7): require('/openx...')
        #16 {main}

    This does not happen with all Smarty tags. For example, I can use {if}, {foreach}, or {assign} tags without any problems. But whenever I try to use {html_select_date}, {html_image}, or {html_table}, I get the errors. In case this matters, the programmer who is designing the plugin copied the openXWorkflow plugin and made some changes. I noticed that the openXWorkflow plugin has a file (openx/plugins_repo/openXWorkflow/www/admin/plugins/openXWorkflow/library/OX/UI/Smarty/SmartyCompilerWithViewHelper.php) with a class that overrides the default Smarty compiler, supposedly with the ability to compile shorthands for calling ZF view helpers. That file has a list of Smarty functions, but the list is incomplete.
    If I add the functions to the list, or simply delete the file, my template works fine, but I don't like changing library files: it may make the app hard to maintain, and I don't know if it will mess up something else. The file has the comment "There is no easy access to the list of Smarty's built-in functions so we need to list them here. HTML-specific functions are not included as we cover HTML generation separately.", so it seems like certain Smarty functions may be disabled on purpose. Will anything bad happen if I try to use them? If, for example, I want to use the {html_select_date} tag in my template, how would I go about doing that? Keep in mind that much of this stuff is new and unfamiliar to me. This is my first time ever using OpenX or Smarty, and I only have a little bit of experience with the Zend Framework. Please let me know if we are using the wrong approach.

    Read the article

  • Weird Facebooker Plugin & Phusion Passenger ModRails Production Error

    - by Ranknoodle
    I have an application (Rails 2.3.5) that I'm deploying to a production Linux/Apache server using the latest Phusion Passenger/Apache module (2.2.11). After deploying my original application, it returns a 500 error with nothing logged to the production log. So I created a minimal test Rails application with some ActiveRecord calls to the database to print out a list of objects via the home controller/my index page, and I cleared out all plugins. That works fine in the production environment. Then I introduced each plugin that I'm using, one at a time. Every plugin works fine EXCEPT facebooker. Every time I load the facebooker plugin into my app/vendor/plugins directory (via script, git, etc.) my test application breaks (500 error, no error logging). Every time I remove the facebooker plugin, my test application works. Has anyone seen this before / have any solutions? I saw this solution but didn't see it in the facebooker code.

    Read the article

  • How to store and access JSON data for a site?

    - by Callmeed
    I'm building an HTML/jQuery site where almost all the content comes from remote JSON data. I'm having trouble coming up with a good way to store and access the data in the future (scope-wise). Currently, I've written a jQuery plugin that gets the JSONP data when the site loads. But I have other functions and jQuery plugins that need to access this data. Where should this data be stored so other functions and plugins can access it? Should it be a global variable? If it matters, this site will only run on the iPad, and the back-end of the site is in Rails.

    Read the article

  • Telling subversion client to ignore certificate errors

    - by Pekka
    I have set up a copy of Redmine through the Bitnami Redmine Stack and am having trouble accessing a remote SVN repository through https. The trouble seems to be related to the fact that I don't have a signed certificate, and the certificate provided doesn't match the host name (I am accessing the same server through a number of host names). I am new to Ruby, Mongrel, Rails and Redmine. Following the advice in this forum thread, I changed the path Redmine uses to invoke the svn client in \apps\redmine\lib\redmine\scm\adapters\subversion_adapter.rb from

        SVN_BIN = "svn"

    to

        SVN_BIN = "svn --trust-server-cert --non-interactive --config-dir c:/user/temp"

    I was hoping that the --trust-server-cert option would fix the certificate problem. However, I am still getting the following error message in mongrel.log:

        svn: OPTIONS of 'https://server.xyz:8443/svn/reponame': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://server.xyz:8443)

    Does anybody know what to do about this? Additional info:
    - I re-started the mongrel service after each change
    - I am sure the configuration change has taken effect because subversion has created a full configuration directory in c:\user\temp
    - I can access the remote repository using command line svn no problem
    - The remote repository runs on a Windows box with VisualSVN

    Read the article

  • SSL on nginx + unicorn got "Error 102 (net::ERR_CONNECTION_REFUSED)"

    - by panggi
    I tried to deploy my app on EC2 (opened ports: 22, 80, 443).
    App: Rails 3.2.2
    Server: nginx 1.2.1, unicorn gem (latest), Ubuntu 12.04
    Deployer: Capistrano
    I tried to follow the instructions in the Railscast http://railscasts.com/episodes/335-deploying-to-a-vps (sorry, it's a Pro episode). Everything works fine over plain HTTP on port 80, but I got Error 102 after trying to use SSL. Here is the nginx.conf content:

        upstream unicorn {
          server unix:/tmp/unicorn.frontend.sock fail_timeout=0;
        }
        server {
          server_name beta.sukeru.com;
          listen 443 default;
          root /home/deployer/apps/appname/current/public;
          ssl on;
          ssl_certificate server.crt;
          ssl_certificate_key server.key;
          ssl_session_timeout 5m;
          ssl_protocols SSLv2 SSLv3 TLSv1;
          ssl_ciphers HIGH:!aNULL:!MD5;
          ssl_prefer_server_ciphers on;
          location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_redirect off;
            proxy_pass http://unicorn;
          }
          error_page 500 502 503 504 /500.html;
        }

    In production.rb I set:

        config.force_ssl = true

    Can anyone give a solution for this? :)
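    For reference, the other half of this setup is the unicorn side of the socket; a minimal unicorn configuration matching the upstream above might look like this (a sketch with assumed paths and worker count, not taken from the question):

        # config/unicorn.rb -- illustrative sketch only
        worker_processes 2
        working_directory "/home/deployer/apps/appname/current"        # app root implied by the nginx config
        listen "/tmp/unicorn.frontend.sock", backlog: 64               # must match the nginx upstream
        pid "/home/deployer/apps/appname/current/tmp/pids/unicorn.pid"
        timeout 30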

    Read the article

  • How do I ensure a process is running, even if it kills itself? (it needs to be restarted then)

    - by le_me
    I'm using Linux. I want a process (an IRC bot) to run every time I start the computer. But I've got a problem: the network is bad and it disconnects often, so I need to manually restart the bot a few times a day. How do I automate that? Additional information:
    - The bot creates a pid file, called bot.pid
    - The bot reconnects itself, but only a few times. The network is too bad, so the bot kills itself sometimes because it gets no response.
    What I do currently (aka my approach ;) ): I have a cron job executing startbot.rb every 5 minutes. (The script itself is in the same directory as the bot.) The script:

        #!/usr/bin/ruby
        require 'fileutils'

        if File.exists?(File.expand_path('tmp/bot.pid'))
          @pid = File.read(File.expand_path('tmp/bot.pid')).chomp!.to_i
          begin
            raise "ouch" if Process.kill(0, @pid) != 1
          rescue
            puts "Removing abandoned pid file"
            FileUtils.rm(File.expand_path('tmp/bot.pid'))
            puts "Starting the bot!"
            Kernel.exec(File.expand_path('./bot.rb'))
          else
            puts "Bot up and running!"
          end
        else
          puts "Starting the bot!"
          Kernel.exec(File.expand_path('./bot.rb'))
        end

    What this does: it checks if the pid file exists; if it does, it checks whether kill -s 0 BOT_PID == 1 (i.e. whether the bot is running), and it starts the bot if either of the two checks fails. My approach seems quite dirty, so how do I do it better?

    Read the article

  • Sending email with exim and external sender address

    - by Tronic
    I have the following problem: I want to send emails with a Rails webapp. I set up an exim server, and when looking into the logs the sending works, but the emails aren't really sent. I had the same problem with another ISP. The sender address is hosted on another mailserver, at another ISP. I think the problem is that sending doesn't work because the sender address isn't hosted on the same server. Do you have any advice on this? The exim logs tell me the following:

        2011-01-01 14:38:06 1PZ1eo-0000Ga-38 <= <> R=1PZ1eo-0000GY-1p U=Debian-exim P=local S=1778
        2011-01-01 14:38:08 1PZ1eo-0000Ga-38 => [email protected] R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.131] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
        2011-01-01 14:38:08 1PZ1eo-0000Ga-38 Completed

    [email protected] is the external sender address! Thank you!
    Edit with more details: when sending a mail from the command line with

        echo "Test" | mail -s Testmail [email protected]

    the log says

        2011-01-01 20:45:24 1PZ7OG-0001Vp-Rx <= root@gustav U=root P=local S=360
        2011-01-01 20:45:26 1PZ7OG-0001Vp-Rx => [email protected] R=dnslookup T=remote_smtp H=gmail-smtp-in.l.google.com [209.85.229.27] X=TLS1.0:RSA_ARCFOUR_MD5:16 DN="C=US,ST=California,L=Mountain View,O=Google Inc,CN=mx.google.com"
        2011-01-01 20:45:26 1PZ7OG-0001Vp-Rx Completed

    and I get the mail on my Gmail account. But when sending via the webapp (when testing locally with sendmail it works fine), I only get this log output:

        2011-01-01 20:50:08 1PZ7Sq-0001X9-L4 <= <> R=1PZ7Sq-0001X7-Jo U=Debian-exim P=local S=1780
        2011-01-01 20:50:11 1PZ7Sq-0001X9-L4 => [email protected] R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.3] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
        2011-01-01 20:50:11 1PZ7Sq-0001X9-L4 Completed
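    For context, a Rails app usually hands mail to the local exim over SMTP or sendmail; a minimal ActionMailer setup for that (a sketch with assumed values, not taken from the question) could be:

        # config/environments/production.rb -- illustrative sketch only
        config.action_mailer.delivery_method = :smtp
        config.action_mailer.smtp_settings = {
          address: "localhost",   # hand off to the local exim instance
          port:    25,
          domain:  "example.com"  # hypothetical HELO domain
        }

    Note that the "<= <>" in the failing log lines denotes an empty envelope sender, which is characteristic of a bounce message rather than the original delivery.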

    Read the article

  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days. The website is a Rails app using thin as the webserver. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:

        upstream domain1 {
          least_conn;
          server 127.0.0.1:3009;
          server 127.0.0.1:3010;
          server 127.0.0.1:3011;
        }
        server {
          listen 80; # default_server;
          server_name xyz.com *.xyz.com;
          client_max_body_size 5M;
          access_log /home/ubuntu/www/xyz/current/log/access.log;
          root /home/ubuntu/www/xyz/current/public/;
          index index.html;
          location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_read_timeout 150;
            if (!-f $request_filename) {
              proxy_pass http://domain1;
              break;
            }
          }
        }

    Read the article

  • Apache local versus external (domain)

    - by Jessy Houle
    I have an Apache server running on Ubuntu Server 10, using Passenger for Ruby on Rails. I have configured my site under Apache's sites-enabled directory and can hit the server at an internal IP address (192.168.X.X), and the site comes back as expected. However, whenever I try to hit the site externally, either through the domain name or the IP address tied to the domain name, the site will not come back. I have a router in the middle with a static IP address, with port forwarding (80/443) turned on to the server, and I'm quite confident the issue isn't there. In fact, I even DMZed to the Ubuntu server just to make sure. Also, all router firewall options have been turned off. So here is the question... Is there something else I have to do with Ubuntu Server to allow externally requested port 80 traffic? Otherwise, are there settings that need to be set in Apache to allow domain or external IP address port 80 traffic through? I'm pretty new to Apache, so please take it a bit easy on me :-) Thank you for your responses. -Jessy Houle

    Read the article

  • unicorn and nginx, went wrong

    - by achempion
    I try to deploy my app via Capistrano. The deploy completes, but when I start nginx and open my site in the browser I see 'We're sorry, but something went wrong.' It is bad. I use unicorn. See my configs: https://gist.github.com/3904032 I tried to start the server via rails s -e production and it works! I think this error may occur because I can't restart the server:

        root@li272-194:~# /etc/init.d/nginx restart
        Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        configuration file /etc/nginx/nginx.conf test is successful
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: still could not bind() nginx.

    Any ideas? nginx log:

        2012/10/17 02:57:41 [error] 3271#0: *1 could not find named location "@myapp", client: 91.192.62.77, server: 178.79.153.194, request: "GET / HTTP/1.1", host: "178.79.153.194"
        2012/10/17 02:19:08 [crit] 2448#0: *8 connect() to unix:/srv/zarcon/shared/unicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 91.192.62.77, server: zarkon, request: "GET / HTTP/1.1", upstream: "http://unix:/srv/zarcon/shared/unicorn.sock:/", host: "178.79.153.194"

    Read the article

  • Web service for checking out / leasing a token

    - by JP Slavinsky
    I run a web site on AWS that has a number of web servers (say 4 of them) running behind a load balancer. For this particular web site, I have one license key of New Relic for doing instrumentation. At any one time, I only want one of the 4 web servers to be using the key. If that server goes offline, I want one of the remaining web servers to be able to begin using the license key. Does anyone know of a service that would let me manage this process? The service would not particularly need to store the key itself but rather just manage the fact that only one web server can lease out the right to use the key at any time. Something where the web servers would have to come back every few minutes and renew their lease, and if they don't it becomes available to someone else. I just realized I could maybe accomplish a hacked version of this using a file on S3, but that doesn't prevent race conditions / etc and is definitely hackish. Any thoughts welcome. FWIW, this site is built on Ruby on Rails. Thanks! JP
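    The lease protocol described here (atomic acquire, periodic renewal, automatic expiry) can be sketched in a few lines with an atomic set-if-absent plus a TTL. A minimal Ruby illustration using the redis gem follows; Redis as the coordination store is an assumption, not part of the question, and SET with NX/EX needs Redis 2.6.12+:

        require "redis"

        LEASE_KEY = "newrelic:license:lease"  # hypothetical key name
        LEASE_TTL = 300                       # seconds before an unrenewed lease expires

        # Try to take the lease: SET ... NX EX is atomic, so only one server wins.
        def acquire_lease(redis, host)
          redis.set(LEASE_KEY, host, nx: true, ex: LEASE_TTL)
        end

        # The current holder renews on a timer; every other server keeps retrying
        # acquire_lease and picks the lease up once the TTL lapses.
        def renew_lease(redis, host)
          redis.expire(LEASE_KEY, LEASE_TTL) if redis.get(LEASE_KEY) == host
        end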

    Read the article

  • MySQL InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products that I need to apply a bunch of updates to really quickly. The table has 30 columns with the id set as int(10) AUTO_INCREMENT. I have another table which all of the updates for this table are stored in; these updates have to be pre-calculated as they take a couple of days to calculate. This table is in the format of [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of these updates into memory in a ruby script and group them in a hash of arrays so that each update_value is a key and each array is a list of sorted product_id's:

        { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }

    Updates are then issued in the format of

        UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
        UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure I'm doing this correctly in the sense that issuing the updates on sorted batches of product_id's should be the optimal way to do it with mysql / innodb. I'm hitting a weird issue though: when I was testing with updating ~13 million records, this only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree that I'm seeing. Any advice on how I can speed this up, or what could be slowing me down with this larger record set? As far as server specs go they're pretty good: heaps of memory / cpu, and the whole DB should fit into memory with plenty of room to grow.
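    A compact sketch of the grouping-and-batching step described above (table and column names are from the question; the mysql2 gem and the connection details are assumptions):

        require "mysql2"  # assumed client library

        client = Mysql2::Client.new(host: "localhost", username: "user", database: "db")  # hypothetical credentials

        # updates: update_value => sorted list of product ids, as described above
        updates = { 150 => [1, 2, 3, 4, 5, 6], 160 => [7, 8, 9, 10] }

        updates.each do |value, ids|
          # one UPDATE per value, chunked so the IN() list stays a manageable size
          ids.each_slice(10_000) do |chunk|
            client.query("UPDATE product SET update_value = #{value.to_i} " \
                         "WHERE product_id IN (#{chunk.join(',')})")
          end
        end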

    Read the article

  • Using Monit to monitor Resque

    - by Alex
    I'm trying to use Resque as a job runner for Rails. I've tried this config, and many other ways of daemonizing the resque task (because running rake resque:work leaves the terminal tied to that command). Unfortunately, their example configuration doesn't work for me. Does the configuration look correct, or is there another way to turn the process into a daemon? Thank you :)

        check process resque_worker_QUEUE
          with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_QUEUE.pid
          start program = "/bin/sh -c 'cd /data/APP_NAME/current; RAILS_ENV=production QUEUE=queue_name VERBOSE=1 nohup rake environment resque:work& > log/resque_worker_QUEUE.log && echo $! > tmp/pids/resque_worker_QUEUE.pid'"
            as uid deploy and gid deploy
          stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -s QUIT `cat tmp/pids/resque_worker_QUEUE.pid` && rm -f tmp/pids/resque_worker_QUEUE.pid; exit 0;'"
          if totalmem is greater than 300 MB for 10 cycles then restart # eating up memory?

    Read the article

  • emacs setup for web development

    - by fig-gnuton
    What should a complete Emacs setup have if you want to use it for web development? This SO question is about using Emacs for Rails; however, it mostly leaves out things like HTML, CSS, and JavaScript. This Emacs wiki page has suggestions for emulating aspects of TextMate, but isn't specific to web development.

    Read the article

  • Changing commenting behavior for Ruby in Aptana Studio

    - by Flethuseo
    There is something I really hate in Aptana Studio 3 when I am using Ruby. When I try to use Ctrl+Shift+/, it inserts a comment of this form:

        =begin
        My lines of code
        My lines of code
        My lines of code
        My lines of code
        My lines of code
        =end

    I would like Ctrl+Shift+/ to default to toggling line comments with '# ' instead. I have gone to the key preferences and tried changing PyDev toggle comment to Ctrl+Shift+/, but it doesn't work. It must be picking that behavior up from somewhere else. What do I need to change so that the IDE behaves like I want? Ted.

    Read the article

  • Perl program for extracting only the functions from a Ruby file

    - by thillaiselvan
    Hi all, I have the following Ruby program:

        puts "hai"

        def mult(a,b)
          a * b
        end

        puts "hello"

        def getCostAndMpg
          cost = 30000 # some fancy db calls go here
          mpg = 30
          return cost,mpg
        end

        AltimaCost, AltimaMpg = getCostAndMpg
        puts "AltimaCost = #{AltimaCost}, AltimaMpg = #{AltimaMpg}"

    I have written a Perl script which extracts just the functions from a Ruby file, as follows:

        while (<DATA>){
            print if ( /def/ .. /end/ );
        }

    Here <DATA> is reading from the Ruby file, so the Perl program produces the following output:

        def mult(a,b)
          a * b
        end
        def getCostAndMpg
          cost = 30000 # some fancy db calls go here
          mpg = 30
          return cost,mpg
        end

    But if the function contains a nested block of statements, say an if condition, it does not work: the range match only takes up to the "end" of the "if" block, not up to the "end" of the function. Example:

        def function
          if x > 2
            puts "x is greater than 2"
          elsif x <= 2 and x!=0
            puts "x is 1"
          else
            puts "I can't guess the number"
          end # ----- my code parses only up to this "end"
        end

    Kindly provide solutions. Thanks in advance!
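    Since a flat /def/ .. /end/ range match cannot see nesting, one fix is to count block openers and closers explicitly. A small illustrative sketch follows, written in Ruby rather than Perl and deliberately naive: it only recognizes keyword-opened blocks at the start of a line, where a real parser (e.g. Ruby's built-in Ripper) would be more robust.

        # Depth-tracking extractor: collects def ... end including nested blocks.
        def extract_defs(source)
          defs, current, depth = [], nil, 0
          source.each_line do |line|
            if current.nil?
              next unless line =~ /^\s*def\b/
              current, depth = [line], 1
            else
              current << line
              depth += 1 if line =~ /^\s*(def|if|unless|while|until|case|begin)\b/
              depth -= 1 if line =~ /^\s*end\b/
              if depth.zero?
                defs << current.join
                current = nil
              end
            end
          end
          defs
        end

    Calling extract_defs(File.read("input.rb")) on the example above would return the complete definition, including the nested if/elsif/else block and both "end"s.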

    Read the article

  • Nginx with Passenger setup problems

    - by Kreeki
    I'm trying to set up the nginx webserver with Passenger support for a Ruby on Rails application on Ubuntu 10.04 (on a sub-URI). All went fine until I tried to access the server/application from the browser. My installation of nginx is in /opt/nginx.

        # my nginx.conf
        server {
            listen 80;
            server_name www.mydomain.com;
            root /websites/site/public;
            passenger_enabled on;
            passenger_base_uri /site;
            location / { # added by default, I don't know if it's supposed to be here or not
                root html;
                index index.html index.htm;
            }
        }

    Then I started the server. But when I put www.mydomain.com/site in the browser, I get a 404 Not Found error. Error.log shows this:

        2011/03/04 10:07:07 [error] 21387#0: *2 open() "/opt/nginx/html/favicon.ico" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71", referrer: "http://80.79.23.71/"
        2011/03/04 10:07:07 [error] 21387#0: *2 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71", referrer: "http://80.79.23.71/"
        2011/03/04 10:07:11 [error] 21387#0: *4 open() "/opt/nginx/html/favicon.ico" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71:80", referrer: "http://80.79.23.71:80/"
        2011/03/04 10:07:11 [error] 21387#0: *4 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71:80", referrer: "http://80.79.23.71:80/"
        2011/03/04 10:07:13 [error] 21387#0: *5 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:13 [error] 21387#0: *5 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:15 [error] 21387#0: *6 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:15 [error] 21387#0: *6 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:19 [error] 21387#0: *7 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:19 [error] 21387#0: *7 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"

    Why does nginx look for site in /opt/nginx/html/site, as the log shows, when there's another path set in nginx.conf? Any idea what's wrong with my setup?

    Read the article

  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace cloud hosts and RHEL 6.2 dedicated hosts using RackConnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss. Also there are no errors on any interfaces on our cloud or dedicated servers, or on the managed firewall from Rackspace. When Redis times out, nothing is logged within Redis even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout.

    Network topology:

        RHEL 6.1 cloud hosts <--> Alert Logic IDS <--> Cisco ASA 5510 <--> RHEL 6.2 dedicated hosts
             (web nodes)                               (two-way NAT)       (db hosts running redis)

    Ping from db host to web host:

        64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms
        64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms
        64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms
        --- web1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms
        rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms

    Ping from web host to db host:

        64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms
        64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms
        64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms
        --- data1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms
        rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms

    Redis config:

        daemonize yes
        pidfile /var/run/redis/6379/redis_6379.pid
        port 6379
        timeout 0
        loglevel debug
        logfile /var/lib/redis/log
        syslog-enabled yes
        syslog-ident redis-6379
        syslog-facility local0
        databases 16
        save 900 1
        save 300 10
        save 60 10000
        rdbcompression yes
        dbfilename dump-6379.rdb
        dir /var/lib/redis
        maxclients 10000
        maxmemory-policy volatile-lru
        maxmemory-samples 3
        appendfilename appendonly-6379.aof
        appendfsync everysec
        no-appendfsync-on-rewrite no
        auto-aof-rewrite-percentage 100
        auto-aof-rewrite-min-size 64mb
        slowlog-log-slower-than 10000
        slowlog-max-len 1024
        vm-enabled no
        vm-swap-file /tmp/redis.swap
        vm-max-memory 0
        vm-page-size 32
        vm-pages 134217728
        vm-max-threads 4
        hash-max-zipmap-entries 512
        hash-max-zipmap-value 64
        list-max-ziplist-entries 512
        list-max-ziplist-value 64
        set-max-intset-entries 512
        zset-max-ziplist-entries 128
        zset-max-ziplist-value 64
        activerehashing yes

    redis-cli info:

        redis_version:2.4.16
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:64
        multiplexing_api:epoll
        gcc_version:4.4.6
        process_id:4174
        uptime_in_seconds:79346
        uptime_in_days:0
        lru_clock:1064644
        used_cpu_sys:13.08
        used_cpu_user:19.81
        used_cpu_sys_children:1.56
        used_cpu_user_children:7.69
        connected_clients:167
        connected_slaves:0
        client_longest_output_list:0
        client_biggest_input_buf:0
        blocked_clients:6
        used_memory:15060312
        used_memory_human:14.36M
        used_memory_rss:22061056
        used_memory_peak:15265928
        used_memory_peak_human:14.56M
        mem_fragmentation_ratio:1.46
        mem_allocator:jemalloc-3.0.0
        loading:0
        aof_enabled:0
        changes_since_last_save:166
        bgsave_in_progress:0
        last_save_time:1352823542
        bgrewriteaof_in_progress:0
        total_connections_received:286
        total_commands_processed:507254
        expired_keys:0
        evicted_keys:0
        keyspace_hits:1509
        keyspace_misses:65167
        pubsub_channels:0
        pubsub_patterns:0
        latest_fork_usec:690
        vm_enabled:0
        role:master
        db0:keys=6,expires=0

    Edit 1: added redis-cli info output.
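    For what it's worth, the timeout that surfaces in Airbrake is usually raised client-side by the redis gem, so it can help to make that setting explicit; a hedged sketch (host and timeout values are illustrative, not from the question):

        # config/initializers/redis.rb -- illustrative sketch only
        require "redis"

        $redis = Redis.new(
          host:    "192.168.100.26",  # the dedicated db host from the ping output above
          port:    6379,
          timeout: 5.0                # explicit client-side timeout; raise it to tolerate brief stalls
        )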

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processign application that runs on EC2. So far the setup is one machine: Backbonejs frontend Rails 3.2 Postgresql Resque + S3 for storage The flow of the app is as follows: 1) Request from frontend. Upload a video. 2) Storing video 3) Quering external APIs. 4) Processing / encoding videos. 5) Post to frontend. I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps making several instances), but since I don't really have expertise in backend system administration, there can be some fundamental mistakes.. Also I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan: A) Frontend machine. Just frontend, talks to backend via REST Api of sorts. B) Backend server (BS), main database. Gets request from 1), posts to 2) saves uploads to 3) C) S3 storage. D) Server for quering APIs. Basically just a Resque workers, that post info back to 2) E) Server for video encoding. Processes videos uploaded on 3) and uploads them back. So I will have: A)frontend \ \ B)MAIN_APP/DB ----- C)S3 Storage (Files) / \ / / \ / D)ExternalAPI_queries E)Video_Processing (redundant DB) (redundant DB) All this will supposedly talk to each other via HTTP requests. My reason for this is that Video Processing part is really the most resource-intensive and I would just run barebones application that accepts requests and starts processing them. Questions: 1) In this setup I will have the main database at B) and all other servers will communicate with it via HTTP requests (and store duplicates of databases also I guess..for safety reasons). Is it the right approach or should I have 1 database that everyone connects to (how then?) 2) Is it a good idea to separate API queries from Video Processing part? Logically they are very close (processing is determined by the result of API queries), but resource-wise Video Processing is waaay more intensive. 3) what should I use to distribute calls between backend apps based on load?

    Read the article
