Search Results

Search found 57 results on 3 pages for 'artem shnayder'.


  • Cucumber can't find installed gems

    - by artemave
    environment/cucumber.rb:

        ... # gem dependencies
        config.gem 'cucumber-rails', :lib => false, :version => '>=0.3.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'database_cleaner', :lib => false, :version => '>=0.5.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'webrat', :lib => false, :version => '>=0.7.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'spork', :lib => false, :version => '>=0.7.5' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'factory_girl', :source => 'http://gemcutter.org'
        config.gem 'selenium-client', :lib => false
        config.gem 'Selenium', :lib => false
        config.gem 'rspec', :lib => 'spec'
        config.gem 'rspec-rails', :lib => 'spec/rails'
        config.gem 'test-unit', :lib => false

    Running cucumber gives missing gems error:

        artem:~/projects/food4feed (master)$ cucumber
        ... no such file to load -- test-unit
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/gem_dependency.rb:208:in `load'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `block in load_gems'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `each'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `load_gems'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:169:in `process'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
        /home/artem/projects/food4feed/config/environment.rb:9:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/projects/food4feed/features/support/env.rb:12:in `block in <top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/spork-0.8.1/lib/spork.rb:23:in `prefork'
        /home/artem/projects/food4feed/features/support/env.rb:10:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:85:in `load_code_file'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:77:in `block in load_code_files'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `each'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `load_code_files'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:48:in `execute!'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:20:in `execute'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/bin/cucumber:8:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `load'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `<main>'
        Missing these required gems:
          selenium-client
          Selenium
          rspec-rails
          test-unit
        You're running:
          ruby 1.9.1.378 at /home/artem/.rvm/rubies/ruby-1.9.1-p378/bin/ruby
          rubygems 1.3.5 at /home/artem/.rvm/gems/ruby-1.9.1-p378, /home/artem/.rvm/gems/ruby-1.9.1-p378@global

    All gems are obviously there:

        artem:~/projects/food4feed (master)$ gem list | egrep "elenium|rspec|test-unit"
        rspec (1.3.0)
        rspec-rails (1.3.2)
        Selenium (1.1.14)
        selenium-client (1.2.18)
        test-unit (2.0.7)

    The even more confusing part is that it only complains about certain gems. factory_girl and rspec don't cause problems. Any idea what is going on?

    My environment:

        Rails 2.3.5
        cucumber (0.6.3)
        cucumber-rails (0.3.0)

    Read the article

  • ngModel and component with isolated scope

    - by Artem Andreev
    I am creating a simple ui-datetime directive. It splits a JavaScript Date object into _date, _hours and _minutes parts. _date uses the jQuery UI datepicker; _hours and _minutes are number inputs. See the example: http://jsfiddle.net/andreev_artem/nWsZp/3/ On GitHub: https://github.com/andreev-artem/angular_experiments/tree/master/ui-datetime As far as I understand, the best practice when creating a new component is to use an isolated scope. When I tried to use an isolated scope, nothing worked: ngModel.$viewValue === undefined. When I tried to use a new (non-isolated) scope - my example, not such a good variant imho - ngModel used the value on the newly created scope. Of course I can create a directive with an isolated scope and work with the ngModel value through an "=expression" binding (example), but I think that working with ngModelController is the better practice. My questions: Can I use ngModelController with an isolated scope? If it is not possible, which solution is better for creating such a component?

    Read the article

  • The cost of passing by shared_ptr

    - by Artem
    I use std::tr1::shared_ptr extensively throughout my application. This includes passing objects in as function arguments. Consider the following:

        class Dataset {...}
        void f( shared_ptr< Dataset const > pds ) {...}
        void g( shared_ptr< Dataset const > pds ) {...}
        ...

    While passing a dataset object around via shared_ptr guarantees its existence inside f and g, the functions may be called millions of times, which causes a lot of shared_ptr objects being created and destroyed. Here's a snippet of the flat gprof profile from a recent run:

        Each sample counts as 0.01 seconds.
          %   cumulative    self               self     total
         time    seconds   seconds     calls  s/call   s/call  name
         9.74     295.39     35.12 2451177304    0.00     0.00  std::tr1::__shared_count::__shared_count(std::tr1::__shared_count const&)
         8.03     324.34     28.95 2451252116    0.00     0.00  std::tr1::__shared_count::~__shared_count()

    So, ~17% of the runtime was spent on reference counting with shared_ptr objects. Is this normal? A large portion of my application is single-threaded and I was thinking about re-writing some of the functions as

        void f( const Dataset& ds ) {...}

    and replacing the calls

        shared_ptr< Dataset > pds( new Dataset(...) );
        f( pds );

    with

        f( *pds );

    in places where I know for sure the object will not get destroyed while the flow of the program is inside f(). But before I run off to change a bunch of function signatures / calls, I wanted to know what the typical performance hit of passing by shared_ptr was. Seems like shared_ptr should not be used for functions that get called very often. Any input would be appreciated. Thanks for reading. -Artem
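
    For reference, a minimal sketch of the three calling conventions being weighed here. This is an illustration, not code from the project: the Dataset stub and function names are placeholders, and it is written against std::shared_ptr (std::tr1::shared_ptr behaves the same way for this comparison):

        #include <cstdio>
        #include <memory>

        struct Dataset { int values[1024]; };   // stand-in for the real class

        // by value: the shared_ptr is copied, costing a reference-count
        // increment on entry and a decrement on exit, on every call
        void f_by_value(std::shared_ptr<Dataset const> pds) { std::printf("%d\n", pds->values[0]); }

        // by const reference to the object: no ownership is taken, so there is
        // no reference-count traffic at all; the caller must keep the object alive
        void f_by_ref(Dataset const& ds) { std::printf("%d\n", ds.values[0]); }

        // by const reference to the shared_ptr: keeps the smart pointer in the
        // signature but still avoids the copy
        void f_by_cref(std::shared_ptr<Dataset const> const& pds) { std::printf("%d\n", pds->values[0]); }

        int main() {
            std::shared_ptr<Dataset const> pds = std::make_shared<Dataset>();
            f_by_value(pds);   // refcount bumped and dropped once per call
            f_by_ref(*pds);    // cheapest option when the caller guarantees lifetime
            f_by_cref(pds);
            return 0;
        }

    Profiling these variants under the same gprof setup would show directly whether the __shared_count traffic disappears for the call sites that matter.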

    Read the article

  • LAMP Stack Location

    - by Artem Moskalev
    I have installed the LAMP stack on Ubuntu 12.04 LTS using the tasksel command. I checked - it works. But I can't find the location of the installation. I worked with WAMP, where you have a separate folder for Apache, for PHP and for MySQL. Now I can't even find where to put the documents I create. Which folder is used to contain my web projects? How do I start the MySQL console, and where do I look for its installation directory? Which directories are PHP and Apache installed in? How do I erase the LAMP stack? I found out that some parts of the stack are installed under the /var and /etc directories. How can I install the whole LAMP stack in /home?
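
    A few places worth checking on a stock Ubuntu 12.04 tasksel LAMP install - these are the distribution defaults, so they are a reasonable guess rather than anything specific to this machine:

        /var/www                      # Apache's default document root - web projects go here
        /etc/apache2                  # Apache configuration (apache2.conf, sites-available/)
        /etc/php5/apache2/php.ini     # PHP configuration as loaded by Apache
        /etc/mysql/my.cnf             # MySQL server configuration
        /var/lib/mysql                # MySQL data files

        mysql -u root -p              # start the MySQL console (password chosen during install)
        dpkg -L apache2 | less        # list every file a given package installed
        sudo apt-get purge apache2 mysql-server php5   # remove packages together with their config files

    Relocating the whole stack into /home is not something the Ubuntu packages are designed for; the usual approach is to leave the binaries where they are and point Apache's DocumentRoot at a directory under /home instead.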

    Read the article

  • Windows driver signing

    - by Artem Smolny
    My company is developing a driver for our hardware. Now I need to sign the driver for 32- and 64-bit platforms. Please tell me: do I need to buy an Authenticode certificate? Which CA should I use? DigiCert? GlobalSign? ( http://www.sslshopper.com/microsoft-authenticode-certificates.html ) Symantec? ( http://www.symantec.com/verisign/code-signing/microsoft-authenticode ) What is the difference between these CAs' offers? Do I need to use tools from the WDK?
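
    For orientation, a sketch of the signing step as it is usually done with the WDK/SDK tools once a code-signing certificate is in hand - the file names, password and timestamp URL below are placeholders:

        rem create a catalog for the driver package (Inf2Cat ships with the WDK)
        Inf2Cat /driver:C:\MyDriver\ /os:7_X86,7_X64

        rem sign the catalog and the .sys with the purchased certificate,
        rem the issuing CA's cross-certificate, and a timestamp
        signtool sign /v /ac CrossCertOfCA.cer /f MyCompanyCert.pfx /p password ^
            /t http://timestamp.digicert.com MyDriver.cat MyDriver.sys

    For 64-bit kernel-mode code, the main thing to check with whichever CA you choose is that its certificates chain to one of the root CAs Microsoft publishes a cross-certificate for.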

    Read the article

  • How to stop camera from rotating in 2.5d platformer

    - by Artem Suchkov
    I'm stuck with a problem: I can not make my camera stop rotating after the character. What I have already tried: using an empty game object with a rigidbody and locked rotation, making it the parent of the camera while the player is the parent of that object. I've also tried a few scripts from the web, but that did not help. Right now I'm bad at using JS in Unity (I can handle JS on a website, but I don't know how to integrate it yet) and am still practicing the basics, making an easy 2.5D platformer with basic features, so I can not write the code myself for now.

    Read the article

  • mysql does not start or work

    - by Artem Moskalev
    Recently I installed LAMP with tasksel. Then, I remember, I issued some commands to get into the MySQL console - it worked. Right now I checked: the Apache and PHP modules work perfectly. But as for MySQL, whatever commands I issue, it does not start the console. It writes: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' How can I fix it and start MySQL? Why did this happen? Where is it installed (I used the default location for the installation)? I don't understand what is started when I issue the commands.
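
    That socket error usually just means mysqld is not running. A few commands that help confirm this on a stock Ubuntu install (standard package locations, nothing specific to this machine):

        sudo service mysql status                   # is the daemon running?
        sudo service mysql start                    # try to start it
        sudo tail -n 50 /var/log/mysql/error.log    # why it refused to start, if it did refuse
        ls /var/lib/mysql                           # the data directory of the default install
        grep socket /etc/mysql/my.cnf               # where the client expects /var/run/mysqld/mysqld.sock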

    Read the article

  • System in low-graphics mode

    - by Artem Moskalev
    I am totally new to Linux and Ubuntu. I bought an ASUS X53E notebook, erased Windows and installed Ubuntu. At first it worked fine. Then, when I started working with it, I opened the terminal, entered sudo chmod 666 usr, and then all the icons on the main panel disappeared and the whole system stopped responding. I decided to restart the system. When it restarted, a message appeared: the system is running in low-graphics mode, and below it: Your screen, graphics card and input device settings could not be detected correctly. You will need to configure them yourself. But the "OK" button is disabled, and if I press any buttons nothing happens. If I press Ctrl-Alt-F2 it opens the bash terminal, but there the sudo and apt-get commands are not found, and it says permission denied if I try to enter any folder, like cd /usr. If I enter the su command it asks for a password I don't know. When I encountered this problem the first time, I reinstalled the whole of Ubuntu, but today it happened again, just the same. What shall I do? Maybe there is something wrong with the hardware? If I need to install another distribution of Linux, could you recommend one? I'd rather stick to Debian-based releases like Ubuntu. So how do I fix the problem? PS: Please give answers in simple terms, because I am a newbie and don't know what goes where yet.
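
    If that chmod ended up applying to /usr itself (which would explain sudo, apt-get and cd /usr all failing at once: the execute bit was stripped from the directory), one commonly suggested recovery path is a root shell from recovery mode. A sketch, meant as a starting point rather than a guaranteed fix:

        # hold Shift at boot, pick "recovery mode" in the GRUB menu, then "Drop to root shell prompt"
        mount -o remount,rw /        # the recovery shell mounts the root filesystem read-only at first
        chmod 755 /usr               # restore the normal permissions on the directory itself
        reboot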

    Read the article

  • Does .net calling COM which in turn calls other .net COM object works when using SxS and manifest fi

    - by Alex Shnayder
    I have a .NET application calling into a COM component (C++) which in turn calls another COM object implemented in .NET. This application uses Windows SxS capabilities and does not register any of its COM components - not the one written in C++, and not the one written in .NET. The first call, to the C++ COM component, works fine. But when the C++ COM component calls the .NET one, it fails with "class not registered". I have tried creating a small C++ app with a manifest file which calls the .NET component, and it works. It seems that when the flow is .NET - native COM - .NET COM, SxS breaks and does not work. When looking at the Fusion logs (assembly loading logs) I see that nothing is even attempting to resolve the .NET COM assembly. Is this SxS scenario even supposed to work (I think it is supposed to work)? If yes, what could I be doing wrong?
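
    A rough sketch of the piece that is often missing in this kind of chain: the manifest that is active when the native C++ component calls CoCreateInstance must itself declare a dependency on the managed COM assembly, otherwise no activation context covers that inner call. The assembly names and versions below are placeholders:

        <!-- manifest for the native COM DLL (or merged into the host EXE's manifest) -->
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity type="win32" name="Native.ComComponent" version="1.0.0.0" />
          <dependency>
            <dependentAssembly>
              <!-- points at the registration-free manifest of the .NET COM assembly,
                   i.e. the one carrying its <clrClass> entries -->
              <assemblyIdentity type="win32" name="Managed.ComAssembly" version="1.0.0.0" />
            </dependentAssembly>
          </dependency>
        </assembly>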

    Read the article

  • Order objects for Northwind Access database

    - by Artem Shnayder
    I need to build two objects: an OrderList and an Order. Using those two objects, I have to populate a DataGridView with a history of the orders. However, I am instructed not to use binding sources for the connection or other drag and drop controls. Unfortunately, from Google it seems like those are the most popular options for this type of problem. Can anyone point me in the right direction? I don't have much experience with C#. Thanks.

    Read the article

  • Hot deploying with Tomcat Manager fails because file already exists

    - by Artem
    A Tomcat beginner question that I hope will help many. Could someone explain how Tomcat hot deploy is supposed to work? We have a currently deployed 'TomCatTest', and we want to fix a small bug in 'TomCatTest' with no downtime for users. We are using the Tomcat Manager console and just trying to upload a file there. We must be making a stupid error, but we see: 'FAIL - War file "TomCatTest.war" already exists on server' There are many, many posts suggesting this works somehow: http://serverfault.com/questions/120706/replace-single-file-on-tomcat-deployed-war http://tomcat.apache.org/tomcat-6.0-doc/config/host.html#Automatic%20Application%20Deployment For the life of me, I can't figure out this simple problem. Could you help, please?
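
    One thing the Tomcat 6 manager supports (which the HTML upload form does not expose) is redeploying over an existing path by passing update=true. A sketch using curl, with host and credentials as placeholders:

        # upload the new WAR over HTTP PUT and replace the running application in place
        curl -u admin:secret -T TomCatTest.war \
            "http://localhost:8080/manager/deploy?path=/TomCatTest&update=true"

        # or undeploy first (brief downtime), then deploy through the web UI as before
        curl -u admin:secret "http://localhost:8080/manager/undeploy?path=/TomCatTest"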

    Read the article

  • nginx: How can I set proxy_* directives only for matching URIs?

    - by Artem Russakovskii
    I've been at this for hours and I can't figure out a clean solution. Basically, I have an nginx proxy setup, which works really well, but I'd like to handle a few urls more manually. Specifically, there are 2-3 locations for which I'd like to set proxy_ignore_headers to Set-Cookie to force nginx to cache them (nginx doesn't cache responses with Set-Cookie as per http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers). So for these locations, all I'd like to do is set proxy_ignore_headers Set-Cookie;

    I've tried everything I could think of outside of setting up and duplicating every config value, but nothing works. I tried:

    - Nesting location directives, hoping the inner location which matches on my files would just set this value and inherit the rest, but that wasn't the case - it seemed to ignore anything set in the outer location, most notably proxy_pass and I end up with a 404).
    - Specifying the proxy_cache_valid directive in an if block that matches on $request_uri, but nginx complains that it's not allowed ("proxy_cache_valid" directive is not allowed here).
    - Specifying a variable equal to "Set-Cookie" in an if block, and then trying to set proxy_cache_valid to that variable later, but nginx isn't allowing variables for this case and throws up.

    It should be so simple - modifying/appending a single directive for some requests, and yet I haven't been able to make nginx do that. What am I missing here? Is there at least a way to wrap common directives in a reusable block and have multiple location blocks refer to it, after adding their own unique bits? Thank you.

    Just for reference, the main location / block is included below, together with my failed proxy_ignore_headers directive for a specific URI.

        location / {
            # Setup var defaults
            set $no_cache "";

            # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
            if ($request_method !~ ^(GET|HEAD)$) {
                set $no_cache "1";
            }

            if ($http_user_agent ~* '(iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry|webos|s8000|bada)') {
                set $mobile_request '1';
                set $no_cache "1";
            }

            # feed crawlers, don't want these to get stuck with a cached version,
            # especially if it caches a 302 back to themselves (infinite loop)
            if ($http_user_agent ~* '(FeedBurner|FeedValidator|MediafedMetrics)') {
                set $no_cache "1";
            }

            # Drop no cache cookie if need be
            # (for some reason, add_header fails if included in prior if-block)
            if ($no_cache = "1") {
                add_header Set-Cookie "_mcnc=1; Max-Age=17; Path=/";
                add_header X-Microcachable "0";
            }

            # Bypass cache if no-cache cookie is set, these are absolutely critical
            # for Wordpress installations that don't use JS comments
            if ($http_cookie ~* "(_mcnc|comment_author_|wordpress_(?!test_cookie)|wp-postpass_)") {
                set $no_cache "1";
            }

            if ($request_uri ~* wpsf-(img|js)\.php) {
                proxy_ignore_headers Set-Cookie;
            }

            # Bypass cache if flag is set
            proxy_no_cache $no_cache;
            proxy_cache_bypass $no_cache;

            # under no circumstances should there ever be a retry of a POST request,
            # or any other request for that matter
            proxy_next_upstream off;
            proxy_read_timeout 86400s;

            # Point nginx to the real app/web server
            proxy_pass http://localhost;

            # Set cache zone
            proxy_cache microcache;

            # Set cache key to include identifying components
            proxy_cache_key $scheme$host$request_method$request_uri$mobile_request;

            # Only cache valid HTTP 200 responses for this long
            proxy_cache_valid 200 15s;
            #proxy_cache_min_uses 3;

            # Serve from cache if currently refreshing
            proxy_cache_use_stale updating timeout;

            # Send appropriate headers through
            proxy_set_header Host $host; # no need for this
            proxy_set_header X-Real-IP $remote_addr; # no need for this
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Set files larger than 1M to stream rather than cache
            proxy_max_temp_file_size 1M;

            access_log /var/log/nginx/androidpolice-microcache.log custom;
        }
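
    One pattern that avoids duplicating everything by hand, sketched below: give those URIs their own location block and pull the shared proxy directives in from an include file (the file name here is made up). Directives such as proxy_pass are not inherited across location blocks, which is consistent with the 404s seen when nesting locations, so the shared part has to be re-included rather than inherited:

        # shared proxy settings moved out of "location /" into /etc/nginx/microcache-proxy.conf
        # (proxy_pass, proxy_cache, proxy_cache_key, timeouts, proxy_set_header lines, ...)

        location / {
            # ... the existing variable logic from above ...
            include /etc/nginx/microcache-proxy.conf;
        }

        location ~* wpsf-(img|js)\.php {
            include /etc/nginx/microcache-proxy.conf;
            # the one extra directive these scripts need
            proxy_ignore_headers Set-Cookie;
        }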

    Read the article

  • How to convert series of MP3 to a M4B in a batch

    - by Artem Tikhomirov
    Hello. I have a batch of MP3-based audiobooks. Some of them are divided into files according to the book's own structure: chapters and so on. Some of them were just divided into equal-length parts. So: I've bought an iPhone, and I want to convert them all to the M4B format. How could I convert them in a batch? I mean, how could I set up a process once for each book and then, after a couple of weeks, receive a fully converted library? The only capable program for such conversion I've found is Audiobook Builder for the Mac, but it is pretty slow and does not support batching at all. Solutions for any platform, please.
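
    One scriptable route, sketched for a single book directory and assuming a reasonably recent ffmpeg; the directory name is a placeholder, and these lines would sit inside whatever loop walks the library:

        cd book1
        ls *.mp3 | sed "s/.*/file '&'/" > list.txt                 # playlist in ffmpeg's concat format
        ffmpeg -f concat -safe 0 -i list.txt -vn -c:a aac -b:a 64k book1.m4a
        mv book1.m4a book1.m4b                                     # .m4b is MPEG-4 audio with the audiobook extension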

    Read the article

  • Windows XP seemingly out of resources but plenty of free RAM and swap available

    - by Artem Russakovskii
    This one has been bothering me for years and so far I haven't found an adequate solution. The problem occurs on pretty much every XP install I've done. After opening a variety of programs, or after the system has been running existing programs for a while, Windows seemingly runs out of resources without telling me. There's ALWAYS free RAM. For example, it just happened to me and I had over a gig of free RAM. There are no viruses, spyware, or other nonsense - it is a Windows resource problem, but the question is which resource is it running out of, how does one pinpoint it, and how does one prevent it? Sometimes this happens after running specific programs - for example, today it happened when I started Photoshop CS4 and Flash CS4 at the same time. I also noticed that restarting The Bat (email client by Ritlabs) seems to get rid of this problem for a while, but again, this happens on machines that don't even have The Bat installed. So what exactly happens? The symptoms are:

    - Pressing Alt-Tab doesn't bring up the list anymore - it just jumps to the next window instantly, very similar to the way Alt-Esc works; in this case it's due to not having enough resources to bring up the Alt-Tab menu.
    - Random programs crash, citing random errors: out of memory, system resources, inability to make system calls, etc.
    - Random programs start missing random parts - for example, Firefox top menus might disappear, pull up partial selections, or stop pulling up altogether; IE might lose a few of its toolbars; some programs might fail to redraw or just plain go gray where the UI used to be.
    - Windows itself never complains about running out of RAM, virtual memory, or anything at all, yet it's running out of something.

    The only clue I was able to find, and the fix I applied today, was this Desktop Heap Limitation. I haven't confirmed the fix working, as not enough time has passed. In the meantime, what are everyone's thoughts?
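
    For reference, a sketch of the registry value the desktop heap article refers to; the numbers shown are the XP defaults, the second SharedSection value is the interactive desktop heap that GUI-heavy sessions exhaust, and raising it is the usual tweak (back the key up before touching it):

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems
            Windows (REG_EXPAND_SZ) = "... SharedSection=1024,3072,512 ..."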

    Read the article

  • Make a snapshot of a live mySQL database with myISAM & innoDB tables without locking

    - by Artem
    We have a live database in production where we are running out of space on the server, so I would like to transfer to a new server without any downtime (or as little downtime as possible). In general, I would also like to have a hot failover copy of the database available. I would like to use replication to get all of the data copied to the new machine, and then at some point flip a switch and have that new machine become the master (a normal failover scenario). My problem is that I am not sure how to initialize replication without locking the db to make the initial snapshot I will use. Is there any way to do this? I know I could do it using --single-transaction if I were using InnoDB, but very unfortunately we have some MyISAM tables in there (in fact the largest, 150GB, table is MyISAM and I want to switch it to InnoDB, but I can't do that until I have more space and a hot copy to switch to). Any ideas? Is there some way to make such a snapshot? Or, alternatively, is there a way to get replication to "catch up" without a snapshot for initialization?
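
    For the InnoDB portion there is a fairly standard recipe, sketched below; the MyISAM tables are the catch - they will only be consistent in such a dump if writes are paused or a brief FLUSH TABLES WITH READ LOCK (or a filesystem/LVM snapshot) covers them:

        # on the current master; records the binlog coordinates in the dump as a comment
        mysqldump --all-databases --single-transaction --master-data=2 | gzip > snapshot.sql.gz

        # on the new machine, after loading the dump, point replication at those coordinates:
        # CHANGE MASTER TO MASTER_LOG_FILE='...', MASTER_LOG_POS=...;  then START SLAVE;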

    Read the article

  • Is it possible to use the MMM tool without virtual IP capability?

    - by Artem
    We are on a host (Serverbeach) that does not support virtual/floating IPs until you reserve at least a half-rack, which is just a little more than we are willing to spend per month right now. We do have 2 machines in one of their datacenters, and I am using these 2 machines right now in a master-master, active-passive setup, just like the one managed by MMM -- http://mysql-mmm.org/. I have just set them up and I am managing them manually, with a manual switch on the web frontend to tell it to connect to the correct (active) master. Is there any way to use MMM without virtual IPs? Any other comments on this setup?

    Read the article

  • Why can't I create soft link on vboxsf file system?

    - by Artem Ice
    ln -s keeps telling me that the file system is read-only; however, it is not:

        ice@distantstar:~/virt ? touch file
        ice@distantstar:~/virt ? rm file
        ice@distantstar:~/virt ? ln -s ~/.bashrc ~/virt/.bashrc
        ln: failed to create symbolic link `/home/ice/virt/.bashrc': Read-only file system
        ice@distantstar:~/virt ? mount | grep virt
        none on /home/ice/virt type vboxsf (rw,nodev,relatime)
        ice@distantstar:~/virt ? cat /etc/fstab | grep virt
        VIRT /home/ice/virt vboxsf rw 0 0
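
    One frequently cited cause: VirtualBox disables symlink creation inside shared folders by default, and it has to be switched on per share from the host while the VM is powered off. A sketch, where "MyVM" is a placeholder for the VM's name and VIRT is the share name taken from the fstab line above:

        VBoxManage setextradata "MyVM" VBoxInternal2/SharedFoldersEnableSymlinksCreate/VIRT 1
        VBoxManage getextradata "MyVM" enumerate    # verify the key was stored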

    Read the article

  • Why powershell runs executables in separate window?

    - by Artem Tikhomirov
    On one of my servers (2008 R2), PowerShell refuses to run executables without an extension, so typing cmd (or & cmd) at the command prompt results in the following error message: The term 'cmd' is not recognized as the name of a cmdlet Invoking the executable in one of the following ways pops up a separate window (which executes asynchronously with respect to the parent). I've tried that in the x86 version of PowerShell and in the x64 one. I've tried the -NoProfile argument. PATH seems to be OK; it includes System32 and all. The only way I've managed to execute cmd inline from PowerShell is opening a standard cmd.exe shell, executing powershell.exe from it, and then executing cmd /c echo test from that. Inception, huh? What should I try next?
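
    A few read-only checks that might narrow down whether this is a PATH/PATHEXT problem or something stranger (nothing below changes any state):

        $env:PATHEXT                                   # should list .COM;.EXE;.BAT;.CMD;...
        $env:Path -split ';'                           # is System32 on the PATH this session actually sees?
        Get-Command cmd -ErrorAction SilentlyContinue  # what, if anything, bare 'cmd' resolves to
        (Get-Command cmd.exe).Definition               # the full path PowerShell would launch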

    Read the article

  • Routing protocols, distance vector vs link state

    - by Artem Barger
    I'm trying to figure out the differences (pros/cons) between the two routing protocol approaches, and I would be grateful for any help, advice and explanation. As far as I can tell, distance vector seems more static and more locally based, since each router doesn't know the full network state, whereas link state is more aware of the current state of the network, so it seems more natural to prefer it over distance vector - but I have a feeling I'm missing something. I would also be glad to hear about more aspects and different issues I have to consider while choosing one of them.

    Read the article

  • Change user login in Windows 7 (after a misprint in username)

    - by Artem Russakovskii
    I have an install of Windows 7 that I've already put a few days into. Today I realized I've made a mistake in the username and it's driving me nuts (my personal OCD). While changing the physical folder name is perhaps possible, though quite involved, I do not want to open that can of worms. What I want to do is simply change the username I give when the login prompt shows up. I thought it would be possible by just renaming the user account in User Accounts, but that didn't work. Is it possible to do, then? Or is the only way to create another user and spend hours migrating everything I'd already customized to that user?
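
    Renaming in the User Accounts control panel only changes the display name; the actual logon name lives on the account object and can be renamed separately (the profile folder keeps its old name either way). Two stock places to do it, assuming a non-Home edition for the first:

        lusrmgr.msc    :: Local Users and Groups -> Users -> right-click the account -> Rename
        netplwiz       :: (a.k.a. control userpasswords2) select the account -> Properties -> User name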

    Read the article

  • Getting ANT to scp only new/changed files

    - by Artem
    I would like to optimize my scp deployment, which currently copies all files, to copy only the files that have changed since the last build. I believe it should be possible with the current setup somehow, but I don't know how to do this. I have the following:

        Project/src/blah/blah/   <---- files I am editing (mostly PHP in this case, some static assets)
        Project/build            <---- I have a local build step that I use to copy the files to here

    I have an scp task right now that copies all of Project/build out to a remote server when I need it. Is it possible to somehow take advantage of this extra "build" directory to accomplish what I want - meaning I only want to upload the "diff" between src/** and build/**? Is it possible to somehow retrieve this as a fileset in ANT and then scp that? I do realize that this means that if I somehow delete/mess around with files on the server in between, the ANT script would not notice, but for me this is okay.
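
    Ant can get most of the way there on its own: the optional scp task takes a nested fileset, and the <modified> selector keeps a local digest cache and only selects files whose content has changed since the previous run. A sketch, with host, user and key as placeholders:

        <scp todir="deploy@example.com:/var/www/app"
             keyfile="${user.home}/.ssh/id_rsa" passphrase="" trust="true">
            <fileset dir="Project/build">
                <modified/>   <!-- digest cache is stored in cache.properties by default -->
            </fileset>
        </scp>

    This keys off file content rather than the src/build comparison described above, but the end result is the same: only changed files leave the machine.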

    Read the article
