Search Results

Search found 5454 results on 219 pages for 'soa purge instances dehyd'.

Page 88/219 | < Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >

  • Nginx & Apache: Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks now and for some reason I cannot seem to get nginx's try_files to work with my WordPress permalinks. I am hoping someone will be able to tell me where I am going wrong, and also whether I have made any major errors in my configuration (I am an nginx newbie... but learning :) ). Here are my configuration files.

    nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            # Defines the cache log format, cache log location
            # and the main access log location.
            log_format cache '***$time_local '
                             '$upstream_cache_status '
                             'Cache-Control: $upstream_http_cache_control '
                             'Expires: $upstream_http_expires '
                             '$host '
                             '"$request" ($status) '
                             '"$http_user_agent" ';
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    mydomain.com.conf:

        server {
            listen 123.456.78.901:80; # IP goes here.
            server_name www.mydomain.com mydomain.com;
            #root /var/www/mydomain.com/prod;
            index index.php;

            ## mydomain.com -> www.mydomain.com (301 - Permanent)
            if ($host !~* ^(www|dev)) {
                rewrite ^/(.*)$ $scheme://www.$host/$1 permanent;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # All media (including uploaded) is under wp-content/ so
            # instead of caching the response from apache, we're just
            # going to use nginx to serve directly from there.
            location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$
                root /var/www/mydomain.com/prod;
            }

            # Don't cache these pages.
            location ~* ^/(wp-admin|wp-login.php) {
                proxy_pass http://backend;
            }

            location / {
                if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
                    set $do_not_cache 1;
                }
                proxy_cache_key "$scheme://$host$request_uri $do_not_cache";
                proxy_cache main;
                proxy_pass http://backend;
                proxy_cache_valid 30m; # 200, 301 and 302 will be cached.
                # Fallback to stale cache on certain errors.
                # 503 is deliberately missing, if we're down for maintenance
                # we want the page to display.
                #try_files $uri $uri/ /index.php?q=$uri$args;
                #try_files $uri =404;
                proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404;
            }

            # Cache purge URL - works in tandem with WP plugin.
            # location ~ /purge(/.*) {
            #     proxy_cache_purge main "$scheme://$host$1";
            # }

            # No access to .htaccess files.
            location ~ /\.ht {
                deny all;
            }
        } # End server

    gzip.conf:

        # Gzip Configuration.
        gzip on;
        gzip_disable msie6;
        gzip_static on;
        gzip_comp_level 4;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    proxy.conf:

        # Set proxy headers for the passthrough
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header X-Cache-Status $upstream_cache_status;

    backend.conf:

        upstream backend {
            # Defines backends.
            # Extracting here makes it easier to load balance
            # in the future. Needs to be specific IP as Plesk
            # doesn't have Apache listening on localhost.
            ip_hash;
            server 127.0.0.1:8001; # IP goes here.
        }

    cache.conf:

        # Proxy cache and temp configuration.
        proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m;
        proxy_temp_path /var/www/nginx_temp;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_redirect off;

        # Cache different return codes for different lengths of time
        # We cached normal pages for 10 minutes
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;

    The two commented-out try_files directives in location / of the mydomain config file are the ones I tried. The error I found in the error log is below:

        ...rewrite or internal redirection cycle while internally redirecting to "/index.php"

    Thanks in advance
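
    A configuration pattern that often avoids exactly this "rewrite or internal redirection cycle" is to keep try_files out of the proxied location and let it fall back to a named location that does the proxying, rather than mixing try_files, proxy_pass and an if block in one place. A minimal sketch, assuming root is set in the server block so try_files has a filesystem path to test, and reusing the backend upstream and main cache zone from the configuration above (the @wordpress name is just an illustration):

        location / {
            # Serve an existing file or directory straight from disk,
            # otherwise hand the request to the proxied WordPress front controller.
            try_files $uri $uri/ @wordpress;
        }

        location @wordpress {
            proxy_cache main;
            proxy_cache_key "$scheme://$host$request_uri";
            proxy_cache_valid 200 301 302 30m;
            proxy_pass http://backend;
        }

    With the commented-out try_files lines above, the /index.php fallback re-enters location /, never matches a file on disk (root is commented out), and falls back to /index.php again, which appears to be what produces the redirection-cycle error.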

    Read the article

  • Gitosis installation of public key not working...

    - by user29600
    I've been following this tutorial to install and set up git on Ubuntu Server 10.04, using Windows 7 as a client. However, after finally figuring out how it works (I had executed gitosis-init a bunch of times on the wrong key), I copied the id_rsa.pub file over to the server's /tmp folder and ran it again. Unfortunately it still doesn't work: when I execute git clone [email protected]:gitosis-admin.git it asks for gitosis's password rather than the RSA passphrase. I'm assuming it's the same problem this guy is having here... However, after following his instructions (purge git-core and gitosis, and manually remove the /srv/gitosis folder) and going through the setup again (with the proper id_rsa.pub file this time), I'm still having the same issue. Does anyone know what I'm doing wrong? Is there any way to probe for more information that might help in solving this?
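
    For reference, the step that most often goes wrong here is feeding the public key to gitosis-init as the gitosis user. A hedged sketch of what that usually looks like on the server (the host name is a placeholder):

        # Run on the server with the Windows client's public key sitting in /tmp
        sudo -H -u gitosis gitosis-init < /tmp/id_rsa.pub

        # From the client, the admin repo should then clone without a password prompt
        git clone gitosis@your.server.example:gitosis-admin.git

    If the clone still asks for gitosis's password, it usually means the key in the gitosis user's authorized_keys does not match the private key the client is offering; running ssh -v against the server from the client shows which identity file is actually being tried.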

    Read the article

  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this. I've recently managed to launch my website on EC2, and as a next step I want to learn how to scale it. I have a general idea but wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server, which lets users record messages and plays them back. This is the architecture I'm planning to set up for initial scaling, deploying four small EC2 instances for the following purposes:

    Instance-1: runs the MySQL database.
    Instance-2: runs the Red5 server.
    Instance-3 & Instance-4: used to deploy the website, with Apache running on them. They will communicate with the MySQL server on Instance-1 and the Red5 server on Instance-2 using the internal IP address. As and when required, I will launch another instance of the same kind.
    EBS: I will have an EBS volume of, say, 50 GB where all the MySQL data will be stored. Red5 will also use this EBS volume to store the video messages.
    Load-Balancer: use the load balancer provided by Amazon to load balance Instance-3 and Instance-4.

    This is what I have in mind; I could be way off, so please bear with me. I have not taken into account scaling the MySQL server, as I currently have no idea how that will be done and whether or not it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks

    Read the article

  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the memory overcommitment possibilities of VMware ESXi. I've read this paper from VMware, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment. However, I am not sure. I am deploying a virtual test lab comprising 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines with Windows XP, Windows 7 and Ubuntu for workstation hosts, as well as CentOS and Windows 2008 Server instances for servers. The problem, however, is that the host machine only has 12 GB of RAM and I don't have an option to stuff in some more. I would like to know the best way to configure the hosts in order to achieve reasonable performance within these constraints. I have these two options:

    1. Allocate as little RAM as possible to each virtual machine.
    2. Allocate an extraordinary amount (such as 4 GB per instance) and let the balloon driver do the rest.

    Or something else? Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.

    Read the article

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach. Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host: $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/ This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
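
    One way to avoid copying the virtual disk for every instance is a copy-on-write overlay per run plus a host directory shared into the guest. A rough sketch with QEMU/KVM (image names, paths and the 9p mount tag are made up for illustration; newer qemu-img versions may also want -F qcow2):

        # Thin per-instance disk backed by a single read-only base image
        qemu-img create -f qcow2 -b ubuntu-base.qcow2 run01.qcow2

        # Boot headless and share a host directory with the guest over 9p/virtfs
        qemu-system-x86_64 -enable-kvm -m 2048 -nographic \
            -drive file=run01.qcow2,if=virtio \
            -virtfs local,path=/scratch/run01,mount_tag=hostshare,security_model=none

        # Inside the guest: mount -t 9p -o trans=virtio hostshare /mnt/host

    Each overlay stays tiny until the guest writes to it, so launching many simultaneous instances does not require duplicating the base image, and the shared directory covers the "parameters in, results out" part without any guest networking.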

    Read the article

  • log execution of certain commands on linux

    - by jlsksr
    I have to maintain a system (Debian) on which several users are allowed to install programs, so I would like to log, for example, when anyone executes "apt-get install" or "apt-get purge"; that way I can keep track of manually installed packages. I'm looking for a general way to achieve this; it's not just APT, but several programs/scripts etc. Any ideas?

    Edit: a Google search with a few different keywords brought up these:
    http://serverfault.com/questions/201221/how-to-log-every-linux-command-to-a-logserver
    http://stackoverflow.com/questions/15698590/how-to-capture-all-the-commands-typed-in-unix-linux-by-any-user
    http://sourceforge.net/projects/rootsh/
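
    If auditd is an acceptable dependency, one lightweight approach is to record every execve system call and filter afterwards; a sketch, with an arbitrary key name:

        # Log every command executed (64-bit syscalls; add a second rule with arch=b32 for 32-bit binaries)
        sudo auditctl -a always,exit -F arch=b64 -S execve -k exec_log

        # Later, pull out the apt-get invocations
        sudo ausearch -k exec_log | grep -B2 -A2 'apt-get'

    For the APT case specifically, /var/log/apt/history.log and /var/log/dpkg.log already record installs and removals on Debian, so they may cover part of the requirement without any extra tooling.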

    Read the article

  • MariaDB, Galera, xtrabackup - do I need the binary log?

    - by bernhardrusch
    We are using a MariaDB Galera cluster with 3 nodes. For the state transfer we are using xtrabackup. We have had some problems with the binary logs: they got too big and crashed the server. We can remove them manually with the PURGE BINARY LOGS command; another way would be to set expire_logs_days so that they expire on their own. I know that we could use xtrabackup to back up the DB and use the binlog to get to some point in time. But do we really need it for Galera to work?
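
    As far as I understand it, Galera ships writesets over its own replication channel, so the binary log files themselves are only needed if a node also feeds a conventional asynchronous slave or you want binlog-based point-in-time recovery on top of the xtrabackup backups (binlog_format=ROW is still required either way). If you do keep them, capping their lifetime is a one-liner; the retention value below is just an example:

        # Runtime change plus an immediate prune
        mysql -e "SET GLOBAL expire_logs_days = 3;"
        mysql -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 3 DAY);"

        # Persist it in my.cnf under [mysqld]:
        #   expire_logs_days = 3
        #   max_binlog_size  = 100M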

    Read the article

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers but am yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances though, it seems as though each instance should be completely decoupled from all other instances. i.e. If content is uploaded to the web server's local filesystem from a custom CMS backend then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me...should I be skeptical about this? I'm a little worried because my (non-technical) boss who ultimately makes the decisions is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up to me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

    Read the article

  • Can expire_logs_days be less than 1 day in MySQL?

    - by Scott
    So... yesterday I received an "after the fact" email about a campaign that has started for one of the services that I run. Now the DB server is getting hammered, hard, to the tune of about 300 MB/min in binary logging for replication. As you can imagine, this is chewing up space at a fairly tremendous rate. My normal 7-day expiry of binary logs just isn't cutting it. I've resorted to truncating the logs to just the last 4 hours with (I'm verifying that replication is up to date with mk-heartbeat): PURGE MASTER LOGS BEFORE DATE_SUB( NOW(), INTERVAL 4 HOUR); I'm just running that from cron every few hours to weather the storm, but it made me question the minimum value for expire_logs_days. I haven't come across a value that is less than 1, but that doesn't mean it isn't possible. http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_expire_logs_days gives the type as numeric, but doesn't indicate whether it expects integers.
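
    For what it's worth, expire_logs_days takes a whole number of days in MySQL 5.0/5.1 as far as I can tell, so sub-day retention generally has to be done the way described above. A cron sketch of that stopgap (schedule and interval are examples, and it assumes credentials in /root/.my.cnf):

        # /etc/cron.d/trim-binlogs - keep roughly the last 4 hours of binary logs
        0 */2 * * * root mysql -e "PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL 4 HOUR);"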

    Read the article

  • Mutt: apply command to all tagged messages

    - by mrucci
    From the mutt manual: Once you have tagged the desired messages, you can use the tag-prefix operator, which is the ; (semicolon) key by default. When the tag-prefix operator is used, the next operation will be applied to all tagged messages if that operation can be used in that manner. But it seems that I can only execute commands that are already bound to a specific keyboard shortcut. For example I can use ;d to delete all selected messages. What if I want to apply an "unbound" command (such as purge-message)? I have also tried using something based on :exec tag-prefix or :push tag-prefix without success.
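
    One approach that is often suggested is to chain the two functions in a macro, since macros can reference functions that have no default key binding. A sketch for .muttrc (the key choice is arbitrary, and I have not verified it against every mutt version):

        # Apply purge-message to all tagged messages with Esc-d
        macro index \ed "<tag-prefix><purge-message>" "purge tagged messages"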

    Read the article

  • Searching for just files

    - by M Schenkel
    I have a couple of questions about searching for files on Windows 7. I find the XP method much easier than this new Windows 7 search. Note: I am only concerned with finding files whose names match a search term, not ALL files containing the search term. Is there a way to search just for files? When I use the search, it seems to be searching "within" files and returning instances where the name of the file is used. Example: I have a whole web directory and want to find the JavaScript files. But if I enter "myjavascript.js" in the search, it also returns all the HTML files which reference the JavaScript file. This is both slow and makes it difficult to actually find the file itself. Is there a way to search for an exact match? The search seems to implicitly use wildcards. For instance, say I have a bunch of files in a folder: file1.txt, file11.txt, file12.txt, file13.txt. If I enter "file1.txt" in the searcher, it returns results as if I were using a wildcard like file1*.txt. I miss XP!!!!

    Read the article

  • How does everyone set up AWS for PHP with a git workflow while worrying about distributing EC2?

    - by Parris
    Hello, I have been looking for something like Heroku but for PHP, and after much frustration (and almost finding what I need, but not quite) we decided to just go with AWS without any other abstraction. We are using PHP 5.3 (and CakePHP 1.3), and are currently using git. Ubuntu seems like the easiest way to get both of those on there and we will most likely use that. We aren't really going to worry about outgoing email; we are using SMTP through Gmail, but will most likely switch to some other service eventually. I have 3 questions: 1) I have been looking at Zend Server, and I am not quite sure how it is more beneficial than XAMPP. Perhaps it is not? 2) I suppose that to make the application scale we would need multiple instances of some EC2 AMI, then just duplicate it and such. The question then becomes: how do we make sure all EC2 instances are up to date? 3) I understand the concept of load balancing to some degree. I understand that within one region you select a bunch of servers and have it load balance across them. The question then becomes: what about worldwide? How do I make it so that traffic is directed to the correct EC2 server? I have heard of Route 53, and tried signing up for that, but nothing appears in my control panel. Also, perhaps it is just a DNS thing with my domain registrar? AHHH... some tutorial would be helpful!
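
    For question 2, a common low-tech answer before investing in baked AMIs or configuration management is a fan-out deploy script that brings every web instance up to the same git revision; a rough sketch (host names, user and paths are placeholders):

        #!/bin/bash
        # deploy.sh - update the CakePHP checkout on every web instance
        HOSTS="ec2-host-1.example.com ec2-host-2.example.com"
        for h in $HOSTS; do
            ssh ubuntu@"$h" "cd /var/www/app && git pull --ff-only origin master && sudo service apache2 reload"
        done

    The same idea later grows into tools like Capistrano, or into rebuilding an AMI per release, but the principle of one source of truth pushed to every instance stays the same.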

    Read the article

  • How to meet Windows 8 upgrade's 20 GB requirement on a 40 GB SSD with a 22 GB Windows 7 install?

    - by deryus
    A PC I have has Windows 7 installed on a 40 GB SSD, and I bought a Windows 8 upgrade for it. The current Windows folder on it however is 22 GB, that's after removing hibernation, turning off the pagefile and removing all extra programs/features. So even if I purge every other file and folder, the Windows folder itself takes more than half the disk. The PC also has a 1 TB HDD, but the upgrade installer didn't give me any options about choosing another drive. So, is my only option to reinstall Windows 7 on a larger drive, then proceed with the Windows 8 upgrade? Or is there anything I can remove from the Windows folder that while might be dangerous for long term usage, is fine for the few minutes I need to get Windows 8 installing?

    Read the article

  • What is excessive swapping?

    - by amateur barista
    This post led me to ask that question. Cache contention On a large site, if you are using MyISAM, contention occurs in the database tables when the cache is forced to clear after a node or a comment is added. With tens of thousands of filter text snippets needing to be deleted, the table will be locked for a long period, and any accesses to it will be queued pending the purge of the data in it. The same is true for the page cache as well. This often causes a "site hang" for a minute or two. During that time new requests keep piling up, and if you do not have the MaxClients parameter in Apache setup correctly, the system can go into thrashing because of excessive swapping.

    Read the article

  • Chrome to download Gmail attachments to a different place

    - by Joe
    I'm using Chrome and Gmail. When I download stuff from the web, Chrome automatically puts it in Downloads, which is fine. However, I'm aware that everything I download from Gmail is in the cloud anyway, so I don't really need it taking up space on my hard drive as well. Is there a way to have Gmail attachments automatically download to a different folder, say, Downloads\Gmail? That way I could regularly purge the ones that won't be useful for a while. (Yes, I'm aware that Word documents and the like often get changed after they arrive; I'm more focused on images, PDFs, and zip files.)

    Read the article

  • (squid): failed to find or read error text file.

    - by adam
    There is something in our ERR_NO_RELAY that is causing this error to be logged and the squid process to fail on startup. I can't show you the exact content of the file, but I can tell you it has several lines of JavaScript. When we remove the JavaScript, the problem goes away. This same file does not cause any issues on the other 3 instances of squid that we have running internally. All instances of squid came from the same VM image, so they should be the same. We are unable to reproduce this issue except on the one box, and we are unable to debug more on this box right now because it is running in production. I know these files are interpreted so squid can replace certain values available in the session, so it may be that a syntax error caused this issue. That does not explain why we cannot reproduce it on the other (virtually identical) images. One difference is that the instance of squid that has the issue was under load when the issue occurred. Any suggestions/insight would be appreciated. Thanks!

    Read the article

  • Resolving domain names differently for different services

    - by mlaug
    Some time ago we had an issue with our network infrastructure and PHP with curl. Our network infrastructure is fairly simple: LoadBalancer/Firewall = 5 servers. The domain name of our website points to the IP of the load balancer, of course. But calling curl from one of the servers resulted in a timeout: it appears that a server could not call the very domain it is serving. So we had to point the domain at the server itself via /etc/hosts. But now we have implemented a Varnish in front of the load balancer, which we want to purge automatically once a page changes. So now we need to call www.example.com/url_to_purge. Sadly, this call would be resolved to the server itself instead of the Varnish, because of the /etc/hosts entries. So now I am wondering whether you can resolve domain names differently for different services :)
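
    One way around the /etc/hosts override is to not resolve the public name at all for the purge call: talk to the Varnish box by IP and pass the site name in the Host header so the cache key still matches. A sketch (the IP is a placeholder, and it assumes a vcl_recv rule that accepts PURGE from trusted addresses):

        curl -X PURGE -H "Host: www.example.com" http://10.0.0.5/url_to_purge

    That keeps the hosts-file trick for everything else on the servers while letting the purge request reach Varnish directly.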

    Read the article

  • Deploying Memcached as 32bit or 64bit?

    - by rlotun
    I'm curious about how people deploy memcached on 64 bit machines. Do you compile a 64bit (standard) memcached binary and run that, or do people compile it in 32bit mode and run N instances (where N = machine_RAM / 4GB)? Consider a recommended deployment of Redis (from the Redis FAQ): Redis uses a lot more memory when compiled for 64 bit target, especially if the dataset is composed of many small keys and values. Such a database will, for instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big difference. You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system without problems. For OS X just use make 32bit. For Linux instead, make sure you have libc6-dev-i386 installed, then use make 32bit if you are using the latest Git version. Instead for Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with "-m32". If your application is already able to perform application-level sharding, it is very advisable to run N instances of Redis 32bit against a big 64 bit Redis box (with more than 4GB of RAM) instead than a single 64 bit instance, as this is much more memory efficient. Would not the same recommendation also apply to a memcached cluster?

    Read the article

  • Postgresql fails to start on Ubuntu 10.04.4 LTS

    - by cancerballs
    I installed postgresql 9.2 from add-apt-repository ppa:pitti/postgresql using apt-get install postgresql-9.2. At the end of the install, and every time I try to launch postgresql with /etc/init.d/postgresql start or service postgresql start, I get this error:

        Error: could not exec /usr/lib/postgresql/9.2/bin/pg_ctl /usr/lib/postgresql/9.2/bin/pg_ctl start -D /var/lib/postgresql/9.2/main -l /var/log/postgresql/postgresql-9.2-main.log -s -o -c config_file="/etc/postgresql/9.2/main/postgresql.conf" : [fail]
        invoke-rc.d: initscript postgresql, action "start" failed.
        dpkg: error processing postgresql-9.2 (--configure): subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing: postgresql-9.2
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have tried everything found here: How to thoroughly purge and reinstall postgresql on ubuntu and here: Eliminating non working postgresql installations on ubuntu 10-04 and starting af. I have also done dpkg -P --force-remove-reinstreq postgresql-client-9.2 in my attempt to remove everything postgres-related from my server. After removing postgresql I have used dpkg --get-selections | grep postg to be sure there is nothing left and I can do a clean install. I have also made sure that the files and folders mentioned in the error message have the right permissions. The /var/log/postgresql/postgresql-9.2-main.log file is empty. I have tried installing every postgresql version from 8.3 to 9.2 and I get the same error every time. I once managed to compile postgresql from the source provided on their website, but then I encountered weird errors with psycopg2, so I figured I'd install postgresql this way and avoid those errors. Also, when I type apt-get install postgresql it tries to install the 8.3 version by default, even though I can find the package by typing apt-get install postgresql-9.2.
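
    When the init script only reports that pg_ctl failed, the Debian/Ubuntu wrapper tools usually surface the real error; a few diagnostics worth running before the next reinstall attempt:

        # Show the clusters the packaging knows about and their state
        pg_lsclusters

        # Start the cluster directly; errors are printed instead of being swallowed
        sudo pg_ctlcluster 9.2 main start

        # Check the directories the postgres user must own and be able to write
        ls -ld /var/lib/postgresql/9.2/main /var/log/postgresql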

    Read the article

  • Messed up installation of mysql-server - cannot complete installation or deinstallation

    - by Christian Engel
    apt-get got stuck while installing mysql-server. I don't know why, but it just stopped working and never continued, and I had to reboot the machine in the middle of the setup process. Now, if I try to install or purge the mysql-server package, apt-get tries to configure mysql-server first (after telling me it's not installed) and cancels with an error message: Sub-process /usr/bin/dpkg returned an error code (1). apt-get also tells me that two packages have not been successfully installed or removed. This is the complete console output:

        christian@devbox:~$ sudo apt-get install mysql-server
        [sudo] password for christian:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        mysql-server is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
        2 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up mysql-server-5.5 (5.5.32-0ubuntu7) ...
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
        subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
        mysql-server depends on mysql-server-5.5; however:
        Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
        dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
        mysql-server-5.5
        mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        christian@devbox:~$
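
    The apt output only says that the mysql init job failed to start, so the usual next step is to let dpkg finish what it can and then read mysqld's own log; a sketch (the last step wipes the data directory, so only use it if that data is disposable):

        # Let dpkg retry any half-configured packages
        sudo dpkg --configure -a

        # See why mysqld refuses to start (bad datadir permissions, stale files, port already in use...)
        sudo tail -n 50 /var/log/mysql/error.log
        sudo netstat -tlnp | grep 3306

        # If the data directory is disposable, purge and reinstall for a clean slate
        sudo apt-get purge mysql-server mysql-server-5.5
        sudo rm -rf /var/lib/mysql /etc/mysql
        sudo apt-get install mysql-server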

    Read the article

  • email-spec destroys my rake cucumber:all

    - by Leonardo Dario Perna
    This works fine:

        $ rake cucumber:all

    Then:

        $ script/plugin install git://github.com/bmabey/email-spec.git
        remote: Counting objects: 162, done.
        remote: Compressing objects: 100% (130/130), done.
        remote: Total 162 (delta 18), reused 79 (delta 13)
        Receiving objects: 100% (162/162), 127.65 KiB | 15 KiB/s, done.
        Resolving deltas: 100% (18/18), done.
        From git://github.com/bmabey/email-spec
        * branch HEAD - FETCH_HEAD

    And:

        $ script/generate email_spec
        exists features/step_definitions
        create features/step_definitions/email_steps.rb

    And I add require 'email_spec/cucumber' in /feature/support/env.rb so it looks something like:

        require File.expand_path(File.dirname(__FILE__) + '/../../config/environment')
        require 'cucumber/rails/world'
        require 'cucumber/formatter/unicode' # Comment out this line if you don't want Cucumber Unicode support
        require 'email_spec/cucumber'

    And now rake cucumber:all gives me this error:

        $ rake cucumber:all --trace
        (in /Users/leonardodarioperna/Projects/frestyl/frestyl)
        ** Invoke cucumber:all (first_time)
        ** Invoke cucumber:ok (first_time)
        ** Invoke db:test:prepare (first_time)
        ** Invoke db:abort_if_pending_migrations (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:abort_if_pending_migrations
        ** Execute db:test:prepare
        ** Invoke db:test:load (first_time)
        ** Invoke db:test:purge (first_time)
        ** Invoke environment
        ** Execute db:test:purge
        ** Execute db:test:load
        ** Invoke db:schema:load (first_time)
        ** Invoke environment
        ** Execute db:schema:load
        ** Execute cucumber:ok
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -I "/Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/lib:lib" "/Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/cucumber" --profile default
        cucumber.yml was not found. Please refer to cucumber's documentation on defining profiles in cucumber.yml. You must define a 'default' profile to use the cucumber command without any arguments.
        Type 'cucumber --help' for usage.
        rake aborted!
        Command failed with status (1): [/System/Library/Frameworks/Ruby.framework/...]
/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:995:in `sh' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1010:in `call' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1010:in `sh' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1094:in `sh' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1029:in `ruby' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1094:in `ruby' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/lib/cucumber/rake/task.rb:68:in `run' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/lib/cucumber/rake/task.rb:138:in `define_task' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 WHY? but the command: $ cucumber still works Any idea?
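
    The trace boils down to cucumber complaining that a cucumber.yml with a default profile is missing, rather than anything email-spec-specific. Re-running the cucumber generator (script/generate cucumber on a Rails 2 project) normally recreates it; alternatively, a minimal hand-written config/cucumber.yml along these lines is usually enough (the wip profile is only needed if the generated rake tasks reference it):

        default: --format pretty features
        wip: --tags @wip --wip features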

    Read the article

  • Downgrade from php5 5.3.10 to php5 5.3.2 in Ubuntu 12.04

    - by iori
    I wanted to install PHP 5.3.2, so I first deleted all the php5 packages:

        sudo apt-get purge php5 php5-cli php5-common php5-mysql

    and also deleted the deb files from /var/cache/apt/archives, so now there are no deb files on the system. Then I added this person's repository:

        sudo apt-add-repository ppa:sushkov/personal

    because it carries php 5.3.2, and then I updated and upgraded:

        sudo apt-get update && sudo apt-get upgrade

    Then I installed php5:

        sudo apt-get install php5 php5-cli php5-common php5-mysql

    Now when I check the PHP version it says php5.3.10, and when I run sudo apt-cache show php5 it says:

        Package: php5
        Version: 5.3.15-1~dotdeb.0
        Architecture: all
        Maintainer: Guillaume Plessis <[email protected]>
        Installed-Size: 0
        Depends: libapache2-mod-php5 (>= 5.3.15-1~dotdeb.0) | libapache2-mod-php5filter (>= 5.3.15-1~dotdeb.0) | php5-cgi (>= 5.3.15-1~dotdeb.0) | php5-fpm (>= 5.3.15-1~dotdeb.0), php5-common (>= 5.3.15-1~dotdeb.0)
        Filename: dists/squeeze/php5/binary-i386/php5_5.3.15-1~dotdeb.0_all.deb

    Now I don't know how to downgrade. Is there any way to change something in the repositories so that sudo apt-get install php5 installs php 5.3.2, which I want, instead of php 5.3.10? Thanks
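
    Assuming a repository that still carries 5.3.2 (Ubuntu 10.04 "lucid" shipped it) is added to sources.list, apt pinning is the usual way to force the older version. Release names and the priority below are illustrative; a priority above 1000 is what allows apt to downgrade an already-installed package:

        # /etc/apt/sources.list additions
        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted universe
        deb http://security.ubuntu.com/ubuntu/ lucid-security main restricted universe

        # /etc/apt/preferences.d/php-from-lucid
        Package: php5 php5-cli php5-common php5-mysql
        Pin: release n=lucid
        Pin-Priority: 1001

    After sudo apt-get update, apt-cache policy php5 should show the lucid version winning, and sudo apt-get install php5 should then pull 5.3.2.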

    Read the article

  • Stupid Geek Tricks: Compare Your Browser’s Memory Usage with Google Chrome

    - by The Geek
    Ever tried to figure out exactly how much memory Google Chrome or Internet Explorer is using? Since they each show up a bunch of times in Task Manager, it’s not so easy! Here’s the quick and easy way to compare them. Both Chrome and IE use multiple processes to isolate tabs from each other, to make sure that one tab doesn’t kill the whole browser. Firefox, on the other hand, just uses a single process for everything. Rather than pulling out a calculator and adding them all up, you can just open up Google Chrome, and type about:memory into the location bar to see a full list of each browser’s memory usage. On my test system with 6 GB of system RAM, I’m running the Development channel version of Chrome, and I’ve got about 40 different tabs open, which is why the memory usage is so high. Want to help cut down on memory usage and keep your Chrome browser running fast? Disable all unnecessary extensions, and then make sure you disable any plug-ins that you don’t need either.

    Read the article

  • So I ran sudo apt-get install kubuntu-full on my Ubuntu... and saw all the apps... now I want it off. Help?

    - by Alex Poulos
    I'm running 12.04. I installed Kubuntu to try it out and realized, with all the bloatware applications, that I didn't want it anymore. I was able to uninstall kubuntu-desktop, but there are still packages left over... How can I make sure I get rid of EVERYTHING Kubuntu installed, even the KDE leftovers? Here's some of what's left: when I ran sudo apt-get autoremove kde and then pressed "tab", it displayed this:

        kdeaccessibility kdepim-runtime kdeadmin kde-runtime kde-baseapps kde-runtime-data kde-baseapps-bin kdesdk-dolphin-plugins kde-baseapps-data kde-style-oxygen kde-config-cron kdesudo kde-config-gtk kdeutils kde-config-touchpad kde-wallpapers kdegames-card-data kde-wallpapers-default kdegames-card-data-extra kde-window-manager kde-icons-mono kde-window-manager-common kdelibs5-data kde-workspace kdelibs5-plugins kde-workspace-bin kdelibs-bin kde-workspace-data kdemultimedia-kio-plugins kde-workspace-data-extras kdenetwork kde-workspace-kgreet-plugins kdenetwork-filesharing kde-zeroconf kdepasswd kdf kdepim-kresources kdm kdepimlibs-kio-plugins kdoctools

    Those are all installed by Kubuntu... correct? I just want to go back to my Ubuntu 12.04 LTS with gnome2-classic and without all the Kubuntu extras. I started off by just removing unnecessary apps that came with kubuntu-full, then realized I didn't want the whole thing at all and uninstalled kubuntu-full, but it still says I have these as well:

        alex@griever:~$ sudo apt-get --purge remove kubuntu-
        kubuntu-debug-installer kubuntu-netbook-default-settings kubuntu-default-settings kubuntu-notification-helper kubuntu-firefox-installer kubuntu-web-shortcuts
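
    To sweep up everything the metapackage dragged in, one blunt but effective approach is to list every remaining KDE/Kubuntu-looking package and purge it in one go; review the list before piping it anywhere, since the prefixes below are only a heuristic:

        # Show what would be removed
        dpkg -l | awk '/^ii/ && ($2 ~ /^(kde|kubuntu|plasma)/) {print $2}'

        # Purge the lot, then clean up orphaned dependencies
        dpkg -l | awk '/^ii/ && ($2 ~ /^(kde|kubuntu|plasma)/) {print $2}' | xargs sudo apt-get -y purge
        sudo apt-get autoremove --purge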

    Read the article

  • Unity stuck in 2D mode, Nvidia Quadro graphics "unknown", Nvidia-Current active but not in use

    - by Jordan Lund
    I've seen this problem reported under several questions, but I haven't been able to resolve any of it, so I thought I'd bring it all in under one umbrella. I started a new job and was given a Dell Precision M6400 laptop with an Nvidia Quadro FX 2700M graphics card. It had a previous version of Ubuntu on it, but nobody had any passwords for it, so I wiped the drive and did a fresh install of 11.10 from scratch. I didn't do any updates during installation, preferring to do them after boot. Updates ran fine and the system works... except Unity is in 2D mode. System Settings - Additional Drivers shows that Nvidia-Current is active but not in use. System Settings - System Info shows Graphics Driver Unknown, Experience Standard. Nvidia X Server Settings is installed and working; re-writing the xorg.conf did nothing.

        /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: Quadro FX 2700M/PCI/SSE2
        OpenGL version string: 3.3.0 NVIDIA 285.05.09
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: yes

    One suggestion was to do a sudo apt-get --purge remove nvidia* and that resulted in a scrambled screen on boot and a non-bootable installation. Pressing the Delete key on boot allowed me to access the recovery console and do a sudo apt-get install nvidia-current, which brought me back to a working, bootable system. Another suggestion was to edit /etc/default/grub and change the line reading "GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"" to read "GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vmalloc=192MB"", thus allocating more video RAM. I did that, followed by a sudo update-grub and a re-boot. No change. Created a brand new standard user and logged on with that account; no change.
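
    Since unity_support_test already reports that 3D is supported, the 2D symptom is often the session type chosen at the login screen ("Ubuntu" vs "Ubuntu 2D") or a stale compiz profile rather than the driver itself. A few cheap checks (standard tools, nothing specific to this machine):

        # Confirm the kernel is actually using the nvidia module for the Quadro
        lspci -nnk | grep -A3 -i vga

        # Confirm the X server loaded the proprietary GLX module
        grep -i -e nvidia -e glx /var/log/Xorg.0.log | tail -n 20

        # See which session the desktop actually started
        echo $DESKTOP_SESSION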

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >