Search Results

Search found 13713 results on 549 pages for 'production environment'.


  • How do I set up a cloud server to share and sync files in an ESXi-hosted environment?

    - by Manoj Agarwal
    I want to set up a private cloud network for my company for syncing and sharing files. Instead of using existing services like Dropbox, Google Drive, Amazon, etc., I want to set up my own cloud infrastructure. The requirement is to easily share private data internally within the organization. I already have an ESXi-based cloud environment running several virtual machines. Is this feasible and achievable?

    Read the article

  • Is there an AUTOMATED way to pin items to the taskbar in a Windows 2008 TS environment?

    - by l0c0b0x
    We have a Windows 2008 (R2) Terminal Server environment and would like to find a way to automate the addition (pinning) of programs to the Windows taskbar for our users. My first thought was to do this from a Group Policy, but I haven't been able to find a setting for it. I've heard you can 'probably' do it via a script, but haven't seen any examples yet (I'm not a programmer, but have worked with batch scripts before). Please advise. Thanks!
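
    One scripted route, as a sketch rather than a turnkey solution: drive the same "Pin to Taskbar" shell verb the right-click menu uses through the Shell.Application COM object, run per user (e.g. from a logon script). The example below is Perl with Win32::OLE, so it assumes a Perl build such as ActivePerl on the session host; the identical COM calls can be driven from VBScript or PowerShell instead. Note that the verb caption is locale-dependent - on an English Windows 2008 R2 it is "Pin to Tas&kbar".

        use strict;
        use warnings;
        use Win32::OLE qw(in);

        # Pin an executable by invoking the same shell verb the context menu shows.
        sub pin_to_taskbar {
            my ($dir, $exe) = @_;
            my $shell = Win32::OLE->new('Shell.Application')
                or die "Shell.Application unavailable";
            my $item = $shell->Namespace($dir)->ParseName($exe);
            for my $verb (in $item->Verbs) {
                if ($verb->Name eq 'Pin to Tas&kbar') {   # '&' marks the menu accelerator
                    $verb->DoIt;
                    return 1;
                }
            }
            return 0;   # verb not found: already pinned, or a different locale
        }

        pin_to_taskbar('C:\\Windows', 'notepad.exe');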

    Read the article

  • How can I perform a syntax check on an .htaccess file in a shared hosting environment?

    - by Danny
    I have a build script (Perl) that modifies the .htaccess file when I deploy my applications. As a double check, I'd like to be able to perform some sort of syntax check on the generated .htaccess file. I am familiar with the idea of using apachectl -t; however, I am in a shared hosting environment, and because of file access restrictions I cannot read certain configuration files specified by the sysadmins, so apachectl simply will not work in this case. Ideas or suggestions welcome.
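
    Since apachectl is off the table, one pragmatic fallback is to treat Apache itself as the syntax checker: a malformed .htaccess makes Apache answer 500 for anything under that directory, so the build script can deploy the new file, request a URL it governs, and roll back on a 500. A minimal sketch in the build script's own language, with placeholder path and URL, assuming LWP::UserAgent is available:

        use strict;
        use warnings;
        use File::Copy qw(copy);
        use LWP::UserAgent;

        my $htaccess  = '/home/you/public_html/.htaccess';   # placeholder path
        my $check_url = 'http://www.example.com/';           # any URL the file governs

        sub htaccess_ok {
            my ($candidate) = @_;
            copy($htaccess, "$htaccess.bak") or die "backup failed: $!";
            copy($candidate, $htaccess)      or die "deploy failed: $!";

            my $res = LWP::UserAgent->new(timeout => 10)->get($check_url);
            if ($res->code == 500) {                  # Apache's usual answer to a bad .htaccess
                copy("$htaccess.bak", $htaccess);     # roll back
                return 0;
            }
            return 1;                                 # any other status: the file parsed
        }

    The window where a bad file is live is brief but real; pointing the check at a staging copy of the directory avoids it.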

    Read the article

  • I have an nginx server configured to work with node.js, but a 1.03 MB JavaScript file often fails to load in various browsers on various PCs

    - by Totty
    I'm using this on a local LAN, so it should be quite fast. nginx uses the node.js server to serve static files, so downloads must pass through node.js; that is not a problem when I bypass nginx. In Chrome with the debugger on I can see that the status is 206 Partial Content and only 31 KB of the 1.03 MB has been downloaded. After 1.1 min the request turns red and the status is "failed". Waiting time: 6 ms; receiving: 1.1 min.

    The headers in Google Chrome:

        Request URL: http://192.168.1.16/production/assembly/script/production.js
        Request Method: GET
        Status Code: 206 Partial Content

        Request headers:
        Accept: */*
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4
        Connection: keep-alive
        Cookie: connect.sid=s%3Abls2qobcCaJ%2FyBNZwedtDR9N.0vD4Fi03H1bEdCszGsxIjjK0lZIjJhLnToWKFVxZOiE
        Host: 192.168.1.16
        If-Range: "1081715-1350053827000"
        Range: bytes=16090-16090
        Referer: http://192.168.1.16/production/assembly/
        User-Agent: Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4

        Response headers:
        Accept-Ranges: bytes
        Cache-Control: public, max-age=0
        Connection: keep-alive
        Content-Length: 1
        Content-Range: bytes 16090-16090/1081715
        Content-Type: application/javascript
        Date: Mon, 15 Oct 2012 09:18:50 GMT
        ETag: "1081715-1350053827000"
        Last-Modified: Fri, 12 Oct 2012 14:57:07 GMT
        Server: nginx/1.1.19
        X-Powered-By: Express

    My nginx configuration. File 1:

        user totty;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_access.txt;
            error_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            autoindex on;
            include /home/totty/web/production01_server/_deployment/nginxConfigs/server/*;
        }

    File included by the previous file:

        server {
            # Custom location for entry. Using only "/" instead of
            # "/production/assembly/" would allow you to go to "thatip/";
            # this way we limit it to "thatip/production/assembly/".
            location /production/assembly/ {
                # IP and port used by node.js
                proxy_pass http://127.0.0.1:3000/;
            }

            location /production/assembly.mongo/ {
                proxy_pass http://127.0.0.1:9000/;
                proxy_redirect off;
            }

            location /production/assembly.logs/ {
                autoindex on;
                alias /home/totty/web/production01_server/node_modules/production/_logs/;
            }
        }

    Read the article

  • How do I use a USB key in a XenServer 5.6 environment?

    - by Az
    I am looking for a way to use a USB key on a guest OS running in a XenServer 5.6 environment. The catch is that the guest OS (Win2003 Server) needs to detect it as an actual USB key; attaching it as a storage drive doesn't work (it is a key with special attributes that serves as a licensing mechanism). Just wondering if anyone has had a similar need and found a good solution? Thanks, Nate

    Read the article

  • Complete list of tools and technologies that make up a solid ASP.NET MVC 2 development environment for beginners

    - by Dr Dork
    This question is related to another wiki I found on SO, but I'd like to develop a more comprehensive example of an automated ASP.NET MVC 2 development environment that beginners can use to develop and deploy a wide range of small-scale websites. As far as characteristics of the dev environment go, I'd like to focus on beginner-friendly over powerful, since the other wiki focuses more on advanced, powerful setups. This information is targeted at beginners (who already know C# and understand web dev concepts) that have selected:
    - ASP.NET MVC 2 as their dev framework
    - Visual Studio 2010 Pro (or 2008 Pro SP1) as their IDE
    - Windows 7 as their OS
    and are looking for a quick, easy-to-set-up environment that covers managing, building, testing, tracking, and deploying their website with as much automation as possible - a system that can be used for becoming familiar with the whole process, as well as a launching point for exploring other, more custom and powerful systems. Since we've already selected the compiler, framework, and OS, I'd like to develop ideas for:
    - Code editor (unless you feel VS will suffice for all areas of code)
    - Database and related tools
    - Unit testing (VS?)
    - Continuous integration build system (VS?)
    - Project planning
    - Issue tracking
    - Deployment (VS?)
    - Source management (VS?)
    - ASP, C#, VS, and related blogs that beginners can follow
    - Any other categories I'm probably missing
    Since we're already using Visual Studio, I'd like to focus on the out-of-the-box solutions and features built into Visual Studio, unless you feel there are better options that work well with VS and are easier to use than its built-in features. Thanks so much in advance for your wisdom!

    Read the article

  • What is the best way to save the environment from before an alarm handler when the alarm goes off?

    - by EpsilonVector
    I'm trying to implement user threads on a 2.4 Linux kernel (homework), and the trick for context switching seems to be using an alarm that goes off every x milliseconds and sends us to an alarm handler, from which we can longjmp to the next thread. What I'm having difficulty with is figuring out how to save the environment to return to later. Basically I have an array of jmp_bufs, and every time a "context switch" via the alarm happens, I want to save the previous context to the appropriate entry of the array and longjmp to the next one. However, the fact that I need to do this from the signal handler means that simply calling setjmp in the handler won't give me exactly the kind of environment I want (as far as the stack and program counter are concerned), because the stack contains the handler's frame and the program counter is inside the handler. I suppose I could inspect the stack and alter it to fit my needs, but that feels cumbersome. Another idea I had was to somehow pass the environment from before the jump into the handler as a parameter, but I can't figure out whether that is possible. So I guess my question is: how do I do this right?

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding, and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread it? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between the closure's creation and its execution, the closure will no longer be operating on the original version of the object? If there is indeed copying going on, are there memory-usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding some of these concepts leads to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing the C# compiler figures out that the variable is needed later and puts it on the heap instead. It seems that in most memory-managed languages everything is a reference, so Objective-C is in a somewhat unique situation in having to deal with copying what's on the stack.
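
    Whether the "snapshot" reading is accurate comes down to what the environment holds: copies of values, or the variables (bindings) themselves. Most managed languages capture bindings; Objective-C blocks copying plain stack variables behave like per-creation snapshots, while __block variables and heap references stay shared. The difference is easy to poke at in any language with closures - a small Perl demonstration of the two capture behaviors (not the Objective-C mechanics):

        use strict;
        use warnings;

        # foreach aliases a fresh lexical $i each iteration, so every closure
        # captures its own binding -- effectively a per-iteration snapshot.
        my @snapshot;
        foreach my $i (0 .. 2) {
            push @snapshot, sub { return $i };
        }
        print join(',', map { $_->() } @snapshot), "\n";   # 0,1,2

        # A C-style loop reuses one $j, so all three closures share a single
        # binding in their referencing environment and see its final value.
        my @shared;
        for (my $j = 0; $j < 3; $j++) {
            push @shared, sub { return $j };
        }
        print join(',', map { $_->() } @shared), "\n";     # 3,3,3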

    Read the article

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point: attempting to run the test auto-generated by Rails fails:

        require 'test_helper'

        class UserTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end
        end

    It results in the following error:

        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from user_test.rb:1:in `<main>'

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    The Action Pack gems appear to be properly installed and up to date:

        actionmailer (3.0.3, 2.3.5)
        actionpack (3.0.3, 2.3.5)
        activemodel (3.0.3)
        activerecord (3.0.3, 2.3.5)
        activeresource (3.0.3, 2.3.5)
        activesupport (3.0.3, 2.3.5)

    Ruby is at 1.9.2p0 and Rails is at 3.0.3. A sample listing of my test directory is as follows:

        /fixtures
        /functional
        /integration
        /performance
        /unit
        -- /helpers
        -- user_helper_test.rb
        -- user_test.rb
        test_helper.rb

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment, I have nothing out of the ordinary in my application or environment configuration files, and I haven't installed any unusual gems that would interfere with the test environment.

    Edit: Xavier Holt's suggestion of explicitly specifying the path to test_helper worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above):

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    But as you can see above, Action Pack is all installed and up to date.

    Read the article

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from those of you who are passionate about and well versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:
    - Run the test suite for ANY project I find on GitHub with a single command.
    - Run autotest for ANY GitHub project so I can fork and make TESTABLE contributions.
    - Build gems from the ground up with autotest and Shoulda.
    For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your test environment setup, as an SO reader and GitHub project committer, using autotest, so that whenever you git clone a gem you can run its tests and autotest-develop it if desired? What are the people writing the Paperclip and Authlogic tests doing - what is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • Production debugging: Is there a less intrusive way than WinDbg?

    - by Alex
    Hi, I was wondering if there is a less intrusive way to analyze a running managed process in production environments. Less intrusive meaning: no delay of execution when attaching the debugger, and no delay of execution when getting basic stats like running threads. In the Java world there is such a tool as part of the JDK. I was wondering if there are similar tools in the .NET world. Any ideas? Alex

    Read the article

  • Grunt: Bower for development and CDN for production - is it possible?

    - by EricC
    For development, I guess it is fine to use a plugin like https://github.com/stephenplusplus/grunt-wiredep. But for production I would like to use a CDN where one exists. Does a Grunt plugin exist that goes through the bower.json file and replaces each local reference with a CDN link from the most popular providers (and, if a component is present in more than one CDN, picks one based on a rank setting, or randomly, or some such)?
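
    Plugins in this vein did exist - grunt-google-cdn, for example, rewrites script references against Google's CDN based on bower.json - though I'm not aware of one that does the rank-based selection across multiple CDNs described here. The core rewrite step such a plugin performs is roughly the following, sketched in Perl with a couple of hypothetical CDN mappings; a real implementation would also need to resolve version ranges properly:

        use strict;
        use warnings;
        use JSON::PP qw(decode_json);

        # Example CDN URL templates only -- not a maintained registry.
        my %cdn = (
            jquery  => 'https://ajax.googleapis.com/ajax/libs/jquery/%s/jquery.min.js',
            angular => 'https://ajax.googleapis.com/ajax/libs/angularjs/%s/angular.min.js',
        );

        sub cdnify {
            my ($html, $bower_json) = @_;
            open my $fh, '<', $bower_json or die "can't read $bower_json: $!";
            my $deps = do { local $/; decode_json(<$fh>) }->{dependencies} || {};
            for my $name (keys %$deps) {
                next unless $cdn{$name};                      # no CDN known: keep the local file
                (my $version = $deps->{$name}) =~ s/^[~^]//;  # naive: assumes a near-pinned version
                my $url = sprintf $cdn{$name}, $version;
                $html =~ s{src="[^"]*bower_components/\Q$name\E/[^"]*"}{src="$url"}g;
            }
            return $html;
        }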

    Read the article

  • Production-ready alternative to Microsoft Doloto (JavaScript minifier/prefetcher)?

    - by usr
    As you may know, Microsoft Doloto is a tool that profiles your JavaScript code as it actually runs on the page and splits it into two files: one file is statically included in the footer of the page and contains stubs for all functions, loading the actual implementations (in file two) in the background (under the assumption that only very little JavaScript is needed at page load, so you can defer downloading the rest). I found Doloto not to be production ready, and it has meanwhile been canceled, AFAIK. Is there a working alternative?

    Read the article

  • How do I use beta Perl modules from beta Perl scripts?

    - by DVK
    If my Perl code has a production code location and a "beta" code location (e.g. production Perl code is in /usr/code/scripts and BETA Perl code is in /usr/code/beta/scripts; production Perl libraries are in /usr/code/lib/perl and BETA versions of those libraries are in /usr/code/beta/lib/perl), is there an easy way for me to achieve such a setup? The exact requirements are:
    - The code must be THE SAME in the production and BETA locations. To clarify, to promote any code (library or script) from BETA to production, the ONLY thing which needs to happen is literally issuing a cp command from the BETA to the production location - both the file name AND the file contents must remain identical.
    - BETA versions of scripts must call other BETA scripts, and BETA libraries if they exist (or production libraries if BETA versions do not exist).
    - The code paths must be the same between BETA and production, with the exception of the base directory (/usr/code/ vs /usr/code/beta/).
    I will present how we solved the problem as an answer to this question, but I'd like to know if there's a better way.
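
    For what it's worth, one arrangement that can satisfy all three requirements is to derive the base directory from the script's own location at runtime and layer the library paths, BETA first with production as the fallback, so the identical file works unchanged from either tree. A minimal sketch using the core FindBin and File::Basename modules, assuming the exact layout above:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use FindBin;                              # $FindBin::Bin = this script's directory
        use File::Basename qw(basename dirname);

        # /usr/code/beta/scripts -> base /usr/code/beta; /usr/code/scripts -> /usr/code
        my @lib_dirs;
        BEGIN {
            my $base = dirname($FindBin::Bin);
            @lib_dirs = basename($base) eq 'beta'
                ? ("$base/lib/perl", dirname($base) . '/lib/perl')  # BETA libs first, production fallback
                : ("$base/lib/perl");
        }
        use lib @lib_dirs;

    The same $FindBin::Bin value answers the scripts-calling-scripts requirement: invoking siblings as "$FindBin::Bin/other_script" keeps BETA scripts calling BETA scripts.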

    Read the article
