Search Results

Search found 23967 results on 959 pages for 'multiple languages'.

Page 154 of 959

  • Extract tar with multiple tars inside?

    - by Andrew Fashion
    Is there a way to untar a file with multiple tars inside? It's suppose to just untar everything inside including untarring the tars inside the tar... With windows it does it, quite annoying I can't figure it out on linux... Here is what I am doing: # tar -xvf socialengine4.0.5p1.tar core-base-4.0.5.tar core-install-4.0.7.tar external-autocompleter-4.0.0.tar external-calendar-4.0.1.tar external-chootools-4.0.3.tar external-fancyupload-4.0.1.tar external-firebug-4.0.0.tar external-flowplayer-4.0.0.tar external-moocomet-4.0.0.tar external-moocrop-4.0.0.tar external-moolasso-4.0.0.tar external-mootools-4.0.2.tar external-mootree-4.0.0.tar external-open-flash-chart-4.0.0.tar external-smoothbox-4.0.0.tar external-swfobject-4.0.0.tar external-tagger-4.0.2.tar external-tinymce-4.0.2.tar library-engine-4.0.5.tar library-facebook-4.0.0.tar library-ofc-4.0.0.tar library-pear-4.0.1.tar library-scaffold-4.0.3.tar module-activity-4.0.5p1.tar module-announcement-4.0.3.tar module-authorization-4.0.5.tar module-core-4.0.5.tar module-fields-4.0.5p1.tar module-invite-4.0.3.tar module-messages-4.0.5.tar module-network-4.0.5p1.tar module-storage-4.0.4.tar module-user-4.0.5.tar widget-rss-4.0.2.tar widget-weather-4.0.0.tar changelog.html [root@D18634 se4]# ls -l total 36980 -rw-r--r-- 1 1000 1000 27188 Oct 8 15:39 changelog.html -rw-r--r-- 1 1000 1000 359424 Oct 8 16:13 core-base-4.0.5.tar -rw-r--r-- 1 1000 1000 1122304 Oct 8 16:13 core-install-4.0.7.tar -rw-r--r-- 1 1000 1000 38400 Oct 8 16:13 external-autocompleter-4.0.0.tar -rw-r--r-- 1 1000 1000 100352 Oct 8 16:13 external-calendar-4.0.1.tar -rw-r--r-- 1 1000 1000 31232 Oct 8 16:13 external-chootools-4.0.3.tar -rw-r--r-- 1 1000 1000 66560 Oct 8 16:13 external-fancyupload-4.0.1.tar -rw-r--r-- 1 1000 1000 85504 Oct 8 16:13 external-firebug-4.0.0.tar -rw-r--r-- 1 1000 1000 216576 Oct 8 16:13 external-flowplayer-4.0.0.tar -rw-r--r-- 1 1000 1000 11776 Oct 8 16:13 external-moocomet-4.0.0.tar -rw-r--r-- 1 1000 1000 16384 Oct 8 16:13 external-moocrop-4.0.0.tar -rw-r--r-- 1 1000 1000 27648 Oct 8 16:13 external-moolasso-4.0.0.tar -rw-r--r-- 1 1000 1000 1445376 Oct 8 16:13 external-mootools-4.0.2.tar -rw-r--r-- 1 1000 1000 45568 Oct 8 16:13 external-mootree-4.0.0.tar -rw-r--r-- 1 1000 1000 330240 Oct 8 16:13 external-open-flash-chart-4.0.0.tar -rw-r--r-- 1 1000 1000 43008 Oct 8 16:13 external-smoothbox-4.0.0.tar -rw-r--r-- 1 1000 1000 18432 Oct 8 16:13 external-swfobject-4.0.0.tar -rw-r--r-- 1 1000 1000 19968 Oct 8 16:13 external-tagger-4.0.2.tar -rw-r--r-- 1 1000 1000 5711360 Oct 8 16:13 external-tinymce-4.0.2.tar -rw-r--r-- 1 1000 1000 1230848 Oct 8 16:13 library-engine-4.0.5.tar -rw-r--r-- 1 1000 1000 28672 Oct 8 16:13 library-facebook-4.0.0.tar -rw-r--r-- 1 1000 1000 125952 Oct 8 16:13 library-ofc-4.0.0.tar -rw-r--r-- 1 1000 1000 1715200 Oct 8 16:13 library-pear-4.0.1.tar -rw-r--r-- 1 1000 1000 340480 Oct 8 16:13 library-scaffold-4.0.3.tar -rw-r--r-- 1 1000 1000 354304 Oct 8 16:13 module-activity-4.0.5p1.tar -rw-r--r-- 1 root root 327680 Jan 8 02:37 module-albums-4.0.5.tar -rw-r--r-- 1 1000 1000 80896 Oct 8 16:13 module-announcement-4.0.3.tar -rw-r--r-- 1 1000 1000 147456 Oct 8 16:13 module-authorization-4.0.5.tar -rw-r--r-- 1 1000 1000 2643968 Oct 8 16:13 module-core-4.0.5.tar -rw-r--r-- 1 root root 665600 Jan 8 02:37 module-events-4.0.5.tar -rw-r--r-- 1 1000 1000 377344 Oct 8 16:13 module-fields-4.0.5p1.tar -rw-r--r-- 1 root root 501760 Jan 8 02:37 module-forum-4.0.5p1.tar -rw-r--r-- 1 1000 1000 81408 Oct 8 16:14 module-invite-4.0.3.tar -rw-r--r-- 1 1000 
1000 147968 Oct 8 16:14 module-messages-4.0.5.tar -rw-r--r-- 1 1000 1000 111616 Oct 8 16:14 module-network-4.0.5p1.tar -rw-r--r-- 1 1000 1000 99840 Oct 8 16:14 module-storage-4.0.4.tar -rw-r--r-- 1 1000 1000 844288 Oct 8 16:14 module-user-4.0.5.tar -rw-r--r-- 1 root root 18094080 Jan 8 02:40 socialengine4.0.5p1.tar -rw-r--r-- 1 1000 1000 12288 Oct 8 16:14 widget-rss-4.0.2.tar -rw-r--r-- 1 1000 1000 13824 Oct 8 16:14 widget-weather-4.0.0.tar
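
    tar has no option that recurses into the archives it extracts, so a small shell loop is the usual workaround. A minimal sketch, assuming the inner tars land in the current directory as in the listing above:

        tar -xvf socialengine4.0.5p1.tar
        for f in *.tar; do
            [ "$f" = "socialengine4.0.5p1.tar" ] && continue   # skip the outer archive itself
            tar -xvf "$f"                                      # extract each inner tar in place
        done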

    Read the article

  • How to serve static files for multiple Django projects via nginx to same domain

    - by thanley
    I am trying to setup my nginx conf so that I can serve the relevant files for my multiple Django projects. Ultimately I want each app to be available at www.example.com/app1, www.example.com/app2 etc. They all serve static files from a 'static-files' directory located in their respective project root. The project structure: Home Ubuntu Web www.example.com ref logs app app1 app1 static bower_components templatetags app1_project templates static-files app2 app2 static templates templatetags app2_project static-files app3 tests templates static-files static app3_project app3 venv When I use the conf below, there are no problems for serving the static-files for the app that I designate in the /static/ location. I can also access the different apps found at their locations. However, I cannot figure out how to serve all of the static files for all the apps at the same time. I have looked into using the 'try_files' command for the static location, but cannot figure out how to see if it is working or not. Nginx Conf - Only serving static files for one app: server { listen 80; server_name example.com; server_name www.example.com; access_log /home/ubuntu/web/www.example.com/logs/access.log; error_log /home/ubuntu/web/www.example.com/logs/error.log; root /home/ubuntu/web/www.example.com/; location /static/ { alias /home/ubuntu/web/www.example.com/app/app1/static-files/; } location /media/ { alias /home/ubuntu/web/www.example.com/media/; } location /app1/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app1; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app1.sock; } location /app2/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app2; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app2.sock; } location /app3/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app3; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app3.sock; } # what to serve if upstream is not available or crashes error_page 400 /static/400.html; error_page 403 /static/403.html; error_page 404 /static/404.html; error_page 500 502 503 504 /static/500.html; # Compression gzip on; gzip_http_version 1.0; gzip_comp_level 5; gzip_proxied any; gzip_min_length 1100; gzip_buffers 16 8k; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; # Some version of IE 6 don't handle compression well on some mime-types, # so just disable for them gzip_disable "MSIE [1-6].(?!.*SV1)"; # Set a vary header so downstream proxies don't send cached gzipped # content to IE6 gzip_vary on; } Essentially I want to have something like (I know this won't work) location /static/ { alias /home/ubuntu/web/www.example.com/app/app1/static-files/; alias /home/ubuntu/web/www.example.com/app/app2/static-files/; alias /home/ubuntu/web/www.example.com/app/app3/static-files/; } or (where it can serve the static files based on the uri) location /static/ { try_files $uri $uri/ =404; } So basically, if I use try_files like above, is the problem in my project directory structure? Or am I totally off base on this and I need to put each app in a subdomain instead of going this route? Thanks for any suggestions TLDR: I want to go to: www.example.com/APP_NAME_HERE And have nginx serve the static location: /home/ubuntu/web/www.example.com/app/APP_NAME_HERE/static-files/;
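
    One possible direction, sketched only and not tested against this layout: replace the single /static/ block with a regex location whose capture picks the project from the first path segment, so /static/app1/... maps onto app1's static-files directory. It assumes each app's STATIC_URL is changed to /static/appN/ and that nginx is built with PCRE (named captures):

        location ~ ^/static/(?<app>[^/]+)/(?<rest>.*)$ {
            alias /home/ubuntu/web/www.example.com/app/$app/static-files/$rest;
        }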

    Read the article

  • How does the GPL static vs. dynamic linking rule apply to interpreted languages?

    - by ekolis
    In my understanding, the GPL prohibits static linking from non-GPL code to GPL code, but permits dynamic linking from non-GPL code to GPL code. So which is it when the code in question is not linked at all because the code is written in an interpreted language (e.g. Perl)? It would seem to be too easy to exploit the rule if it was considered dynamic linking, but on the other hand, it would also seem to be impossible to legally reference GPL code from non-GPL code if it was considered static! Compiled languages at least have a distinction between static and dynamic linking, but when all "linking" is just running scripts, it's impossible to tell what the intent is without an explicit license! Or is my understanding of this issue incorrect, rendering the question moot? I've also heard of a "classpath exception" which involves dynamic linking; is that not part of the GPL but instead something that can be added on to it, so dynamic linking is only allowed when the license includes this exception?

    Read the article

  • Can static and dynamically typed languages be seen as different tools for different types of jobs?

    - by Erik Reppen
    Yes, similar questions have been asked but always with the aim of finding out 'which one is better.' I'm asking because I came up as a dev primarily in JavaScript and don't really have any extensive experience writing in statically typed languages. In spite of this I definitely see value in learning C for handling demanding operations at lower levels of code (which I assume has a lot to do with static vs dynamic at the compiler level), but what I'm trying to wrap my head around is whether there are specific project contexts (maybe certain types of dynamic data-intensive operations?) involving things other than performance where it makes a lot more sense to go with Java or C# vs. something like Python.

    Read the article

  • How to optimize a one language website's SEO for foreign languages?

    - by moomoochoo
    DETAILS I have a website whose content is in English. It is a niche website with a global market, but I would like users to be able to find it using their own language. The scenario I envision is that the searcher wants the English content but is searching in their own language; an example would be someone looking for "downloadable English crosswords." MY IDEAS Buy ccTLDs and have them permanently redirect to subdirectories on domain.com; the subdirectories would contain HTML sitemaps in the target language (e.g. redirect domain.fr to domain.com/fr). OR perhaps it would be better to maintain domain.fr as an independent site in the target language, with its HTML sitemap linking to pages on domain.com? QUESTION Are the above methods good or bad? What are some other ways I can optimize SEO for foreign languages?

    Read the article

  • How to be a pro in several programming languages? [on hold]

    - by trerums
    I love the PHP and C# languages and I want to be excellent in both. I like to develop PHP applications using technologies such as MySQL, Nginx and Memcached. I also like the ASP.NET MVC stack and think it is a great set of tools. But each technology requires a lot of time to master. The same is true for the C# web stack - there is a huge amount to be mastered, like Azure, LINQ, Entity Framework etc. Mastering PHP means knowing how it works under the hood; mastering C# means knowing the CLR at a deep level, knowing MSIL, and so on. Where do I get the time for all this? Is this a "jack of all trades" situation? What can you advise?

    Read the article

  • How easy is it to change languages/frameworks professionally? [closed]

    - by user924731
    Forgive me for asking a career-related question - I know that they can be frowned upon here, but I think that this one is general enough to be useful to many people. My question is: how easy or difficult is it to get a job using language/framework B when your current job uses language/framework A? For example, if you use C#/ASP.NET in your current job, how difficult would it be to get a job using Python/Django, or PHP/Zend, or whatever (the specifics of the example don't matter)? Relatedly, if you work in client-side scripting but work on server-side projects in your own time, how difficult would it be to switch to server-side work professionally? So, to sum up, does the choice of which languages/frameworks you use at work tend to box you in professionally?

    Read the article

  • Adobe Coldfusion Railo OpenBD Apache Tomcat Multiple Sites

    - by chris hough
    Here's what I am trying to do, unless I am crazy: I am trying to use Tomcat with the multiple workers, so far I got OpenBD working, but having trouble with Railo, and will be tackling Adobe after. each engine deployed as a war separated by different workers I wanted to keep both the sites and engines inside my sites directory I have to remap the symlink for the WEB-INF when I switch engines = have not found a way around this my thought is to have everything separated into modules and I want to be able to execute both cfm and php code in a single site.  Ideally, it would be amazing if there would be a way to not have to remap the symlink as well. thoughts? can this be done? I am trying to mimic how this would be setup on a live server, not using eclipse for example. here is what I am working with so far: my apache workers.properties worker.list=openbd, openbdadmin, railo, railoadmin  worker.openbd.type=ajp13  worker.openbd.host=local.mydev.openbd  worker.openbd.port=8009 worker.openbdadmin.type=ajp13  worker.openbdadmin.host=local.admin.openbd worker.openbdadmin.port=8009   worker.railo.type=ajp13  worker.railo.host=local.mydev.railo  worker.railo.port=8009 worker.railoadmin.type=ajp13  worker.railoadmin.host=local.admin.railo worker.railoadmin.port=8009   my tomcat servers.xml < Host name="local.admin.openbd" appBase="/Users/[myusername]/Websites/coldfusion.engines"  unpackWARs="false" autoDeploy="true" xmlValidation="true" xmlNamespaceAware="false"        < Context path="" docBase="openbd/" reloadable="true" privileged="true" antiResourceLocking="false" anitJARLocking="false" allowLinking="true" < /Host        < Host name="local.admin.railo"   appBase="/Users/[my username]/Websites/coldfusion.engines" unpackWARs="false" autoDeploy="true" xmlValidation="true" xmlNamespaceAware="false"        < Context path="" docBase="railo/"  reloadable="true" privileged="true" antiResourceLocking="false" anitJARLocking="false" allowLinking="true" < /Host < Host name="local.mydev.openbd"   appBase="/Users/[my username]/Websites/coldfusion.engines" unpackWARs="false" autoDeploy="true" xmlValidation="true" xmlNamespaceAware="false" < Context path="" docBase="/Users/[my username]/Websites/example.mydev/wwwroot/"  reloadable="true" privileged="true" antiResourceLocking="false" anitJARLocking="false" allowLinking="true"< /Context < /Host < Host name="local.mydev.railo"   appBase="/Users/[my username]/Websites/coldfusion.engines"  unpackWARs="false" autoDeploy="true" xmlValidation="true" xmlNamespaceAware="false" < Context path="" docBase="/Users/[my username]/Websites/example.mydev/wwwroot/"  reloadable="true" privileged="true" antiResourceLocking="false" anitJARLocking="false" allowLinking="true" < /Host my apache vhosts ServerName local.admin.openbd DocumentRoot /Users/[my username]/Websites/coldfusion.engines/openBD/ #Mount OpenBD and tell it to only server cfml files JkMount /*.cfm openbdadmin ErrorLog "/Users/[my username]/Websites/apache.logs/local_openbdadmin_error.log" ServerName local.admin.railo DocumentRoot /Users/[my username]/Websites/coldfusion.engines/railo/ #Mount Railo and tell it to only server cfml files JkMount /*.cfm railoadmin ErrorLog "/Users/[my username]/Websites/apache.logs/local_railoadmin_error.log" ServerName local.mydev DocumentRoot /Users/[my username]/Websites/example.mydev/wwwroot ErrorLog "/Users/[my username]/Websites/apache.logs/local_example_mydev_error.log" ServerName local.mydev.openbd DocumentRoot /Users/[my username]/Websites/example.mydev/wwwroot #Mount OpenBD and 
tell it to only server cfml files JkMount /*.cfm openbd ErrorLog "/Users/[my username]/Websites/apache.logs/local_example_mydev_openbd_error.log" ServerName local.mydev.railo DocumentRoot /Users/[my username]/Websites/example.mydev/wwwroot JkMount /*.cfm railo ErrorLog "/Users/[my username]/Websites/apache.logs/local_example_mydev_railo_error.log" my folder structure I am using websites/apache.logs/ websites/coldfusion.engines/ websites/coldfusion.engines/cfusion/ websites/coldfusion.engines/openBD/ websites/coldfusion.engines/railo/ websites/example.mydev/ websites/example.mydev/wwwroot/ websites/example.mydev/wwwroot/index.cfm   websites/example.mydev/wwwroot/index.htm   websites/example.mydev/wwwroot/index.php   error log output [Thu Aug 27 00:54:50.443 2009] [11279:2686719776] [info] init_jk::mod_jk.c (3183): mod_jk/1.2.28 initialized [Thu Aug 27 00:54:51.346 2009] [11280:2686719776] [info] init_jk::mod_jk.c (3183): mod_jk/1.2.28 initialized [Thu Aug 27 00:55:18.963 2009] [11284:2686719776] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8009 failed (errno=61) [Thu Aug 27 00:55:18.963 2009] [11284:2686719776] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8009) (errno=61) [Thu Aug 27 00:55:18.963 2009] [11284:2686719776] [error] ajp_send_request::jk_ajp_common.c (1507): (openbdadmin) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61) [Thu Aug 27 00:55:18.963 2009] [11284:2686719776] [info] ajp_service::jk_ajp_common.c (2447): (openbdadmin) sending request to tomcat failed (recoverable), because of error during request sending (attempt=1) [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8009 failed (errno=61) [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8009) (errno=61) [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [error] ajp_send_request::jk_ajp_common.c (1507): (openbdadmin) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61) [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [info] ajp_service::jk_ajp_common.c (2447): (openbdadmin) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2) [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [error] ajp_service::jk_ajp_common.c (2466): (openbdadmin) connecting to tomcat failed. [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=openbdadmin [Thu Aug 27 00:55:20.377 2009] [11283:2686719776] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8009 failed (errno=61) [Thu Aug 27 00:55:20.377 2009] [11283:2686719776] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8009) (errno=61) [Thu Aug 27 00:55:20.377 2009] [11283:2686719776] [error] ajp_send_request::jk_ajp_common.c (1507): (railoadmin) connecting to backend failed. 
Tomcat is probably not started or is listening on the wrong port (errno=61) [Thu Aug 27 00:55:20.377 2009] [11283:2686719776] [info] ajp_service::jk_ajp_common.c (2447): (railoadmin) sending request to tomcat failed (recoverable), because of error during request sending (attempt=1) [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8009 failed (errno=61) [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8009) (errno=61) [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [error] ajp_send_request::jk_ajp_common.c (1507): (railoadmin) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61) [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [info] ajp_service::jk_ajp_common.c (2447): (railoadmin) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2) [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [error] ajp_service::jk_ajp_common.c (2466): (railoadmin) connecting to tomcat failed. [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=railoadmin
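
    The log above shows connections to 127.0.0.1:8009 being refused for every worker, which usually means no AJP connector is listening there (note also that all four workers point at the same host and port 8009, so they share one connector). The line to check for in Tomcat's conf/server.xml, inside the <Service> element, is the standard AJP connector (shown here as a generic example, not taken from the poster's files):

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />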

    Read the article

  • General RewriteRule for many undefined parameters in URL

    - by FedericoBiccheddu
    I'm trying to write a rule that I can generalize, since different pages pass different numbers of values. Right now I have: RewriteRule ^forum/([^/]{1,255})/([\+]{1})/((([a-z]+)([_]{1})([a-zA-Z0-9]+)([/]?))+)$ forum.php?name=$1&$5=$7 [L] For a URL such as: Nome+del+Forum/+/page_1/action_do it should return: forum.php?name=Nome+del+Forum&page=1&action=do Instead it takes only the last parameter (in this case action=do): forum.php?name=Nome+del+Forum&action=do How can I fix this? Thanks in advance!
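
    Apache's regex engine keeps only the last iteration of a repeated group, so $5/$7 can never yield every key/value pair. One common workaround, sketched here with an invented "params" query variable, is to forward the whole tail and split it in the script:

        RewriteRule ^forum/([^/]{1,255})/\+/(.+)$ forum.php?name=$1&params=$2 [L,QSA]

        // in forum.php (illustrative only): turn "page_1/action_do" into page=1, action=do
        $args = array();
        foreach (explode('/', trim($_GET['params'], '/')) as $pair) {
            if ($pair === '') continue;
            list($key, $value) = explode('_', $pair, 2);
            $args[$key] = $value;   // "page_1" becomes $args['page'] = '1'
        }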

    Read the article

  • Using subselect to accomplish LEFT JOIN

    - by Andre
    Is it possible to accomplish the equivalent of a LEFT JOIN with a subselect where multiple columns are required? Here's what I mean: SELECT m.*, (SELECT * FROM model WHERE id = m.id LIMIT 1) AS models FROM make m As it stands, doing this gives me an 'Operand should contain 1 column(s)' error. Yes, I know this is possible with a LEFT JOIN, but I was told it was possible with a subselect, so I'm curious as to how it's done.
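
    The error is the scalar-subquery rule: a subselect in the SELECT list may return only one column (and one row). A sketch of the two usual shapes, using the table names from the question and assumed model columns:

        -- one subquery per wanted column, each kept single-row
        SELECT m.*,
               (SELECT name FROM model WHERE model.id = m.id LIMIT 1) AS model_name,
               (SELECT year FROM model WHERE model.id = m.id LIMIT 1) AS model_year
        FROM make m;

        -- the LEFT JOIN equivalent, which brings back all columns in one pass
        SELECT m.*, mo.*
        FROM make m
        LEFT JOIN model mo ON mo.id = m.id;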

    Read the article

  • Rails: Easy way to add more than one flash[:notice] at a time.

    - by Josh Pinter
    I thought every time you do a flash[:notice]="Message" it would add it to the array which would then get displayed during the view but the following just keeps the last flash: flash[:notice] = "Message 1" flash[:notice] = "Message 2" Now I realize it's just a simple hash with a key (I think :)) but is there a better way to do multiple flashes than the following: flash[:notice] = "Message 1<br />" flash[:notice] = "Message 2" Thanks. Josh
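
    flash[:notice] is a single hash entry, so the second assignment simply overwrites the first. A sketch of one workaround, storing an array under the key (the view markup is illustrative):

        # controller
        flash[:notice] = []
        flash[:notice] << "Message 1"
        flash[:notice] << "Message 2"

        # view
        <% Array(flash[:notice]).each do |msg| %>
          <p class="notice"><%= msg %></p>
        <% end %>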

    Read the article

  • Determine number of screens and screen relative location without WinForms

    - by Philipp Schmid
    I want to save and restore the window position of my WPF application. I want to make the code robust with multiple monitors whose number and relative location can change (I want to avoid opening my application off-screen when the monitor configuration has changed between invocations). I know of the Screen class in System.Windows.Forms, but I don't want to take a dependency on that assembly just for this feature.
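
    Without referencing System.Windows.Forms, WPF's SystemParameters (System.Windows namespace) at least exposes the bounding box of all monitors, which is enough to detect an off-screen restore; per-monitor rectangles would still need Win32 interop (EnumDisplayMonitors). A sketch, where savedBounds and window stand in for whatever was persisted and the window being restored:

        // bounding rectangle of the combined desktop across all monitors
        var virtualScreen = new Rect(
            SystemParameters.VirtualScreenLeft,  SystemParameters.VirtualScreenTop,
            SystemParameters.VirtualScreenWidth, SystemParameters.VirtualScreenHeight);

        if (virtualScreen.IntersectsWith(savedBounds))
        {
            window.Left = savedBounds.Left;   // saved position is still visible somewhere
            window.Top  = savedBounds.Top;
        }
        else
        {
            // monitor layout changed and the saved rectangle is off-screen: fall back
            window.WindowStartupLocation = WindowStartupLocation.CenterScreen;
        }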

    Read the article

  • VB.Net Split A Group Of Text

    - by Ben
    I am looking to split multiple lines of text into separate sections, for example: Url/Host:ftp://server.com/1 Login:Admin1 Password:Password1 Url/Host:ftp://server.com/2 Login:Admin2 Password:Password2 Url/Host:ftp://server.com/3 Login:Admin3 Password:Password3 How can I split each section into a different textbox, so that section one is put into TextBox1.Text on its own: Url/Host:ftp://server.com/1 Login:Admin1 Password:Password1 Thanks in advance :)!
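
    A sketch of one way to do the split, assuming the sections are separated by a blank line with CRLF line endings and that inputText holds the raw text (both names are placeholders):

        Dim blocks() As String = inputText.Split(New String() {vbCrLf & vbCrLf}, StringSplitOptions.RemoveEmptyEntries)
        TextBox1.Text = blocks(0)   ' first Url/Host + Login + Password section
        ' TextBox2.Text = blocks(1), and so on for the remaining sections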

    Read the article

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. An an example, roles can be "Writer", "Editor", "Publisher". Each role links a User to a Publication. Users can take multiple roles within multiple publications. Example table setup: "users" : user_id, firstname, lastname "publications" : publication_id, name "link_writers" : user_id, publication_id "link_editors" : user_id, publication_id Current psuedo SQL: SELECT * FROM ( (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%') UNION (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%') ) AS dt JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id At the moment my roles statement is: SELECT dt2.user_id, dt2.publication_id, dt.role FROM ( (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers) UNION (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors) ) AS dt2 The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example "publishers" might be linked accross two tables "link_publishers": user_id, publisher_group_id "link_publisher_groups": publisher_group_id, publication_id So in that instance, the query forming part of my UNION would be: SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id FROM link_publishers JOIN link_publisher_groups ON lpg.group_id = lp.group_id I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and upto 70,000 rows in each of the link tables. Initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles? -------------------------- EDIT ---------------------------------- Explain above (open in a new window to see full resolution). The bottom bit in red, is the "WHERE firstname LIKE '%Jenkz%'" the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%'. Hence the large row count, but I think this is unavoidable, unless there is a way to put an index accross concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12) which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use the dt.user_id as a comparison for the internal of the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams" etc.
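
    One restructuring worth trying, sketched against the pseudo-schema above: older MySQL versions materialise the whole derived table before joining it, so repeating the user filter inside each UNION branch lets every link table be probed through its user_id index instead of being read in full:

        SELECT 'writer' AS role, lw.user_id, lw.publication_id
        FROM link_writers lw
        JOIN users u ON u.user_id = lw.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
        UNION ALL
        SELECT 'editor', le.user_id, le.publication_id
        FROM link_editors le
        JOIN users u ON u.user_id = le.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%';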

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }
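
    On the two-drive question itself, one approach that is sometimes suggested (a sketch only, untested here): concurrent jobs writing into the same pool tend to share the already-mounted volume, so splitting the backup into jobs that use different pools gives the storage daemon a reason to mount a volume in each drive. Every directive below already appears in the posted configuration; the pool, job and fileset names are invented:

        Pool {
          Name = TapePoolA
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes
          Volume Retention = 31 days
        }
        Pool {
          Name = TapePoolB
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes
          Volume Retention = 31 days
        }
        Job {
          Name = "BackupSetA"
          JobDefs = "DefaultTapeJob"
          FileSet = "SetA"          # one half of the data
          Pool = TapePoolA
        }
        Job {
          Name = "BackupSetB"
          JobDefs = "DefaultTapeJob"
          FileSet = "SetB"          # the other half
          Pool = TapePoolB
        }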

    Read the article

  • Scripts help FIND command via atime output to multiple files

    - by sswagner
    here is a script I have wrote that I need help with. in the script I do a find for any file that has not been access for over 30 days, 60, 90, 180, 270 & 365 days. This works just fine. however, this takes a few days just to finish the 30 day portion. it is scanning a NAS. (millions and millions of files) as you see, the 30 day information really holds all the data need for the rest of the scripts. the 60, 90, etc. portion of the script are just redoing the same effort as the 30 day portion, except for an extended time frame. it would save in this case weeks worth of re-scanning if some how the 60, 90 180, etc.. portions could just get its data from the 30 day output. this is where I am asking for help. the output is just like an ls -l command. and you can also see from the output below, there are multiple years in this output. the script is attached and printed below. total 24 -rw-r--r-- 1 root bin 60 Apr 12 13:07 config_file -rw-r--r-- 1 root bin 9 Apr 12 13:07 config_file.InProgress -rw-r--r-- 1 root bin 0 Apr 12 13:07 config_file.sids -rw-r--r-- 1 root bin 1284 Apr 19 10:41 rpt_file -rw-r--r-- 1 16074 5003 20083 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/dat1_01.gif -rw-r--r-- 1 16074 5003 20088 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/set1_04.gif -rw-r--r-- 1 16074 5003 2008 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/get2_03.htm -rw-r--r-- 1 16074 5003 20083 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/per1_01.gif any help is appreciated. these are linux distro boxes, so I am sure perl is on there too if needed.. Thanks! !/bin/ksh # search shares for files that have not been accessed for a certain time. NOTE: $IN = input search $OUT = output directory for text file # TESTS Numeric arguments can be specified as # +n for greater than n, -n for less than n, n for exactly n. # -atime n File was last accessed n*24 hours ago. 
# # IN1=/nas/quota/slot_2/CR* IN2=/nas/quota/slot_3/CR* IN3=/nas/quota/slot_4/CR* IN4=/nas/quota/slot_5/CR* OUT=/nas/quota/slot_3/CR_PRJ144/steve mkdir ${OUT} for dir in ${IN1}; do find $dir -atime +30 -exec ls -l '{}' \; ${OUT}/30days.txt; done for dir in ${IN2}; do find $dir -atime +30 -exec ls -l '{}' \; ${OUT}/30days.txt; done for dir in ${IN3}; do find $dir -atime +30 -exec ls -l '{}' \; ${OUT}/30days.txt; done for dir in ${IN4}; do find $dir -atime +30 -exec ls -l '{}' \; ${OUT}/30days.txt; done for dir in ${IN1}; do find $dir -atime +60 -exec ls -l '{}' \; ${OUT}/60days.txt; done for dir in ${IN2}; do find $dir -atime +60 -exec ls -l '{}' \; ${OUT}/60days.txt; done for dir in ${IN3}; do find $dir -atime +60 -exec ls -l '{}' \; ${OUT}/60days.txt; done for dir in ${IN4}; do find $dir -atime +60 -exec ls -l '{}' \; ${OUT}/60days.txt; done for dir in ${IN1}; do find $dir -atime +90 -exec ls -l '{}' \; ${OUT}/90days.txt; done for dir in ${IN2}; do find $dir -atime +90 -exec ls -l '{}' \; ${OUT}/90days.txt; done for dir in ${IN3}; do find $dir -atime +90 -exec ls -l '{}' \; ${OUT}/90days.txt; done for dir in ${IN4}; do find $dir -atime +90 -exec ls -l '{}' \; ${OUT}/90days.txt; done for dir in ${IN1}; do find $dir -atime +180 -exec ls -l '{}' \; ${OUT}/180days.txt; done for dir in ${IN2}; do find $dir -atime +180 -exec ls -l '{}' \; ${OUT}/180days.txt; done for dir in ${IN3}; do find $dir -atime +180 -exec ls -l '{}' \; ${OUT}/180days.txt; done for dir in ${IN4}; do find $dir -atime +180 -exec ls -l '{}' \; ${OUT}/180days.txt; done for dir in ${IN1}; do find $dir -atime +270 -exec ls -l '{}' \; ${OUT}/270days.txt; done for dir in ${IN2}; do find $dir -atime +270 -exec ls -l '{}' \; ${OUT}/270days.txt; done for dir in ${IN3}; do find $dir -atime +270 -exec ls -l '{}' \; ${OUT}/270days.txt; done for dir in ${IN4}; do find $dir -atime +270 -exec ls -l '{}' \; ${OUT}/270days.txt; done for dir in ${IN1}; do find $dir -atime +365 -exec ls -l '{}' \; ${OUT}/365days.txt; done for dir in ${IN2}; do find $dir -atime +365 -exec ls -l '{}' \; ${OUT}/365days.txt; done for dir in ${IN3}; do find $dir -atime +365 -exec ls -l '{}' \; ${OUT}/365days.txt; done for dir in ${IN4}; do find $dir -atime +365 -exec ls -l '{}' \; ${OUT}/365days.txt; done
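
    A sketch of how the repeated scans could collapse into a single pass (assumes GNU find; the output becomes "epoch-atime path" rather than ls -l format, so anything consuming these files would need adjusting). Because every later bucket is a subset of the 30-day one, a single listing that records the access time is enough:

        # one walk over the NAS, recording each file's atime in seconds since the epoch
        find /nas/quota/slot_2/CR* /nas/quota/slot_3/CR* /nas/quota/slot_4/CR* /nas/quota/slot_5/CR* \
            -type f -printf '%A@ %p\n' > ${OUT}/atime_all.txt

        now=$(date +%s)
        awk -v now="$now" -v out="${OUT}" '{
            age = (now - $1) / 86400                      # days since last access
            if (age > 30)  print > (out "/30days.txt")
            if (age > 60)  print > (out "/60days.txt")
            if (age > 90)  print > (out "/90days.txt")
            if (age > 180) print > (out "/180days.txt")
            if (age > 270) print > (out "/270days.txt")
            if (age > 365) print > (out "/365days.txt")
        }' ${OUT}/atime_all.txt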

    Read the article

  • Unexpected multiple network connections on Windows Vista

    - by Jens
    My Network and Sharing Center shows multiple connections to the internet, where only one is expected: My internet access works fine, but since the "Unidentified Network" is set to public after each boot, sharing and network discovery don't work as well. Similar questions on Google point mostly to the Bonjour service, but I am sure that this is not, and never was, installed on this machine. So: How can I get rid of the unidentified network? Output of ipconfig /all: Windows IP Configuration Host Name . . . . . . . . . . . . : ***** Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : mySuffix Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : mySuffix Description . . . . . . . . . . . : Intel(R) 82567LF-3 Gigabit Network Connection Physical Address. . . . . . . . . : 00-19-99-65-F0-B2 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::c90:2d23:7651:42f%10(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.141.130(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : 13 November 2012 09:40:54 Lease Expires . . . . . . . . . . : 21 November 2012 09:45:01 Default Gateway . . . . . . . . . : 192.168.141.109 192.168.141.108 DHCP Server . . . . . . . . . . . : 192.168.141.120 DHCPv6 IAID . . . . . . . . . . . : 218110361 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-DD-00-AF-00-19-99-65-F0-B2 DNS Servers . . . . . . . . . . . : 8.8.8.8 8.8.4.4 NetBIOS over Tcpip. . . . . . . . : Enabled Tunnel adapter Local Area Connection* 13: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : mySuffix Description . . . . . . . . . . . : Microsoft ISATAP Adapter Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes

    Read the article

  • Passing multiple parameters in an MVC Ajax.ActionLink

    - by mwright
    I am using an Ajax.ActionLink to call an Action in a Controller, nothing special there. I want to pass two parameters to the Action. Is this possible using an Ajax.ActionLink? I thought that it would just be a matter of including multiple values in the AjaxOptions: <%= Ajax.ActionLink("Link Text", "ActionName", "ControllerName", new { firstParameter = firstValueToPass, secondParameter = secondValueToPass }, new AjaxOptions{ UpdateTargetId = "updateTargetId"} )%> Is it possible to pass multiple parameters? Where is a good place to learn more about the AjaxOptions?
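
    For what it's worth, the extra values travel in the routeValues argument (the anonymous object in the call above), not in AjaxOptions; the action method only needs parameters with matching names for model binding to pick them up. A sketch, with the return type and view name as placeholders:

        public ActionResult ActionName(string firstParameter, string secondParameter)
        {
            // both values arrive from the query string generated by Ajax.ActionLink
            return PartialView("_SomePartial");
        }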

    Read the article

  • Defining multiple values in DefineConstants in MsBuild element?

    - by Sardaukar
    I'm currently integrating my Wix projects in MSBuild. It is necessary for me to pass multiple values to the Wix project. One value will work (ProductVersion in the sample below). <Target Name="BuildWixSetups"> <MSBuild Condition="'%(WixSetups.Identity)'!=''" Projects="%(WixSetups.Identity)" Targets="Rebuild" Properties="Configuration=Release;OutputPath=$(OutDir);DefineConstants=ProductVersion=%(WixSetups.ISVersion)" ContinueOnError="true"/> </Target> However, how do I pass multiple values to the DefineConstants key? I've tried all the 'logical' separators (space, comma, semi-colon, pipe-symbol), but this doesn't work. Has someone else come across this problem? Solutions that don't work: Trying to add a DefineConstants element does not work because DefineConstants needs to be expressed within the Properties attribute.
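
    One workaround that is commonly used with the MSBuild task, sketched here with an invented second constant: escape the inner separator as %3B so the Properties attribute is still split only on the outer semicolons:

        <MSBuild Condition="'%(WixSetups.Identity)'!=''"
                 Projects="%(WixSetups.Identity)"
                 Targets="Rebuild"
                 Properties="Configuration=Release;OutputPath=$(OutDir);DefineConstants=ProductVersion=%(WixSetups.ISVersion)%3BSomeOtherConstant=SomeValue"
                 ContinueOnError="true" />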

    Read the article

  • RowFilter.regexFilter multiple columns

    - by twodayslate
    I am currently using the following to filter my JTable: RowFilter.regexFilter( Pattern.compile(textField.getText(), Pattern.CASE_INSENSITIVE).toString(), columns ); How do I format my textField or the filter so that I can filter on multiple columns at once? Right now I can filter across multiple columns, but the filter text can only match in one of the columns. An example might explain it better:

        Name  Grade  GPA
        Zac   A      4.0
        Zac   F      1.0
        Mike  A      4.0
        Dan   C      2.0

    The text field would contain "Zac A" or something similar, and it should show the first Zac row if columns were int[]{0, 1}. Right now if I do the above I get nothing. The filter "Zac" works, but I get both Zac's. "A" also works, but I would then get Zac A 4.0 and Mike A 3.0. I hope I have explained my problem well. Please let me know if you do not understand.
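
    One way to get the "Zac A" behaviour, sketched with sorter standing in for the TableRowSorter the filter is installed on: split the text field into tokens and AND together one regex filter per token, each allowed to match in any of the chosen columns:

        // needs javax.swing.RowFilter, java.util.List/ArrayList and java.util.regex.Pattern
        List<RowFilter<Object, Object>> filters = new ArrayList<>();
        for (String token : textField.getText().trim().split("\\s+")) {
            if (token.isEmpty()) continue;                 // ignore an empty text field
            filters.add(RowFilter.regexFilter("(?i)" + Pattern.quote(token), columns));
        }
        sorter.setRowFilter(RowFilter.andFilter(filters));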

    Read the article

  • idioms for returning multiple values in shell scripting

    - by Wang
    Are there any idioms for returning multiple values from a bash function within a script? http://tldp.org/LDP/abs/html/assortedtips.html describes how to echo multiple values and process the results (e.g., example 35-17), but that gets tricky if some of the returned values are strings with spaces in. A more structured way to return would be to assign to global variables, like foo () { FOO_RV1="bob" FOO_RV2="bill" } foo echo "foo returned ${FOO_RV1} and ${FOO_RV2}" I realize that if I need re-entrancy in a shell script I'm probably doing it wrong, but I still feel very uncomfortable throwing global variables around just to hold return values. Is there a better way? I would prefer portability, but it's probably not a real limitation if I have to specify #!/bin/bash.
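
    A couple of bash-specific sketches that avoid plain globals; the function and variable names are made up, and both keep embedded spaces intact by being explicit about the delimiter:

        # 1) print with a delimiter the values cannot contain, read back with that IFS
        foo() {
            local rv1="bob smith" rv2="bill"
            printf '%s\t%s\n' "$rv1" "$rv2"
        }
        IFS=$'\t' read -r first second < <(foo)
        echo "foo returned ${first} and ${second}"

        # 2) bash 4.3+ only: let the caller name the output variables (namerefs)
        bar() {
            local -n _rv1=$1 _rv2=$2
            _rv1="bob smith"
            _rv2="bill"
        }
        bar x y
        echo "bar returned ${x} and ${y}"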

    Read the article
