Search Results

Search found 25414 results on 1017 pages for 'default arguments'.

  • How to disable VGA

    - by Bitmap
    If I run lspci | grep VGA I get the output below, which tells me two VGA cards are present in my computer:

        01:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2)
        08:02.0 VGA compatible controller: ATI Technologies Inc ES1000 (rev 02)

    The ES1000 is an onboard card which came with my machine. Does anyone know how to disable this VGA card on my machine? The reason for this request is that if I run xrandr I get the output shown below:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 240, current 1024 x 768, maximum 1024 x 768
        default connected 1024x768+0+0 0mm x 0mm
           1024x768       50.0*
           800x600        51.0     52.0     53.0
           680x384        54.0     55.0
           640x480        56.0
           512x384        57.0
           400x300        58.0
           320x240        59.0

    This means I am not able to configure the nVidia card to accept a smaller resolution. Thank you.
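    One approach, sketched below on the assumption that X should simply ignore the onboard ATI card, is to pin the X server to the nVidia adapter by its PCI bus ID (taken from the lspci output above); alternatively, many machines let you disable the onboard adapter outright in the BIOS. A minimal /etc/X11/xorg.conf along these lines:

        # Bind X to the nVidia card only; BusID "PCI:1:0:0" is 01:00.0 in lspci
        Section "Device"
            Identifier "GeForce GT 220"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection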

  • 12.04 monitor brightness commands ignored

    - by jarvisschultz
    I cannot change the brightness of the monitor on a laptop, either from the command line or from keyboard shortcuts. I have verified that the value in /sys/class/backlight/acpi_video0/brightness is changing. I have also verified that I am running the nvidia drivers. Things I have tried:

    Adding entries to /etc/default/grub. The GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX entries now read:

        GRUB_CMDLINE_LINUX_DEFAULT="quite nosplash acpi_backlight=vendor"
        GRUB_CMDLINE_LINUX="noapic"

    I have also tried various other commands (with an update-grub and reboot each time), but nothing has helped.

    I enabled brightness control in /etc/X11/xorg.conf so that it now looks like:

        Section "Device"
            Identifier "Default Device"
            Driver "nvidia"
            Option "NoLogo" "True"
            Option "RegistryDwords" "EnableBrightnessControl=1"
        EndSection

    I investigated installing linux-kamal-mjgbacklight and determined it was not applicable to my system.

    Nothing seems to have made any difference. I am using an Nvidia GeForce GT 330M with driver version 295.40. Does anyone have any other suggestions?
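    Since the sysfs value changes while the panel does not, one diagnostic sketch (interface names vary per machine; acpi_video0 is from the question above, anything else is an example) is to write to each exposed backlight interface in turn and see which one actually drives the panel:

        # List every backlight interface the kernel exposes:
        for b in /sys/class/backlight/*; do
            echo "$b: $(cat "$b/brightness")/$(cat "$b/max_brightness")"
        done

        # Then try a mid-range value on each candidate, e.g.:
        echo 5 | sudo tee /sys/class/backlight/acpi_video0/brightness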

  • Settings messed up after crash

    - by ChocoDeveloper
    After an abrupt shutdown, many settings were messed up:

    #1 Firefox was a mess. Bookmarks were gone, and I couldn't even add new ones. I had to reset Firefox from safe mode, then install all my addons and configure everything. This was a pain, but it is now solved.

    #2 The background in the login screen shows the one I chose with Ubuntu Tweak for a second, and then reverts to the default one. I tried changing it again with Ubuntu Tweak, but it's still happening.

    #3 All my shortcuts in the sidebar were replaced by the default ones. I re-added them manually, which was also a pain.

    So how can I solve #2? And in case this happens again, is there a way to fix everything quickly and easily?
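    For #2, a hedged sketch: on Unity-era releases the greeter reads its wallpaper from the lightdm user's own gsettings, so setting that key directly (the com.canonical.unity-greeter schema name is an assumption for this release) sidesteps whatever Ubuntu Tweak writes:

        # Let the lightdm user talk to the display, then set its wallpaper key:
        sudo xhost +SI:localuser:lightdm
        sudo su -s /bin/bash -c \
            'gsettings set com.canonical.unity-greeter background "/usr/share/backgrounds/my-wallpaper.jpg"' lightdm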

  • Problem installing Ubuntu 13.10 alongside Windows 8

    - by kustrle
    What I have: a Sony Vaio laptop (SVE1512E6EW) with preinstalled Windows 8. I disabled Secure Boot some time ago in the BIOS. I previously had Ubuntu (an earlier version) installed on it, but removed it some time ago. Since then, the default Windows boot menu shows up every time I boot the computer, and the only entry is Windows 8.

    What I did:

    1. Burned an Ubuntu 13.10 DVD
    2. Restarted the computer and booted from it
    3. Chose Install Ubuntu (not Try Ubuntu)
    4. Created a new ext4 partition from free space
    5. Installed Ubuntu on it

    What happened: after the installation I restarted the computer. The default Windows boot menu showed up (just as before), and the only entry was still Windows 8. If you have any additional questions I will try to answer them as quickly as possible.
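    This symptom usually means GRUB never made it into the boot path. A hedged sketch of one common fix - reinstalling GRUB from the live DVD via a chroot (the device names /dev/sda and /dev/sda5 are placeholders; substitute your disk and the new ext4 root partition):

        # From the Ubuntu live session:
        sudo mount /dev/sda5 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
        sudo chroot /mnt grub-install /dev/sda   # put GRUB on the disk
        sudo chroot /mnt update-grub             # regenerate the menu (should find Windows 8)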

  • Monitor resolution can't be saved

    - by Iztok
    Today I installed Lubuntu 13.10 in VMware Player (inside Windows). I changed the monitor setting (resolution) from the default 800x600 to 1680x1050, and it works. Besides Apply, I also pressed the Save button, and "Changes are saved" appeared. But after a restart, the resolution is back to 800x600. I also opened /etc/xdg/lxsession/Lubuntu/autostart and added one line (the file was empty before):

        @xrandr --mode 1680x1050

    After a restart the default resolution is back again. Any idea?
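    One likely culprit, sketched below: xrandr's --mode switch needs an explicit --output to apply to, so the autostart line as written probably fails silently. The output name Virtual1 is only an example; query xrandr for the real one first:

        # Find the connected output's name:
        xrandr -q | grep ' connected'

        # Then name it in the autostart line, e.g.:
        # @xrandr --output Virtual1 --mode 1680x1050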

  • amixer volume controls apply twice

    - by user214604
    The volume increments or decrements by double the intended amount when I use amixer with my ALSA driver, via the ./amixer -c 0 set Master 1- command. This happens because, by default, volume controls apply to both the playback and capture modules. My ALSA driver config doesn't enable any of the capture controls, yet even though no capture is enabled, this function from simple_none.c returns true for the capture channel, so all the capture volume changes are applied to my playback driver:

        static int is_ops(snd_mixer_elem_t *elem, int dir, int cmd, int val)
        ...
        case SM_OPS_IS_CHANNEL:
            return (unsigned int) val < s->str[dir].channels;

        ./amixer -c 0 set Master Playback 10+
        ./amixer -c 0 set Master Playback 10-
        ./amixer -c 0 set Master Capture 10+
        ./amixer -c 0 set Master Capture 10-

    I suspect capture is enabled by default for ALSA drivers on my system. Let me know what I need to check in order to disable capture.
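    A quick way to test that diagnosis from the shell, assuming the double step really is one command touching both directions, is to inspect the control's reported capabilities and then restrict a change to playback explicitly:

        # Show which capabilities (pvolume, cvolume, ...) Master reports:
        amixer -c 0 get Master

        # Change only the playback direction; if this moves the volume by a
        # single step, the capture side was being dragged along before:
        amixer -c 0 set Master playback 1-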

  • Does home directory encryption depend on gnome keyring?

    - by pedorro
    My gnome-keyring has somehow gotten messed up. It prompts for a password that I know I never provided (yes, I chose 'unsafe storage'). None of the passwords I might have used (including an empty one) works. So basically I want to delete the default keyring so I can start over. I just want to confirm that this isn't somehow tied to my home directory encryption: if I delete the default keyring, will I still be able to log in normally and decrypt my home directory? It seems likely that they're unrelated, as the keyring lives within the home directory and is thus itself encrypted, but I just thought I'd ask. Anyone have any thoughts?
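    For what it's worth, the two stores live in different places, which is easy to verify from a shell; the keyring directory moved between releases, so both usual candidates are shown below:

        # The keyring files - removing these only forces a fresh, empty
        # keyring to be created at the next login:
        ls ~/.gnome2/keyrings/ ~/.local/share/keyrings/ 2>/dev/null

        # The ecryptfs home-encryption material is kept separately, keyed to
        # your login passphrase, and is untouched by keyring changes:
        ls ~/.ecryptfs/wrapped-passphrase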

  • A Safe Way to Allow Upload of All File Types?

    - by user34682
    By default, WordPress restricts the file types that can be uploaded to /uploads using the default Media Manager. I know it is possible to manually extend the allowed file types. I also know it is possible to change functions.php to allow ALL file types to be uploaded. This restriction obviously exists for security reasons - e.g. someone could upload a harmful .exe. Would it not be possible to allow secure upload of all file types by setting the permissions of the /uploads directory to prevent execution of any of its contents? Then it wouldn't matter if someone uploaded a harmful file, because it would not be executable on the server.
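    A sketch of that idea for an Apache host (the directive assumes Apache 2.4 syntax and that AllowOverride permits it; note it stops the server from executing scripts in /uploads, but visitors could still download a malicious file and run it locally). Contents of a .htaccess placed in wp-content/uploads:

        # Refuse to execute PHP anywhere under the uploads tree:
        <FilesMatch "\.(php|phtml|php5)$">
            Require all denied
        </FilesMatch>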

  • Can't view order in magento

    - by koko
    Hi, I've been setting up a fresh Magento 1.4.0.1 install, working great so far. I did some test orders just to see. Everything works fine, but when I click on "view order" under "my orders", I get a bunch of error messages:

        There has been an error processing your request
        Notice: iconv_substr() [function.iconv-substr]: Unknown error (0) in /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Helper/String.php on line 98

        Trace:
        #0 [internal function]: mageCoreErrorHandler(8, 'iconv_substr() ...', '/data/web/A1423...', 98, Array)
        #1 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Helper/String.php(98): iconv_substr('1', 0, 50, 'UTF-8')
        #2 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Helper/String.php(173): Mage_Core_Helper_String->substr('1', 0, 50)
        #3 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Helper/String.php(112): Mage_Core_Helper_String->str_split('1', 50)
        #4 /data/web/A14237/htdocs/magento/app/design/frontend/base/default/template/sales/order/items/renderer/default.phtml(58): Mage_Core_Helper_String->splitInjection('1')
        #5 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(189): include('/data/web/A1423...')
        #6 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(225): Mage_Core_Block_Template->fetchView('frontend/base/d...')
        #7 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(242): Mage_Core_Block_Template->renderView()
        #8 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Template->_toHtml()
        #9 /data/web/A14237/htdocs/magento/app/code/core/Mage/Sales/Block/Items/Abstract.php(137): Mage_Core_Block_Abstract->toHtml()
        #10 /data/web/A14237/htdocs/magento/app/design/frontend/base/default/template/sales/order/items.phtml(52): Mage_Sales_Block_Items_Abstract->getItemHtml(Object(Mage_Sales_Model_Order_Item))
        #11 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(189): include('/data/web/A1423...')
        #12 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(225): Mage_Core_Block_Template->fetchView('frontend/base/d...')
        #13 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(242): Mage_Core_Block_Template->renderView()
        #14 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Template->_toHtml()
        #15 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(516): Mage_Core_Block_Abstract->toHtml()
        #16 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(467): Mage_Core_Block_Abstract->_getChildHtml('order_items', true)
        #17 /data/web/A14237/htdocs/magento/app/design/frontend/base/default/template/sales/order/view.phtml(64): Mage_Core_Block_Abstract->getChildHtml('order_items')
        #18 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(189): include('/data/web/A1423...')
        #19 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(225): Mage_Core_Block_Template->fetchView('frontend/base/d...')
        #20 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(242): Mage_Core_Block_Template->renderView()
        #21 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Template->_toHtml()
        #22 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(516): Mage_Core_Block_Abstract->toHtml()
        #23 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(463): Mage_Core_Block_Abstract->_getChildHtml('sales.order.vie...', true)
        #24 /data/web/A14237/htdocs/magento/app/code/core/Mage/Page/Block/Html/Wrapper.php(52): Mage_Core_Block_Abstract->getChildHtml('', true, true)
        #25 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Page_Block_Html_Wrapper->_toHtml()
        #26 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Text/List.php(43): Mage_Core_Block_Abstract->toHtml()
        #27 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Text_List->_toHtml()
        #28 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(516): Mage_Core_Block_Abstract->toHtml()
        #29 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(467): Mage_Core_Block_Abstract->_getChildHtml('content', true)
        #30 /data/web/A14237/htdocs/magento/app/design/frontend/base/default/template/page/2columns-left.phtml(48): Mage_Core_Block_Abstract->getChildHtml('content')
        #31 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(189): include('/data/web/A1423...')
        #32 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(225): Mage_Core_Block_Template->fetchView('frontend/base/d...')
        #33 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Template.php(242): Mage_Core_Block_Template->renderView()
        #34 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Block/Abstract.php(674): Mage_Core_Block_Template->_toHtml()
        #35 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Model/Layout.php(536): Mage_Core_Block_Abstract->toHtml()
        #36 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Controller/Varien/Action.php(389): Mage_Core_Model_Layout->getOutput()
        #37 /data/web/A14237/htdocs/magento/app/code/core/Mage/Sales/controllers/OrderController.php(100): Mage_Core_Controller_Varien_Action->renderLayout()
        #38 /data/web/A14237/htdocs/magento/app/code/core/Mage/Sales/controllers/OrderController.php(136): Mage_Sales_OrderController->_viewAction()
        #39 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Controller/Varien/Action.php(418): Mage_Sales_OrderController->viewAction()
        #40 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Controller/Varien/Router/Standard.php(254): Mage_Core_Controller_Varien_Action->dispatch('view')
        #41 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Controller/Varien/Front.php(177): Mage_Core_Controller_Varien_Router_Standard->match(Object(Mage_Core_Controller_Request_Http))
        #42 /data/web/A14237/htdocs/magento/app/code/core/Mage/Core/Model/App.php(304): Mage_Core_Controller_Varien_Front->dispatch()
        #43 /data/web/A14237/htdocs/magento/app/Mage.php(596): Mage_Core_Model_App->run(Array)
        #44 /data/web/A14237/htdocs/magento/index.php(78): Mage::run('', 'store')
        #45 {main}

    gtx, koko
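    Since the trace bottoms out in a plain iconv_substr() call, a first sanity check - independent of Magento - is whether the host's iconv extension can handle that exact call (the CLI may load a different configuration than mod_php, so ideally check both):

        # Reproduce the failing call from frame #1 of the trace:
        php -r 'var_dump(iconv_substr("1", 0, 50, "UTF-8"));'

        # And confirm which iconv implementation is loaded:
        php -i | grep -i iconv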

  • Grails - Simple hasMany Problem - Using CheckBoxes rather than HTML Select in create.gsp

    - by gav
    My problem is this: I want to create a Grails domain instance, defining the 'many' instances of another domain that it has. I have the actual source in a Google Code project, but the following should illustrate the problem.

        class Person {
            String name
            static hasMany = [skills:Skill]
            static constraints = {
                id (visible:false)
                skills (nullable:false, blank:false)
            }
        }

        class Skill {
            String name
            String description
            static constraints = {
                id (visible:false)
                name (nullable:false, blank:false)
                description (nullable:false, blank:false)
            }
        }

    If you use this model and def scaffold for the two controllers, then you end up with a form that doesn't work. My own attempt to get this to work enumerates the Skills as checkboxes, but when I save the Volunteer the skills are null! This is the code for my save method:

        def save = {
            log.info "Saving: " + params.toString()
            def skills = params.skills
            log.info "Skills: " + skills
            def volunteerInstance = new Volunteer(params)
            log.info volunteerInstance
            if (volunteerInstance.save(flush: true)) {
                flash.message = "${message(code: 'default.created.message', args: [message(code: 'volunteer.label', default: 'Volunteer'), volunteerInstance.id])}"
                redirect(action: "show", id: volunteerInstance.id)
                log.info volunteerInstance
            } else {
                render(view: "create", model: [volunteerInstance: volunteerInstance])
            }
        }

    This is my log output (I have custom toString() methods):

        2010-05-10 21:06:41,494 [http-8080-3] INFO bumbumtrain.VolunteerController - Saving: ["skills":["1", "2"], "name":"Ian", "_skills":["", ""], "create":"Create", "action":"save", "controller":"volunteer"]
        2010-05-10 21:06:41,495 [http-8080-3] INFO bumbumtrain.VolunteerController - Skills: [1, 2]
        2010-05-10 21:06:41,508 [http-8080-3] INFO bumbumtrain.VolunteerController - Volunteer[ id: null | Name: Ian | Skills [Skill[ id: 1 | Name: Carpenter ] , Skill[ id: 2 | Name: Sound Engineer ] ]]

    Note that in the final log line the right Skills have been picked up and are part of the object instance. When the Volunteer is saved, the Skills are ignored and not committed to the database, despite the in-memory version clearly having the items. Is it not possible to pass the Skills at construction time? There must be a way round this. I need a single form to allow a person to register, but I want to normalise the data so that I can add more skills at a later time. If you think this should 'just work' then a link to a working example would be great. If I use the HTML select then it works fine! Such as the following to make the create page:

        <tr class="prop">
            <td valign="top" class="name">
                <label for="skills"><g:message code="volunteer.skills.label" default="Skills" /></label>
            </td>
            <td valign="top" class="value ${hasErrors(bean: volunteerInstance, field: 'skills', 'errors')}">
                <g:select name="skills" from="${uk.co.bumbumtrain.Skill.list()}" multiple="yes" optionKey="id" size="5" value="${volunteerInstance?.skills}" />
            </td>
        </tr>

    But I need it to work with checkboxes, like this:

        <tr class="prop">
            <td valign="top" class="name">
                <label for="skills"><g:message code="volunteer.skills.label" default="Skills" /></label>
            </td>
            <td valign="top" class="value ${hasErrors(bean: volunteerInstance, field: 'skills', 'errors')}">
                <g:each in="${skillInstanceList}" status="i" var="skillInstance">
                    <label for="${skillInstance?.name}"><g:message code="${skillInstance?.name}.label" default="${skillInstance?.name}" /></label>
                    <g:checkBox name="skills" value="${skillInstance?.id.toString()}"/>
                </g:each>
            </td>
        </tr>

    The log output is exactly the same! With both styles of form, the Volunteer instance is created with the Skills correctly referenced in the 'skills' variable. When saving, the latter fails with a null reference exception, as shown at the top of this question. Hope this makes sense. Thanks in advance! Gav

  • "Launch Failed. Binary Not Found." Snow Leopard and Eclipse C/C++ IDE issue.

    - by Alex
    Not a question - I've just scoured the internet in search of a solution for this problem and thought I'd share it with the good folks of SO. I'll put it in plain terms so that it's accessible to newbs. :) (Apologies if this is the wrong place - just trying to be helpful.)

    This issue occurs for almost any user of OS X Snow Leopard who tries to use the Eclipse C/C++ IDE, but is particularly annoying for the people (like me) who were using the Eclipse C/C++ IDE on Leopard and were unable to work with Eclipse anymore when they upgraded. The issue occurs when users go to build/compile/link their software. They get the following error:

        Launch Failed. Binary Not Found.

    Further, the "binaries" branch in the project window on the left is simply nonexistent.

    THE PROBLEM: GCC 4.2 (the GNU Compiler Collection), which comes with Snow Leopard, compiles binaries in 64-bit by default. Unfortunately, the linker that Eclipse uses does not understand 64-bit binaries; it reads 32-bit binaries. There may be other issues here, but in short, they culminate in no binary being generated - at least not one that Eclipse can read - which translates into Eclipse not finding the binaries. Hence the error. One solution is to add an -arch i386 flag when making the file, but manually making the file every time is annoying. Luckily for us, Snow Leopard also comes with GCC 4.0, which compiles in 32 bits by default. So one solution is merely to link this as the default compiler. This is the way I did it.

    THE SOLUTION: The GCCs are in /usr/bin, which is normally a hidden folder, so you can't see it in the Finder unless you explicitly tell the system that you want to see hidden folders. What you want to do is go to the /usr/bin folder, delete the links that point the gcc commands at GCC 4.2, and add links that point them at GCC 4.0. In other words, when you or Eclipse try to access GCC, we want the command to go to the compiler that builds in 32 bits by default, not to the compiler that compiles in 64 bits. The best way to do this is to go to Applications/Utilities and open the app called Terminal. A text prompt should come up, saying something like "(Computer Name):~ (Username)$ " (with a space for your input at the end). Enter the following commands in sequence VERBATIM, pressing enter after each individual line:

        cd /usr/bin
        rm cc gcc c++ g++
        ln -s gcc-4.0 cc
        ln -s gcc-4.0 gcc
        ln -s c++-4.0 c++
        ln -s g++-4.0 g++

    Like me, you will probably get an error telling you that you don't have permission to access these files. If so, try the following commands instead:

        cd /usr/bin
        sudo rm cc gcc c++ g++
        sudo ln -s gcc-4.0 cc
        sudo ln -s gcc-4.0 gcc
        sudo ln -s c++-4.0 c++
        sudo ln -s g++-4.0 g++

    Sudo may prompt you for a password. If you've never used sudo before, try just pressing enter. If that doesn't work, try the password for your main admin account.

    OTHER POSSIBLE SOLUTIONS: You may be able to enter build variables into Eclipse. I tried this, but I don't know enough about it. If you want to feel it out, the flag you will probably need is -arch i386. In earnest, GCC 4.0 worked for me all this time, and I don't see any reason to switch now. There may be a way to alter the default for the compiler itself, but once again, I don't know enough about it.

    Hope this has been helpful and informative. Good coding!
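    A lighter-weight alternative sketch to relinking /usr/bin (not part of the original post): force 32-bit output per compile with the arch flag and verify the result, leaving the system compilers untouched:

        # Compile a test file as 32-bit with the stock GCC 4.2:
        gcc -arch i386 -o hello hello.c

        # Confirm the binary is 32-bit Intel, which Eclipse's launcher can read:
        file hello    # expect: "Mach-O executable i386"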

  • just can't get a controller to work

    - by Asaf
    I'm trying to browse to mysite/user, so that application/classes/controller/user.php handles the request. Code of controller/user.php:

        <?php defined('SYSPATH') OR die('No direct access allowed.');

        class Controller_User extends Controller_Default {

            public $template = 'user';

            function action_index() {
                //$view = View::factory('user');
                //$view->render(TRUE);
                $this->template->message = 'hello, world!';
            }
        }
        ?>

    Code of controller/default.php:

        <?php defined('SYSPATH') OR die('No direct access allowed.');

        class Controller_default extends Controller_Template {
        }

    bootstrap.php:

        <?php defined('SYSPATH') or die('No direct script access.');

        //-- Environment setup --------------------------------------------------------

        /**
         * Set the default time zone.
         *
         * @see http://kohanaframework.org/guide/using.configuration
         * @see http://php.net/timezones
         */
        date_default_timezone_set('America/Chicago');

        /**
         * Set the default locale.
         *
         * @see http://kohanaframework.org/guide/using.configuration
         * @see http://php.net/setlocale
         */
        setlocale(LC_ALL, 'en_US.utf-8');

        /**
         * Enable the Kohana auto-loader.
         *
         * @see http://kohanaframework.org/guide/using.autoloading
         * @see http://php.net/spl_autoload_register
         */
        spl_autoload_register(array('Kohana', 'auto_load'));

        /**
         * Enable the Kohana auto-loader for unserialization.
         *
         * @see http://php.net/spl_autoload_call
         * @see http://php.net/manual/var.configuration.php#unserialize-callback-func
         */
        ini_set('unserialize_callback_func', 'spl_autoload_call');

        //-- Configuration and initialization -----------------------------------------

        /**
         * Initialize Kohana, setting the default options.
         *
         * The following options are available:
         *
         * - string   base_url    path, and optionally domain, of your application   NULL
         * - string   index_file  name of your index file, usually "index.php"       index.php
         * - string   charset     internal character set used for input and output   utf-8
         * - string   cache_dir   set the internal cache directory                   APPPATH/cache
         * - boolean  errors      enable or disable error handling                   TRUE
         * - boolean  profile     enable or disable internal profiling               TRUE
         * - boolean  caching     enable or disable internal caching                 FALSE
         */
        Kohana::init(array(
            'base_url'   => '/mysite/',
            'index_file' => FALSE,
        ));

        /**
         * Attach the file write to logging. Multiple writers are supported.
         */
        Kohana::$log->attach(new Kohana_Log_File(APPPATH.'logs'));

        /**
         * Attach a file reader to config. Multiple readers are supported.
         */
        Kohana::$config->attach(new Kohana_Config_File);

        /**
         * Enable modules. Modules are referenced by a relative or absolute path.
         */
        Kohana::modules(array(
            'auth'       => MODPATH.'auth',       // Basic authentication
            'cache'      => MODPATH.'cache',      // Caching with multiple backends
            'codebench'  => MODPATH.'codebench',  // Benchmarking tool
            'database'   => MODPATH.'database',   // Database access
            'image'      => MODPATH.'image',      // Image manipulation
            'orm'        => MODPATH.'orm',        // Object Relationship Mapping
            'pagination' => MODPATH.'pagination', // Paging of results
            'userguide'  => MODPATH.'userguide',  // User guide and API documentation
        ));

        /**
         * Set the routes. Each route must have a minimum of a name, a URI and a set of
         * defaults for the URI.
         */
        Route::set('default', '(<controller>(/<action>(/<id>)))')
            ->defaults(array(
                'controller' => 'welcome',
                'action'     => 'index',
            ));

        /**
         * Execute the main request. A source of the URI can be passed, eg: $_SERVER['PATH_INFO'].
         * If no source is specified, the URI will be automatically detected.
         */
        echo Request::instance()
            ->execute()
            ->send_headers()
            ->response;
        ?>

    .htaccess:

        RewriteEngine On
        RewriteBase /mysite/
        RewriteRule ^(application|modules|system) - [F,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]

    Going to http://localhost/ brings up the "hello world" page from welcome.php. Going to http://localhost/mysite/user gives me this:

        The requested URL /mysite/user was not found on this server.
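    A quick triage sketch for this symptom: a plain Apache 404 on /mysite/user usually means the request never reached Kohana's index.php at all, so rule out mod_rewrite and .htaccess handling first (commands assume a Debian/Ubuntu-style Apache):

        # Is mod_rewrite actually loaded?
        apache2ctl -M | grep rewrite

        # Bypass the rewrite rules entirely; if this URL works, the problem
        # is .htaccess/AllowOverride rather than the controller:
        curl -I http://localhost/mysite/index.php/user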

  • Ivy resolve not working with dynamic artifact

    - by richever
    I've been using Ivy a bit, but I seem to still have a lot to learn. I have two projects: one is a web app, and the other is a library upon which the web app depends. The set-up is that the library project is compiled to a jar file and published, using Ivy, to a directory within the project. In the web app build file, I have an ant target that calls the Ivy resolve ant task. What I'd like to do is have the web app use the dynamic resolve mode during development (on developers' local machines) and the default resolve mode for test and production builds.

    Previously I was appending a time stamp to the library archive file so that Ivy would notice changes in the file when the web app tried to resolve its dependency on it. Within Eclipse this is cumbersome because, in the web app, the project had to be refreshed and the build path tweaked every time a new library jar was published. Publishing a similarly named jar file every time would, I figure, only require developers to refresh the project. The problem is that the web app is unable to retrieve the dynamic jar file. The output I get looks something like this:

        resolve:
        [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
        [ivy:configure] :: loading settings :: file = /Users/richard/workspace/webapp/web/WEB-INF/config/ivy/ivysettings.xml
        [ivy:resolve] :: resolving dependencies :: com.webapp#webapp;[email protected]
        [ivy:resolve]   confs: [default]
        [ivy:resolve]   found com.webapp#library;latest.integration in local
        [ivy:resolve] :: resolution report :: resolve 142ms :: artifacts dl 0ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   1   |   0   |   0   |   0   ||   0   |   0   |
        ---------------------------------------------------------------------
        [ivy:resolve]
        [ivy:resolve] :: problems summary ::
        [ivy:resolve] :::: WARNINGS
        [ivy:resolve]   ::::::::::::::::::::::::::::::::::::::::::::::
        [ivy:resolve]   ::          UNRESOLVED DEPENDENCIES         ::
        [ivy:resolve]   ::::::::::::::::::::::::::::::::::::::::::::::
        [ivy:resolve]   :: com.webapp#library;latest.integration: impossible to resolve dynamic revision
        [ivy:resolve]   ::::::::::::::::::::::::::::::::::::::::::::::
        [ivy:resolve] :::: ERRORS
        [ivy:resolve]   impossible to resolve dynamic revision for com.webapp#library;latest.integration: check your configuration and make sure revision is part of your pattern
        [ivy:resolve]
        [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

        BUILD FAILED
        /Users/richard/workspace/webapp/build.xml:71: impossible to resolve dependencies: resolve failed - see output for details

    The web app resolve target looks like this:

        <target name="resolve" depends="load-ivy">
            <ivy:configure file="${ivy.dir}/ivysettings.xml" />
            <ivy:resolve file="${ivy.dir}/ivy.xml" resolveMode="${ivy.resolve.mode}"/>
            <ivy:retrieve pattern="${lib.dir}/[artifact]-[revision].[ext]" type="jar" sync="true" />
        </target>

    In this case, ivy.resolve.mode has a value of 'dynamic' (without quotes). The web app's Ivy file is simple. It looks like this:

        <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
            <info organisation="com.webapp" module="webapp"/>
            <dependencies>
                <dependency name="library" rev="${ivy.revision.default}" revConstraint="${ivy.revision.dynamic}" />
            </dependencies>
        </ivy-module>

    During development, ivy.revision.dynamic has a value of 'latest.integration', while for production or test, ivy.revision.default has a value of '1.0'. Any ideas? Please let me know if there's any more information I need to supply. Thanks!
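    The error text ("make sure revision is part of your pattern") points at the resolver settings rather than ivy.xml: latest.integration can only be resolved when the repository patterns contain a [revision] token for Ivy to compare. A hedged sketch of what the filesystem resolver in ivysettings.xml would need to look like (names and paths are examples only):

        <!-- In ivysettings.xml: both patterns must include [revision] -->
        <resolvers>
            <filesystem name="local">
                <ivy pattern="${repo.dir}/[module]/ivy-[revision].xml"/>
                <artifact pattern="${repo.dir}/[module]/[artifact]-[revision].[ext]"/>
            </filesystem>
        </resolvers>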

  • Problems with shutting down JBoss in Eclipse if I change JNDI port

    - by Balint Pato
    1st phase

    I have a problem shutting down my running JBoss instance under Eclipse since I changed the JNDI port of JBoss. I can shut it down from the Console view, but not with the stop button (it still looks for the JNDI service at the default port 1099). I'm looking forward to any solutions. Thank you!

    Used environment: JBoss 4.0.2 (using the default configuration), Eclipse 3.4.0 (using JBoss Tools 2.1.1.GA). Default ports: 1098, 1099. Changed ports: 11098, 11099.

    I changed the following part in jbosspath/server/default/conf/jboss-service.xml:

        <!-- ==================================================================== -->
        <!-- JNDI                                                                 -->
        <!-- ==================================================================== -->
        <mbean code="org.jboss.naming.NamingService" name="jboss:service=Naming"
               xmbean-dd="resource:xmdesc/NamingService-xmbean.xml">
            <!-- The call by value mode. true if all lookups are unmarshalled using
                 the caller's TCL, false if in VM lookups return the value by reference. -->
            <attribute name="CallByValue">false</attribute>
            <!-- The listening port for the bootstrap JNP service. Set this to -1
                 to run the NamingService without the JNP invoker listening port. -->
            <attribute name="Port">11099</attribute>
            <!-- The bootstrap JNP server bind address. This also sets the default
                 RMI service bind address. Empty == all addresses -->
            <attribute name="BindAddress">${jboss.bind.address}</attribute>
            <!-- The port of the RMI naming service, 0 == anonymous -->
            <attribute name="RmiPort">11098</attribute>
            <!-- The RMI service bind address. Empty == all addresses -->
            <attribute name="RmiBindAddress">${jboss.bind.address}</attribute>
            <!-- The thread pool service used to control the bootstrap lookups -->
            <depends optional-attribute-name="LookupPool"
                     proxy-type="attribute">jboss.system:service=ThreadPool</depends>
        </mbean>

        <mbean code="org.jboss.naming.JNDIView" name="jboss:service=JNDIView"
               xmbean-dd="resource:xmdesc/JNDIView-xmbean.xml">
        </mbean>

    Eclipse setup: I had a previous version of JBoss Tools with this problem, and I read about a bugfix, so I updated to 2.1.1.GA. Now the buttons have changed and I've got a new preferences view, but I cannot modify anything there, which seems abnormal as well.

    The error dialog shows this stack trace:

        javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost:1099 [Root exception is javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect]]]
            at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1385)
            at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:579)
            at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:572)
            at javax.naming.InitialContext.lookup(InitialContext.java:347)
            at org.jboss.Shutdown.main(Shutdown.java:202)
        Caused by: javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect]]
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:254)
            at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1370)
            ... 4 more
        Caused by: javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect]
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:228)
            ... 5 more
        Caused by: java.net.ConnectException: Connection refused: connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:305)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:171)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:158)
            at java.net.Socket.connect(Socket.java:452)
            at java.net.Socket.connect(Socket.java:402)
            at java.net.Socket.<init>(Socket.java:309)
            at java.net.Socket.<init>(Socket.java:211)
            at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:69)
            at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:62)
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:224)
            ... 5 more
        Exception in thread "main"

    2nd phase

    After creating a new server in File/New/Other/Server, it did appear in the preferences tab. Now the stop button works (the server receives the shutdown message without any additional modification of the JNDI port - there is no option for it now), but it still shows an error message, though a different one, without a stack trace: "Server JBoss 4.0 Server failed to stop."
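    As a stopgap while Eclipse insists on port 1099, the shutdown can also be pointed at the relocated JNDI port by hand, using the shutdown script that ships with JBoss 4 (a sketch; adjust paths to your install):

        # Tell the shutdown client where the relocated JNDI service lives:
        cd $JBOSS_HOME/bin
        ./shutdown.sh -s jnp://localhost:11099 -S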

  • No EJB receiver available for handling [appName:,modulename:HelloWorldSessionBean,distinctname:]

    - by zoit
    I'm trying to develop my first EJB from an example I found, and I get the following error:

        Exception in thread "main" java.lang.IllegalStateException: No EJB receiver available for handling [appName:,modulename:HelloWorldSessionBean,distinctname:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext@41408b80
            at org.jboss.ejb.client.EJBClientContext.requireEJBReceiver(EJBClientContext.java:584)
            at org.jboss.ejb.client.ReceiverInterceptor.handleInvocation(ReceiverInterceptor.java:119)
            at org.jboss.ejb.client.EJBClientInvocationContext.sendRequest(EJBClientInvocationContext.java:181)
            at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:136)
            at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:121)
            at org.jboss.ejb.client.EJBInvocationHandler.invoke(EJBInvocationHandler.java:104)
            at $Proxy0.sayHello(Unknown Source)
            at com.ibytecode.client.EJBApplicationClient.main(EJBApplicationClient.java:16)

    I use JBoss 7.1, and the code is this:

    HelloWorld.java:

        package com.ibytecode.business;

        import javax.ejb.Remote;

        @Remote
        public interface HelloWorld {
            public String sayHello();
        }

    HelloWorldBean.java:

        package com.ibytecode.businesslogic;

        import com.ibytecode.business.HelloWorld;
        import javax.ejb.Stateless;

        /**
         * Session Bean implementation class HelloWorldBean
         */
        @Stateless
        public class HelloWorldBean implements HelloWorld {
            /**
             * Default constructor.
             */
            public HelloWorldBean() { }

            public String sayHello() {
                return "Hello World !!!";
            }
        }

    EJBApplicationClient.java:

        package com.ibytecode.client;

        import javax.naming.Context;
        import javax.naming.NamingException;

        import com.ibytecode.business.HelloWorld;
        import com.ibytecode.businesslogic.HelloWorldBean;
        import com.ibytecode.clientutility.ClientUtility;

        public class EJBApplicationClient {
            public static void main(String[] args) {
                HelloWorld bean = doLookup();
                System.out.println(bean.sayHello()); // 4. Call business logic
            }

            private static HelloWorld doLookup() {
                Context context = null;
                HelloWorld bean = null;
                try {
                    // 1. Obtaining Context
                    context = ClientUtility.getInitialContext();
                    // 2. Generate JNDI Lookup name
                    String lookupName = getLookupName();
                    // 3. Lookup and cast
                    bean = (HelloWorld) context.lookup(lookupName);
                } catch (NamingException e) {
                    e.printStackTrace();
                }
                return bean;
            }

            private static String getLookupName() {
                /* The app name is the EAR name of the deployed EJB without the .ear
                   suffix. Since we haven't deployed the application as an .ear, the
                   app name for us will be an empty string. */
                String appName = "";

                /* The module name is the JAR name of the deployed EJB without the
                   .jar suffix. */
                String moduleName = "HelloWorldSessionBean";

                /* AS7 allows each deployment to have an (optional) distinct name.
                   This can be an empty string if a distinct name is not specified. */
                String distinctName = "";

                // The EJB bean implementation class name
                String beanName = HelloWorldBean.class.getSimpleName();

                // Fully qualified remote interface name
                final String interfaceName = HelloWorld.class.getName();

                // Create a lookup string name
                String name = "ejb:" + appName + "/" + moduleName + "/" + distinctName + "/" + beanName + "!" + interfaceName;
                return name;
            }
        }

    ClientUtility.java:

        package com.ibytecode.clientutility;

        import java.util.Properties;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class ClientUtility {
            private static Context initialContext;
            private static final String PKG_INTERFACES = "org.jboss.ejb.client.naming";

            public static Context getInitialContext() throws NamingException {
                if (initialContext == null) {
                    Properties properties = new Properties();
                    properties.put("jboss.naming.client.ejb.context", true);
                    properties.put(Context.URL_PKG_PREFIXES, PKG_INTERFACES);
                    initialContext = new InitialContext(properties);
                }
                return initialContext;
            }
        }

    The properties file:

        remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
        remote.connections=default
        remote.connection.default.host=localhost
        remote.connection.default.port=4447
        remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false

    This is what I have. Why do I get this error? Thanks so much. Regards
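    Two things commonly behind this error, offered as a hedged checklist rather than a confirmed diagnosis: the properties shown above must sit on the client classpath under the exact name jboss-ejb-client.properties, and the client needs the jboss-client.jar shipped with AS 7.1 on its classpath:

        # Run the client with the AS 7.1 client libraries, from the directory
        # that also holds jboss-ejb-client.properties:
        java -cp .:$JBOSS_HOME/bin/client/jboss-client.jar \
            com.ibytecode.client.EJBApplicationClient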

  • Better Way To Use C++ Named Parameter Idiom?

    - by Head Geek
    I've been developing a GUI library for Windows (as a personal side project, with no aspirations of usefulness). For my main window class, I've set up a hierarchy of option classes (using the Named Parameter Idiom), because some options are shared and others are specific to particular types of windows (like dialogs). The way the Named Parameter Idiom works, the functions of the parameter class have to return the object they're called on. The problem is that, in the hierarchy, each one has to be a different class: the createWindowOpts class for standard windows, the createDialogOpts class for dialogs, and the like. I've dealt with that by making all the option classes templates. Here's an example:

        template <class T>
        class _sharedWindowOpts: public detail::_baseCreateWindowOpts {
            public:
            // No required parameters in this case.
            _sharedWindowOpts() { };

            typedef T optType;

            // Commonly used options
            optType& at(int x, int y) { mX=x; mY=y; return static_cast<optType&>(*this); }; // Where to put the upper-left corner of the window; if not specified, the system sets it to a default position
            optType& at(int x, int y, int width, int height) { mX=x; mY=y; mWidth=width; mHeight=height; return static_cast<optType&>(*this); }; // Sets the position and size of the window in a single call
            optType& background(HBRUSH b) { mBackground=b; return static_cast<optType&>(*this); }; // Sets the default background to this brush
            optType& background(INT_PTR b) { mBackground=HBRUSH(b+1); return static_cast<optType&>(*this); }; // Sets the default background to one of the COLOR_* colors; defaults to COLOR_WINDOW
            optType& cursor(HCURSOR c) { mCursor=c; return static_cast<optType&>(*this); }; // Sets the default mouse cursor for this window; defaults to the standard arrow
            optType& hidden() { mStyle&=~WS_VISIBLE; return static_cast<optType&>(*this); }; // Windows are visible by default
            optType& icon(HICON iconLarge, HICON iconSmall=0) { mIcon=iconLarge; mSmallIcon=iconSmall; return static_cast<optType&>(*this); }; // Specifies the icon, and optionally a small icon

            // ...Many others removed...
        };

        template <class T>
        class _createWindowOpts: public _sharedWindowOpts<T> {
            public:
            _createWindowOpts() { };

            // These can't be used with child windows, or aren't needed
            optType& menu(HMENU m) { mMenuOrId=m; return static_cast<optType&>(*this); }; // Gives the window a menu
            optType& owner(HWND hwnd) { mParentOrOwner=hwnd; return static_cast<optType&>(*this); }; // Sets the optional parent/owner
        };

        class createWindowOpts: public _createWindowOpts<createWindowOpts> {
            public:
            createWindowOpts() { };
        };

    It works, but as you can see, it requires a noticeable amount of extra work: a type-cast on the return type in each function, extra template classes, etcetera. My question is: is there an easier way to implement the Named Parameter Idiom in this case, one that doesn't require all the extra stuff?

  • Inheriting XML files and modifying values

    - by Veehmot
    This is a question about a concept. I have an XML file; let's call it the base:

        <base id="default">
            <tags>
                <tag>tag_one</tag>
                <tag>tag_two</tag>
                <tag>tag_three</tag>
            </tags>
            <data>
                <data_a>blue</data_a>
                <data_b>3</data_b>
            </data>
        </base>

    What I want to do is to be able to extend this XML in another file, modifying individual properties. For example, I want to inherit that file and make a new one with a different data/data_a node:

        <base id="green" import="default">
            <data>
                <data_a>green</data_a>
            </data>
        </base>

    So far it's pretty simple: it replaces the old data/data_a with the new one. I can even add a new node:

        <base id="ext" import="default">
            <moredata>
                <data>extended version</data>
            </moredata>
        </base>

    And still it's pretty simple. The problem comes when I want to delete a node or deal with XML lists (like the tags node). How should I reference a particular index in a list? I was thinking of doing something like:

        <base id="diffList" import="default">
            <tags>
                <tag index="1">this is not anymore tag_two</tag>
            </tags>
        </base>

    And for deleting a node / array index:

        <base id="deleting" import="default">
            <tags>
                <tag index="2"/>
            </tags>
            <data/>
        </base>

        <!-- This will result in an XML containing these values: -->
        <base>
            <tag>tag_one</tag>
            <tag>tag_two</tag>
        </base>

    But I'm not happy with my solutions. I don't know anything about XSLT or other XML transformation tools, but I think someone must have done this before. The key goal I'm looking for is ease of writing the XML by hand (both the base and the "extended" file). I'm open to new solutions besides XML, if they are easy to write manually. Thanks for reading.

  • CakePHP access indirectly related model - beginner's question

    - by user325077
    Hi everyone, I am writing a CakePHP application to log the work I do for various clients, but after trying for days I seem unable to get it to do what I want. I have read most of the book on CakePHP's website and googled for all I'm worth, so I presume I am missing something obvious! Every 'log item' belongs to a 'sub-project', which in turn belongs to a 'project', which in turn belongs to a 'sub-client', which finally belongs to a 'client'. These are the 5 MySQL tables I am using:

        mysql> DESCRIBE log_items;
        +-----------------+--------------+------+-----+---------+----------------+
        | Field           | Type         | Null | Key | Default | Extra          |
        +-----------------+--------------+------+-----+---------+----------------+
        | id              | int(11)      | NO   | PRI | NULL    | auto_increment |
        | date            | date         | NO   |     | NULL    |                |
        | time            | time         | NO   |     | NULL    |                |
        | time_spent      | int(11)      | NO   |     | NULL    |                |
        | sub_projects_id | int(11)      | NO   | MUL | NULL    |                |
        | title           | varchar(100) | NO   |     | NULL    |                |
        | description     | text         | YES  |     | NULL    |                |
        | created         | datetime     | YES  |     | NULL    |                |
        | modified        | datetime     | YES  |     | NULL    |                |
        +-----------------+--------------+------+-----+---------+----------------+

        mysql> DESCRIBE sub_projects;
        +-------------+--------------+------+-----+---------+----------------+
        | Field       | Type         | Null | Key | Default | Extra          |
        +-------------+--------------+------+-----+---------+----------------+
        | id          | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name        | varchar(100) | NO   |     | NULL    |                |
        | projects_id | int(11)      | NO   | MUL | NULL    |                |
        | created     | datetime     | YES  |     | NULL    |                |
        | modified    | datetime     | YES  |     | NULL    |                |
        +-------------+--------------+------+-----+---------+----------------+

        mysql> DESCRIBE projects;
        +----------------+--------------+------+-----+---------+----------------+
        | Field          | Type         | Null | Key | Default | Extra          |
        +----------------+--------------+------+-----+---------+----------------+
        | id             | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name           | varchar(100) | NO   |     | NULL    |                |
        | sub_clients_id | int(11)      | NO   | MUL | NULL    |                |
        | created        | datetime     | YES  |     | NULL    |                |
        | modified       | datetime     | YES  |     | NULL    |                |
        +----------------+--------------+------+-----+---------+----------------+

        mysql> DESCRIBE sub_clients;
        +------------+--------------+------+-----+---------+----------------+
        | Field      | Type         | Null | Key | Default | Extra          |
        +------------+--------------+------+-----+---------+----------------+
        | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name       | varchar(100) | NO   |     | NULL    |                |
        | clients_id | int(11)      | NO   | MUL | NULL    |                |
        | created    | datetime     | YES  |     | NULL    |                |
        | modified   | datetime     | YES  |     | NULL    |                |
        +------------+--------------+------+-----+---------+----------------+

        mysql> DESCRIBE clients;
        +----------+--------------+------+-----+---------+----------------+
        | Field    | Type         | Null | Key | Default | Extra          |
        +----------+--------------+------+-----+---------+----------------+
        | id       | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name     | varchar(100) | NO   |     | NULL    |                |
        | created  | datetime     | YES  |     | NULL    |                |
        | modified | datetime     | YES  |     | NULL    |                |
        +----------+--------------+------+-----+---------+----------------+

    I have set up the following associations in CakePHP:

        LogItem belongsTo SubProjects
        SubProject belongsTo Projects
        Project belongsTo SubClients
        SubClient belongsTo Clients
        Client hasMany SubClients
        SubClient hasMany Projects
        Project hasMany SubProjects
        SubProject hasMany LogItems

    Using 'cake bake' I have created the models, controllers (index, view, add, edit and delete) and views, and things seem to function - I am able to perform simple CRUD operations successfully.

    The question: when editing a 'log item' at www.mydomain/log_items/edit I am presented with the view you would expect, namely the columns of the log_items table with the appropriate text fields/select boxes etc. I would also like to incorporate select boxes to choose the client, sub-client, project and sub-project in the 'log_items' edit view. Ideally the 'sub-client' select box should populate itself depending upon the 'client' chosen, the 'project' select box should populate itself depending on the 'sub-client' selected, and so on. I guess the way to populate the select boxes with relevant options is Ajax, but I am unsure how to go about actually accessing a model from the view of an indirectly related model - for example, how to create a 'sub-client' select box in the 'log_items' edit view. I have found this example: http://forum.phpsitesolutions.com/php-frameworks/cakephp/ajax-cakephp-dynamically-populate-html-select-dropdown-box-t29.html where someone achieves something similar for US states, counties and cities. However, I noticed in the database schema - which is downloadable from the site at that link - that the database tables don't have any foreign keys, so now I'm wondering if I'm going about things in the correct manner. Any pointers and advice would be very much appreciated. Kind regards, Chris

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

    The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file:
        background Midnight_Blue

        static define matte surface { ambient 0.1 diffuse 0.7 }
        define matte_white texture { matte { color white } }
        define matte_black texture { matte { color dark_slate_gray } }
        define position_cylindrical 3
        define lookup_sawtooth 1
        define light_wood <0.6, 0.24, 0.1>
        define median_wood <0.3, 0.12, 0.03>
        define dark_wood <0.05, 0.01, 0.005>

        define wooden texture {
            noise surface {
                ambient 0.2
                diffuse 0.7
                specular white, 0.5
                microfacet Reitz 10
                position_fn position_cylindrical
                position_scale 1
                lookup_fn lookup_sawtooth
                octaves 1
                turbulence 1
                color_map(
                    [0.0, 0.2, light_wood, light_wood]
                    [0.2, 0.3, light_wood, median_wood]
                    [0.3, 0.4, median_wood, light_wood]
                    [0.4, 0.7, light_wood, light_wood]
                    [0.7, 0.8, light_wood, median_wood]
                    [0.8, 0.9, median_wood, light_wood]
                    [0.9, 1.0, light_wood, dark_wood])
            }
        }

        define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }

        define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
        define steely_blue texture { shiny { color black } }
        define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

        viewpoint {
            from <4.000, -1.000, 1.000>
            at <0.000, 0.000, 0.000>
            up <0, 1, 0>
            angle 60
            resolution 640, 480
            aspect 1.6
            image_format 0
        }

        light <-10, 30, 20>
        light <-10, 30, -20>

        object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
        object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
        object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

        object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
        object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
        object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
        object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct positions, a bitmap image can be used: the position of each pin is set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinates of the pins, the following image is generated.

    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range, the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

    The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders used for the 11,481 pins in the pin board. The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks, and so on. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure results in the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

- On-Premise
  - Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
  - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue.
  - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
  - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
- Windows Azure
  - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
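The Animation Creator's upload step is described above, but its code is not listed in the original post. A minimal sketch, assuming the v1.x StorageClient library and the "scenes" container and "renderjobs" queue names that appear in the worker role code below, might look like this:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class JobSubmitter
{
    // Upload one scene file and queue a render job for it.
    public static void SubmitJob(CloudStorageAccount account,
        string sceneFilePath, string sceneFileName)
    {
        // Upload the scene description to the "scenes" blob container.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("scenes");
        container.CreateIfNotExist();
        container.GetBlobReference(sceneFileName).UploadFile(sceneFilePath);

        // Add a message to the "renderjobs" queue naming the scene file.
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("renderjobs");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(sceneFileName));
    }
}

The worker roles pick up the other half of this exchange, polling the queue and downloading the named blob, as shown in the worker role code below.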
The architecture of each worker role is shown below.

The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. The service definition for the worker role, with the local storage configuration highlighted, is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid-90s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90s.

Each worker role uses the following process to render the animation frames:

1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the render farm resources are being used.
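The RoleLifecycleDataSource.Alive method called from the worker loop is not listed in the original post. A minimal sketch, assuming the v1.x storage client's table service context and that the real class obtains its CloudTableClient from the DataConnectionString setting, might look like this (the constructor parameter and field names are assumptions):

using System;
using System.Linq;
using Microsoft.WindowsAzure.StorageClient;

public class RoleLifecycleDataSource
{
    private readonly CloudTableClient tableClient;

    public RoleLifecycleDataSource(CloudTableClient tableClient)
    {
        this.tableClient = tableClient;
    }

    // Hypothetical implementation of the Alive method called from the
    // worker loop; the real code is not shown in the post.
    public void Alive(string roleName, string rowKey, int frames)
    {
        TableServiceContext context = tableClient.GetDataServiceContext();

        // Load the lifecycle row created when this role instance started.
        RoleLifecycle entity =
            (from e in context.CreateQuery<RoleLifecycle>("RoleLifecycle")
             where e.PartitionKey == roleName && e.RowKey == rowKey
             select e).AsEnumerable().FirstOrDefault();
        if (entity == null) return;

        // Record progress and the running time so far.
        entity.LastActiveTime = DateTime.UtcNow;
        entity.SecondsRunning = (long)(entity.LastActiveTime - entity.StartTime).TotalSeconds;
        entity.Frames = frames;
        entity.Status = "Running";

        context.UpdateObject(entity);
        context.SaveChangesWithRetries();
    }
}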
Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment and its performance determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration is applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.

The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I've uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding on the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost-effective.

- Windows Azure can provide massive compute power, on demand, in a matter of minutes.
- The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
- Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
- No charges for inbound data transfer make the uploading of large data sets to Windows Azure storage services cost-effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm on Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances; in this case it was around 13 minutes, and in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

- Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
- Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
- Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
- Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
- If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
- Third party software may also require installation onto the worker roles, which can be accomplished using startup tasks (a sketch is shown after this list). Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
- Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
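A startup task example is not included in the original post. A minimal sketch of how one might be declared in the service definition, where install.cmd is an assumed script that silently installs the third party software, is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay">
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Startup>
      <!-- Runs before the role starts; executionContext="elevated" gives the
           script administrative rights, and taskType="simple" blocks role
           startup until the script completes. -->
      <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
  </WorkerRole>
</ServiceDefinition>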

    Read the article

  • Varnish 3.0.2 and ISPConfig 3.0.4

    - by Warren Bullock III
    I followed the tutorial The Perfect Server - Ubuntu 11.10 [ISPConfig 3] here. I'm running an Ubuntu 11.04 (Natty Narwhal) server with 1024 RAM on Rackspace. I've gone through and updated to ISPConfig 3.0.4. Everything has been working great up to now when I decided to try and install Varnish. Initially I did an install of Varnish by issuing: apt-get update apt-get upgrade apt-get install varnish Apparently the version that was installed was Varnish 2.x so I went back and added the repositories for packages provided by varnish-cache.org curl http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add - echo "deb http://repo.varnish-cache.org/ubuntu/ lucid varnish-3.0" >> /etc/apt/sources.list apt-get update apt-get install varnish This updated my version of Varnish to 3.0.2 I then proceeded to make the following changes: vim /etc/default/varnish change DAEMON_OPTS to port 80: vim /etc/apache2/ports.conf NameVirtualHost *:8000 Listen 8000 vim /etc/apache2/sites-available/default <VirtualHost *:8000> vim /etc/apache2/sites-available/ispconfig.vhost Listen 8080 NameVirtualHost *:8080 <VirtualHost _default_:8080> I then proceeded to set my other vhosts to use 8000 (the apache2 port) so with all this set I reset both Apache2 and Varnish to test. I used Firebug in Firefox 11.0 The output from what I see doesn't seem to indicate that Varnish is working completely correct: First of all I see: X-Varnish 1644834493 but I've heard that unless you have two timestamps side by side than it's probably not working correctly so for example I was thinking I might see something like: X-Varnish 1644834493 1644837493 Also if I noticed this in the output which seems to be inconstant: X-Drupal-Cache MISS There are times when it will say HIT as well.... So the question here that I have is I think Varnish is partially working, however, why don't I see two timestamps on X-Varnish like I'm thinking I should and does the output of the screenshot I have look correct? If Varnish isn't working can someone tell me what I might being doing wrong? Thanks in advance.

    Read the article

  • Configuring VirtualBox host only networking: OSX host, Ubuntu guest

    - by Greg K
    I have a Ubuntu guest configured with two interfaces, eth0 is using NAT and works fine, I can access the net. The second interface eth1 is set to host only networking and VirtualBox has created a vboxnet0 virtual adapter on the host. I've configured vboxnet0 in VirtualBox adapter settings with the following: ip 192.168.21.20 subnet 255.255.255.0 Once the VM guest is running, ifconfig on OSX has vboxnet0 setup as: vboxnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 0a:00:27:00:00:00 inet 192.168.21.20 netmask 0xffffff00 broadcast 192.168.21.255 In the guest, eth0 is set to use DHCP, I've statically assigned eth1 to 192.168.21.20 (is this a mistake?): auto eth1 iface eth1 inet static address 192.168.21.20 netmask 255.255.255.0 network 192.168.21.0 broadcast 192.168.21.255 gateway 192.168.21.1 There is no device on 192.168.21.1 - what should I set my gateway to? In the guest the routes look like so: Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.21.0 * 255.255.255.0 U 0 0 0 eth1 10.0.2.0 * 255.255.255.0 U 0 0 0 eth0 default 10.0.2.2 0.0.0.0 UG 100 0 0 eth0 default 192.168.21.1 0.0.0.0 UG 100 0 0 eth1 Route table on OSX: $ netstat -nr Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire default 10.77.36.1 UGSc 28 0 en1 10.77.36/22 link#5 UCS 5 0 en1 10.77.39.38 127.0.0.1 UHS 1 2236 lo0 10.77.39.255 link#5 UHLWbI 1 66 en1 127 127.0.0.1 UCS 0 0 lo0 127.0.0.1 127.0.0.1 UH 1 8642 lo0 169.254 link#5 UCS 0 0 en1 192.168.21 link#7 UC 2 0 vboxnet 192.168.21.20 a:0:27:0:0:0 UHLWI 0 4 lo0 192.168.21.255 link#7 UHLWbI 2 64 vboxnet I can't SSH from the host to the guest (I used to be able to when the VM was configured with a bridged connection): $ ssh 192.168.21.20 ssh: connect to host 192.168.21.20 port 22: Connection refused What have I done wrong here? TIA

    Read the article

  • Apache2 configuration error: "<VirtualHost> was not closed" error.

    - by Chris
    So I've already checked through my config file and I really can't see an instance where any tag hasn't been properly closed...but I keep getting this configuration error...Would you mind taking a look through the error and the config file below? Any assistance would be greatly appreciated. FYI, I've already googled the life out of the error and looked through the log extensively, I really can't find anything. Error: apache2: Syntax error on line 236 of /etc/apache2/apache2.conf: syntax error on line 1 of /etc/apache2/sites-enabled/000-default: /etc/apache2/sites-enabled/000-default:1: was not closed. Line 236 of apache2.conf: Include the virtual host configurations: Include /etc/apache2/sites-enabled/ Contents of 000-default: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> <VirtualHost *:443> SetEnvIf Request_URI "^/u" dontlog ErrorLog /var/log/apache2/error.log Loglevel warn SSLEngine On SSLCertificateFile /etc/apache2/ssl/apache.pem ProxyRequests Off <Proxy *> AuthUserFile /srv/ajaxterm/.htpasswd AuthName EnterPassword AuthType Basic require valid-user Order Deny,allow Allow from all </Proxy> ProxyPass / http://localhost:8022/ ProxyPassReverse / http://localhost:8022/ </VirtualHost>

    Read the article

  • Connectivity with SQL Server Express 2008 r2 and SQL Server 2000 on same machine

    - by Jim R
    At first glance this may same a duplicate of Installing both SQL Server 2000 and SQL Server 2008 on the same machine, but it is not. I have SQL Server 2000 and SQL Server 2008 R2 installed on the same machine and working fine. My problem lies with connecting to the 2008 R2 server from a remote machine. My connectivity needs to be TCP. The legacy installation or SQL 2000 uses the default port of 1433. The named instance is by default configured to use 'Shared Memory' and is working fine. When I configured the 2008 R2 server to use 1433 (I did not think that thru) the service refused to start becasue 1433 was already in use by the legacy SQL 2000 default instance. Doh! What I want to do is have both servers available simultaneously via TCP. both servers need not be on the same port, put if I cannot run them on the same port, then how do I configure the clients? Is there not some kind of proxy available that can monitor the 1433 port and pass the request thru to the correct SQL instance by name? Is this capability built into SQL server already? Thanks, Jim

    Read the article

  • IIS 7 - 403 Access Denied error on wwwroot

    - by cparker4486
    Hi, I'm trying to setup a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I've been unsuccessful in doing this by using IIS's HTTP Redirect so I looked to other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it: <script runat="server"> private void Page_Load(object sender, System.EventArgs e) { Response.Status = "301 Moved Permanently"; Response.AddHeader("Location","https://mail.mydomain.com/owa"); } </script> Instead of getting a redirect I get: 403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied. I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access. No change. The AppPool for Default Web Site is DefaultAppPool and the Identity is ApplicationPoolIdentity. (I don't know what these things are but maybe knowing this will help you.) Thanks!

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >