Search Results

Search found 29820 results on 1193 pages for 'default implementation'.

  • Can you set the default button of a panel with a button that is not in that panel but in another content placeholder in a master page?

    - by Geezuz
    Can you set the default button of a panel with a button that is not in that panel, but in another content placeholder within a master page? I have tried this, but I get the following error:

        The DefaultButton of 'pnlTmp' must be the ID of a control of type IButtonControl.

    I have also tried setting the panel's DefaultButton this way:

        pnlTmp.DefaultButton = btnContinue.UniqueID

    This gave me the same error. Any help would be great.
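
    One commonly suggested workaround (a hedged sketch, not from the original question) is to set the default button at the form level rather than on the panel, since HtmlForm.DefaultButton resolves the UniqueID from the page itself instead of the panel's naming container. Control names are taken from the question:

        // Code-behind of the content page; pnlTmp and btnContinue are
        // the controls named in the question.
        protected void Page_Load(object sender, EventArgs e)
        {
            // Setting the default button on the form rather than the panel
            // sidesteps the panel's naming-container lookup that raises
            // the IButtonControl error.
            Page.Form.DefaultButton = btnContinue.UniqueID;
        }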

    Read the article

  • bonding module parameters are not shown in /sys/module/bonding/parameters/

    - by c4f4t0r
    I have a server running SUSE 11 SP1 with kernel 2.6.32.54-0.3-default. With modinfo bonding I see all the parameters, but under /sys/module/bonding/parameters/ most of them are missing.

        modinfo bonding | grep ^parm
        parm: max_bonds:Max number of bonded devices (int)
        parm: num_grat_arp:Number of gratuitous ARP packets to send on failover event (int)
        parm: num_unsol_na:Number of unsolicited IPv6 Neighbor Advertisements packets to send on failover event (int)
        parm: miimon:Link check interval in milliseconds (int)
        parm: updelay:Delay before considering link up, in milliseconds (int)
        parm: downdelay:Delay before considering link down, in milliseconds (int)
        parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
        parm: mode:Mode of operation : 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
        parm: primary:Primary network device to use (charp)
        parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner (slow/fast) (charp)
        parm: ad_select:803.ad aggregation selection logic: stable (0, default), bandwidth (1), count (2) (charp)
        parm: xmit_hash_policy:XOR hashing method: 0 for layer 2 (default), 1 for layer 3+4 (charp)
        parm: arp_interval:arp interval in milliseconds (int)
        parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
        parm: arp_validate:validate src/dst of ARP probes: none (default), active, backup or all (charp)
        parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC. none (default), active or follow (charp)

    In /sys/module/bonding/parameters only two entries show up:

        ls -l /sys/module/bonding/parameters/
        total 0
        -rw-r--r-- 1 root root 4096 2013-10-17 11:22 num_grat_arp
        -rw-r--r-- 1 root root 4096 2013-10-17 11:22 num_unsol_na

    I found some of these parameters under /sys/class/net/bond0/bonding/, but when I try to change one I get the following error:

        echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
        -bash: echo: write error: Operation not permitted
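
    For what it's worth, in kernels of this vintage the bonding driver rejects sysfs writes to several parameters, including xmit_hash_policy, while the bond is up, which would explain the EPERM. A hedged sketch of the usual sequence (interface name taken from the question):

        # Writes to xmit_hash_policy are refused while bond0 is up
        ifconfig bond0 down
        echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
        ifconfig bond0 up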

    Read the article

  • Where can I find linux-kernel-headers-x.x.x.x for SUSE?

    - by Landy
    I'm installing VMware Workstation on a SLED 11 SP1 machine, and the installation is blocked by this error message:

        Kernel headers for version 2.6.32.27-0.2-default were not found. If you installed them in a non-default path you can specify the path below. Otherwise refer to your distribution's documentation for installation instructions and click Refresh to search again in default locations.

    The output of rpm -qa | grep kernel is:

        kernel-default-2.6.32.27-0.2.2
        kernel-default-base-2.6.32.27-0.2.2
        linux-kernel-headers-2.6.32-1.4.13
        kernel-default-extra-2.6.32.27-0.2.2
        nfs-kernel-server-1.2.1-2.10.1

    I had met this issue in Ubuntu, where installing the required linux-headers package via apt-get made it disappear. But in SLED I didn't find the rpm package in SUSE's software repository, and googling "linux-kernel-headers-2.6.32.27" did not match any documents either. Any suggestion will be highly appreciated. Thanks.

    The output of zypper se kernel | grep kernel is:

        i | linux-kernel-headers | Linux Kernel Headers | package
          | linux-kernel-headers | Linux Kernel Headers | srcpackage
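
    On SLE, the headers VMware looks for normally come from the kernel devel packages rather than from linux-kernel-headers. A hedged sketch (package names as they commonly appear in SLED/SLES 11 repositories):

        # Install the devel package matching the running kernel flavor
        # ("default" here, per uname -r)
        zypper install kernel-default-devel kernel-source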

    Read the article

  • Booting the redis server: errors on startup

    - by Tylër
    Redis starts, but with the following errors:

        tyler@tyler-vortex:~/redis$ ./src/redis-server
        [3690] 01 Dec 10:56:05 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
        [3690] 01 Dec 10:56:05 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 992.

    Other errors found:

        tyler@tyler-vortex:~/redis$ sudo ./utils/install_server.sh
        Welcome to the redis service installer
        This script will help you easily set up a running redis server

        Please select the redis port for this instance: [6379]
        Selecting default: 6379
        Please select the redis config file name [/etc/redis/6379.conf]
        Selected default - /etc/redis/6379.conf
        Please select the redis log file name [/var/log/redis_6379.log]
        Selected default - /var/log/redis_6379.log
        Please select the data directory for this instance [/var/lib/redis/6379]
        Selected default - /var/lib/redis/6379
        Please select the redis executable path [/usr/local/bin/redis-server]
        cat: ./redis.conf.tpl: Arquivo ou diretório não encontrado
        cat: ./redis_init_script.tpl: Arquivo ou diretório não encontrado
        ERROR: Could not write init script to /tmp/6379.conf. Aborting!

    (The Portuguese error means "file or directory not found".)

    Furthermore, I would like to know how to configure it not to consume so much RAM. I followed the memory configuration page below, but the "vm-*" settings do not exist in my redis.conf file: http://redis.io/topics/virtual-memory Do you have to create them?

    Edit: I installed it. After that, I believe I no longer have access via ./src/redis-server, because this happens:

        tyler@tyler-vortex:~$ cd redis/
        tyler@tyler-vortex:~/redis$ ./src/redis-server
        [2616] 01 Dec 22:29:30 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
        [2616] 01 Dec 22:29:30 # Opening port 6379: bind: Address already in use

    But there's another detail: redis now starts with the system.

        tyler@tyler-vortex:~/redis$ ./src/redis-cli
        redis 127.0.0.1:6379> exit

    ...but how can I now see the startup output it printed before I installed it via the .sh script?
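
    Two hedged notes, not from the original thread: the .tpl errors suggest install_server.sh resolves its template files relative to the current directory, so it is usually run from inside utils/; and the vm-* directives were removed from later Redis releases, where memory use is capped with maxmemory instead. A sketch (values are illustrative):

        # Run the installer from inside utils/ so it finds its templates
        cd ~/redis/utils && sudo ./install_server.sh

        # Then, in /etc/redis/6379.conf, cap memory instead of vm-* settings:
        #   maxmemory 256mb
        #   maxmemory-policy allkeys-lru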

    Read the article

  • rpm file conflict after alien conversion

    - by Zitrax
    I have a program for which I generate a .deb file. The .deb file works fine on the systems I have tried it on (also tested with lintian). Previously it worked to use alien to convert this to .rpm and install it on Suse. However, it is now about a year since I last tried it, and now installing the alien-made rpm on Fedora 11 fails with this error:

        file /usr/share/icons/default.kde from install of testpkg-0.2-2.i386 conflicts with file from package kdelibs3-3.5.10-13.fc11.1.i586

    Listing the content of the rpm file:

        $ rpm -qlp testpkg-0.2-2.i386.rpm
        /
        /usr
        /usr/games
        /usr/games/testpkg
        /usr/lib
        /usr/lib/libfmod-3.75.so
        /usr/share
        /usr/share/app-install
        /usr/share/app-install/icons
        /usr/share/app-install/icons/testpkg.png
        /usr/share/applications
        /usr/share/applications/testpkg.desktop
        /usr/share/doc
        /usr/share/doc/testpkg
        /usr/share/doc/testpkg/changelog.gz
        /usr/share/doc/testpkg/copyright
        /usr/share/games
        /usr/share/games/testpkg
        /usr/share/games/testpkg/images
        /usr/share/games/testpkg/images/bb.dat
        /usr/share/games/testpkg/images/bb_bg.dat
        /usr/share/games/testpkg/images/bubblemad_8x8.png
        /usr/share/games/testpkg/images/goldfont.png
        /usr/share/games/testpkg/lvl
        /usr/share/games/testpkg/lvl/lvl001.txt
        /usr/share/games/testpkg/lvl/lvl002.txt
        /usr/share/games/testpkg/lvl/lvl003.txt
        /usr/share/games/testpkg/lvl/lvl004.txt
        /usr/share/games/testpkg/lvl/lvl005.txt
        /usr/share/games/testpkg/lvl/lvl006.txt
        /usr/share/games/testpkg/lvl/lvl007.txt
        /usr/share/games/testpkg/music
        /usr/share/games/testpkg/music/alfa.it
        /usr/share/games/testpkg/music/beta.it
        /usr/share/games/testpkg/sounds
        /usr/share/games/testpkg/sounds/bounce.wav
        /usr/share/games/testpkg/sounds/click.wav
        /usr/share/games/testpkg/sounds/warning.wav
        /usr/share/icons
        /usr/share/icons/default.kde
        /usr/share/icons/default.kde/16x16
        /usr/share/icons/default.kde/16x16/apps
        /usr/share/icons/default.kde/16x16/apps/testpkg.png
        /usr/share/man
        /usr/share/man/man6
        /usr/share/man/man6/testpkg.6.gz

    Am I wrong in putting the kde icons in /usr/share/icons/default.kde, which seems to be a symbolic link? It is a symbolic link on both Kubuntu 9.10 and Fedora 11, though. It sounds like a common situation that the same directory is needed by different packages, so why is it a conflict?
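
    A hedged diagnostic and workaround sketch (the hicolor path is the freedesktop fallback icon theme, not something from the original post): on Fedora, default.kde is a symlink owned by kdelibs3, and rpm conflicts when another package ships that path as a directory. Installing the icon into the theme-neutral hicolor tree avoids owning the symlink at all:

        # See which package owns the conflicting path on Fedora
        rpm -qf /usr/share/icons/default.kde

        # In the .deb (before running alien), place the icon under
        # /usr/share/icons/hicolor/16x16/apps/testpkg.png instead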

    Read the article

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    I'm having a lot of trouble with my NginX configuration since I'm new to NginX; I'd been using Lighttpd for quite some time. Here is the base info.

    New machine: CentOS 6.3 64-bit, NginX 1.2.4-1.el6.ngx, PHP-FPM 5.3.18-1.el6.remi
    Old machine: CentOS 6.2 64-bit, Lighttpd 1.4.25-3.el6

    Original Lighttpd config file:

        #######################################################################
        ##
        ##  /etc/lighttpd/lighttpd.conf
        ##
        ##  check /etc/lighttpd/conf.d/*.conf for the configuration of modules.
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Some Variable definition which will make chrooting easier.
        ##
        ## if you add a variable here. Add the corresponding variable in the
        ## chroot example aswell.
        ##
        var.log_root    = "/var/log/lighttpd"
        var.server_root = "/var/www"
        var.state_dir   = "/var/run"
        var.home_dir    = "/var/lib/lighttpd"
        var.conf_dir    = "/etc/lighttpd"

        ##
        ## run the server chrooted.
        ##
        ## This requires root permissions during startup.
        ##
        ## If you run Chrooted set the the variables to directories relative to
        ## the chroot dir.
        ##
        ## example chroot configuration:
        ##
        #var.log_root    = "/logs"
        #var.server_root = "/"
        #var.state_dir   = "/run"
        #var.home_dir    = "/lib/lighttpd"
        #var.vhosts_dir  = "/vhosts"
        #var.conf_dir    = "/etc"
        #
        #server.chroot   = "/srv/www"

        ##
        ## Some additional variables to make the configuration easier
        ##

        ##
        ## Base directory for all virtual hosts
        ##
        ## used in:
        ## conf.d/evhost.conf
        ## conf.d/simple_vhost.conf
        ## vhosts.d/vhosts.template
        ##
        var.vhosts_dir = server_root + "/vhosts"

        ##
        ## Cache for mod_compress
        ##
        ## used in:
        ## conf.d/compress.conf
        ##
        var.cache_dir = "/var/cache/lighttpd"

        ##
        ## Base directory for sockets.
        ##
        ## used in:
        ## conf.d/fastcgi.conf
        ## conf.d/scgi.conf
        ##
        var.socket_dir = home_dir + "/sockets"
        ##
        #######################################################################

        #######################################################################
        ##
        ## Load the modules.
        include "modules.conf"
        ##
        #######################################################################

        #######################################################################
        ##
        ## Basic Configuration
        ## ---------------------
        ##
        server.port = 80

        ##
        ## Use IPv6?
        ##
        #server.use-ipv6 = "enable"

        ##
        ## bind to a specific IP
        ##
        #server.bind = "localhost"

        ##
        ## Run as a different username/groupname.
        ## This requires root permissions during startup.
        ##
        server.username  = "lighttpd"
        server.groupname = "lighttpd"

        ##
        ## enable core files.
        ##
        #server.core-files = "disable"

        ##
        ## Document root
        ##
        server.document-root = server_root + "/lighttpd"

        ##
        ## The value for the "Server:" response field.
        ##
        ## It would be nice to keep it at "lighttpd".
        ##
        #server.tag = "lighttpd"

        ##
        ## store a pid file
        ##
        server.pid-file = state_dir + "/lighttpd.pid"
        ##
        #######################################################################

        #######################################################################
        ##
        ## Logging Options
        ## ------------------
        ##
        ## all logging options can be overwritten per vhost.
        ##
        ## Path to the error log file
        ##
        server.errorlog = log_root + "/error.log"

        ##
        ## If you want to log to syslog you have to unset the
        ## server.errorlog setting and uncomment the next line.
        ##
        #server.errorlog-use-syslog = "enable"

        ##
        ## Access log config
        ##
        include "conf.d/access_log.conf"

        ##
        ## The debug options are moved into their own file.
        ## see conf.d/debug.conf for various options for request debugging.
        ##
        include "conf.d/debug.conf"
        ##
        #######################################################################

        #######################################################################
        ##
        ## Tuning/Performance
        ## --------------------
        ##
        ## corresponding documentation:
        ## http://www.lighttpd.net/documentation/performance.html
        ##
        ## set the event-handler (read the performance section in the manual)
        ##
        ## possible options on linux are:
        ##
        ## select
        ## poll
        ## linux-sysepoll
        ##
        ## linux-sysepoll is recommended on kernel 2.6.
        ##
        server.event-handler = "linux-sysepoll"

        ##
        ## The basic network interface for all platforms at the syscalls read()
        ## and write(). Every modern OS provides its own syscall to help network
        ## servers transfer files as fast as possible
        ##
        ## linux-sendfile - is recommended for small files.
        ## writev - is recommended for sending many large files
        ##
        server.network-backend = "linux-sendfile"

        ##
        ## As lighttpd is a single-threaded server, its main resource limit is
        ## the number of file descriptors, which is set to 1024 by default (on
        ## most systems).
        ##
        ## If you are running a high-traffic site you might want to increase this
        ## limit by setting server.max-fds.
        ##
        ## Changing this setting requires root permissions on startup. see
        ## server.username/server.groupname.
        ##
        ## By default lighttpd would not change the operation system default.
        ## But setting it to 2048 is a better default for busy servers.
        ##
        ## With SELinux enabled, this is denied by default and needs to be allowed
        ## by running the following once : setsebool -P httpd_setrlimit on
        server.max-fds = 2048

        ##
        ## Stat() call caching.
        ##
        ## lighttpd can utilize FAM/Gamin to cache stat call.
        ##
        ## possible values are:
        ## disable, simple or fam.
        ##
        server.stat-cache-engine = "simple"

        ##
        ## Fine tuning for the request handling
        ##
        ## max-connections == max-fds/2 (maybe /3)
        ## means the other file handles are used for fastcgi/files
        ##
        server.max-connections = 1024

        ##
        ## How many seconds to keep a keep-alive connection open,
        ## until we consider it idle.
        ##
        ## Default: 5
        ##
        #server.max-keep-alive-idle = 5

        ##
        ## How many keep-alive requests until closing the connection.
        ##
        ## Default: 16
        ##
        #server.max-keep-alive-requests = 18

        ##
        ## Maximum size of a request in kilobytes.
        ## By default it is unlimited (0).
        ##
        ## Uploads to your server cant be larger than this value.
        ##
        #server.max-request-size = 0

        ##
        ## Time to read from a socket before we consider it idle.
        ##
        ## Default: 60
        ##
        #server.max-read-idle = 60

        ##
        ## Time to write to a socket before we consider it idle.
        ##
        ## Default: 360
        ##
        #server.max-write-idle = 360

        ##
        ## Traffic Shaping
        ## -----------------
        ##
        ## see /usr/share/doc/lighttpd/traffic-shaping.txt
        ##
        ## Values are in kilobyte per second.
        ##
        ## Keep in mind that a limit below 32kB/s might actually limit the
        ## traffic to 32kB/s. This is caused by the size of the TCP send
        ## buffer.
        ##
        ## per server:
        ##
        #server.kbytes-per-second = 128
        ##
        ## per connection:
        ##
        #connection.kbytes-per-second = 32
        ##
        #######################################################################

        #######################################################################
        ##
        ## Filename/File handling
        ## ------------------------
        ##
        ## files to check for if .../ is requested
        ## index-file.names = ( "index.php", "index.rb", "index.html",
        ##                      "index.htm", "default.htm" )
        ##
        index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" )

        ##
        ## deny access the file-extensions
        ##
        ## ~ is for backupfiles from vi, emacs, joe, ...
        ## .inc is often used for code includes which should in general not be part
        ## of the document-root
        url.access-deny = ( "~", ".inc" )

        ##
        ## disable range requests for pdf files
        ## workaround for a bug in the Acrobat Reader plugin.
        ##
        $HTTP["url"] =~ "\.pdf$" {
          server.range-requests = "disable"
        }

        ##
        ## url handling modules (rewrite, redirect)
        ##
        #url.rewrite = ( "^/$" => "/server-status" )
        #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" )

        ##
        ## both rewrite/redirect support back reference to regex conditional using %n
        ##
        #$HTTP["host"] =~ "^www\.(.*)" {
        #  url.redirect = ( "^/(.*)" => "http://%1/$1" )
        #}

        ##
        ## which extensions should not be handle via static-file transfer
        ##
        ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
        ##
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" )

        ##
        ## error-handler for status 404
        ##
        #server.error-handler-404 = "/error-handler.html"
        #server.error-handler-404 = "/error-handler.php"

        ##
        ## Format: <errorfile-prefix><status-code>.html
        ## -> ..../status-404.html for 'File not found'
        ##
        #server.errorfile-prefix = "/srv/www/htdocs/errors/status-"

        ##
        ## mimetype mapping
        ##
        include "conf.d/mime.conf"

        ##
        ## directory listing configuration
        ##
        include "conf.d/dirlisting.conf"

        ##
        ## Should lighttpd follow symlinks?
        ##
        server.follow-symlink = "enable"

        ##
        ## force all filenames to be lowercase?
        ##
        #server.force-lowercase-filenames = "disable"

        ##
        ## defaults to /var/tmp as we assume it is a local harddisk
        ##
        server.upload-dirs = ( "/var/tmp" )
        ##
        #######################################################################

        #######################################################################
        ##
        ## SSL Support
        ## -------------
        ##
        ## To enable SSL for the whole server you have to provide a valid
        ## certificate and have to enable the SSL engine.::
        ##
        ## ssl.engine = "enable"
        ## ssl.pemfile = "/path/to/server.pem"
        ##
        ## The HTTPS protocol does not allow you to use name-based virtual
        ## hosting with SSL. If you want to run multiple SSL servers with
        ## one lighttpd instance you must use IP-based virtual hosting: ::
        ##
        ## $SERVER["socket"] == "10.0.0.1:443" {
        ##   ssl.engine = "enable"
        ##   ssl.pemfile = "/etc/ssl/private/www.example.com.pem"
        ##   server.name = "www.example.com"
        ##
        ##   server.document-root = "/srv/www/vhosts/example.com/www/"
        ## }
        ##
        ## If you have a .crt and a .key file, cat them together into a
        ## single PEM file:
        ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \
        ##   > /etc/ssl/private/lighttpd.pem
        ##
        #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

        ##
        ## optionally pass the CA certificate here.
        ##
        #ssl.ca-file = ""
        ##
        #######################################################################

        #######################################################################
        ##
        ## custom includes like vhosts.
        ##
        #include "conf.d/config.conf"
        #include_shell "cat /etc/lighttpd/vhosts.d/*.conf"
        ##
        #######################################################################

        #######################################################################
        ### Custom Added by me
        #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php")
        url.rewrite-once = (
            ".*\?(.*)$" => "/index.php?$1",
            "^/js/.*$" => "$0",
            "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0",
            "" => "/index.php"
        )
        # expire.url = ( "" => "access 1 days" )
        include "myvhost-vhosts.conf"
        #######################################################################

    Here is my vhost file for lighttpd:

        $HTTP["host"] =~ "192.168.8.35$" {
            server.document-root = "/var/www/lighttpd/qc41022012/public"
            server.errorlog = "/var/log/lighttpd/error.log"
            accesslog.filename = "/var/log/lighttpd/access.log"
            server.error-handler-404 = "/e404.php"
        }

    And here is my nginx.conf file:

        user nginx;
        worker_processes 5;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/testsite/logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            keepalive_timeout 65;

            #gzip on;

            # include /etc/nginx/conf.d/*.conf;
            ## I added this ##
            include /etc/nginx/sites-available/*;
        }

    Here is my NginX vhost file:

        server {
            server_name 192.168.8.91;
            access_log /var/log/nginx/myapps/logs/access.log;
            error_log /var/log/nginx/myapps/logs/error.log;
            root /var/www/html/myapps/public;

            location / {
                index index.html index.htm index.php;
            }

            location = /favicon.ico {
                return 204;
                access_log off;
                log_not_found off;
            }

            # location ~ \.php$ {
            #     try_files $uri /index.php;
            #     include /etc/nginx/fastcgi_params;
            #     fastcgi_pass 127.0.0.1:9000;
            #     fastcgi_index index.php;
            #     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            #     fastcgi_param SCRIPT_NAME $fastcgi_script_name;

            location ~ \.php.*$ {
                rewrite ^(.*.php)/ $1 last;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                # fastcgi_intercept_errors on;
                # fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                # fastcgi_param PATH_INFO $uri;
                # fastcgi_pass 127.0.0.1:9000;
                # include fastcgi_params;
            }
        }

    We have a custom app that we created that works great with lighttpd. We went through some headaches when figuring out how to make it work there; this is the line that makes it work in lighttpd:

        url.rewrite-once = (
            ".*\?(.*)$" => "/index.php?$1",
            "^/js/.*$" => "$0",
            "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0",
            "" => "/index.php"
        )

    But I couldn't figure out how to make it work in NginX. The web server runs just fine when we use the phpinfo.php test file. However, as soon as I point it to my app, nothing comes up. I checked the error.log file and there's no error. Very mind-boggling. I spent over a week trying to figure it out with no luck. Please help?
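
    A hedged translation of that lighttpd front-controller rewrite into nginx terms (a sketch, not from the original thread; names and paths follow the vhost above): lighttpd's url.rewrite-once serves static assets directly and sends everything else to /index.php with the query string intact, which is what nginx's try_files idiom expresses:

        server {
            server_name 192.168.8.91;
            root /var/www/html/myapps/public;

            location / {
                # Serve the file or directory if it exists; otherwise hand
                # the request to the front controller, preserving the
                # original query string.
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }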

    Read the article

  • Thunderbird doesn't show folders on a new Dovecot install

    - by Zoran Zaric
    Hey, I set up a new mail server with Postfix and Dovecot some days ago. Everything is working except that Thunderbird doesn't show any folders; Evolution shows me all folders. I migrated from a Courier install using imapsync. In the filesystem the folders don't have INBOX in their name, so the folders are called .Folder 1, not .INBOX.Folder 1. This is the output of dovecot -n:

        # 1.0.10: /etc/dovecot/dovecot.conf
        Warning: mail_extra_groups setting was often used insecurely so it is now deprecated, use mail_access_groups or mail_privileged_group instead
        base_dir: /var/run/dovecot/
        log_timestamp: "%Y-%m-%d %H:%M:%S "
        protocols: imap pop3
        listen(default): *:143
        listen(imap): *:143
        listen(pop3): *:110
        disable_plaintext_auth: no
        login_dir: /var/run/dovecot//login
        login_executable(default): /usr/lib/dovecot/imap-login
        login_executable(imap): /usr/lib/dovecot/imap-login
        login_executable(pop3): /usr/lib/dovecot/pop3-login
        first_valid_uid: 1001
        last_valid_uid: 1001
        mail_extra_groups: vmail
        mail_access_groups: vmail
        mail_location: maildir:/var/vmail/%d/%u
        maildir_copy_with_hardlinks: yes
        mail_executable(default): /usr/lib/dovecot/imap
        mail_executable(imap): /usr/lib/dovecot/imap
        mail_executable(pop3): /usr/lib/dovecot/pop3
        mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
        pop3_uidl_format(default):
        pop3_uidl_format(imap):
        pop3_uidl_format(pop3): %08Xu%08Xv
        auth default:
          user: nobody
          passdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          userdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          socket:
            type: listen
            client:
              path: /var/spool/postfix/private/auth
              mode: 432
              user: postfix
              group: postfix
            master:
              path: /var/run/dovecot/auth-master
              mode: 432
              user: vmail
              group: vmail

    Thanks!
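
    Two hedged suggestions, not from the original post: Courier presents folders under an INBOX. prefix, so a client migrated from Courier may be asking for a namespace the new Dovecot isn't advertising; declaring it explicitly is the common migration advice. It is also worth clearing Thunderbird's cached "IMAP server directory" value (Account Settings > Server Settings > Advanced), which can be left over from Courier and hide every folder. A sketch in Dovecot 1.0.x syntax:

        # dovecot.conf -- present folders the way Courier did, while the
        # maildirs on disk stay named .Folder 1
        namespace private {
          separator = .
          prefix = INBOX.
          inbox = yes
        }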

    Read the article

  • How to set up IP forwarding on Nexenta (Solaris)?

    - by Gleb
    I am trying to set up IP forwarding on my Nexenta box:

        root@hdd:~# uname -a
        SunOS hdd 5.11 NexentaOS_134f i86pc i386 i86pc Solaris

    The box has 2 network interfaces:

        root@hdd:~# ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        e1000g1: flags=1001100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,FIXEDMTU> mtu 1500 index 2
                inet 192.168.12.2 netmask ffffff00 broadcast 192.168.12.255
                ether 68:5:ca:9:51:b8
        myri10ge0: flags=1100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4> mtu 9000 index 3
                inet 10.10.10.10 netmask ffffff00 broadcast 10.10.10.255
                ether 0:60:dd:47:87:2
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                inet6 ::1/128

    192.168.12.0 is my normal LAN, with 192.168.12.1 being the firewall/gateway. 10.10.10.0 is a separate LAN for iSCSI (with no internet access). I want to set up IP forwarding so that a computer on 10.10.10.0 will be able to access the internet by using 10.10.10.10 as a gateway (I don't need any port forwarding). I have turned on IP forwarding:

        root@hdd:~# routeadm
                      Configuration   Current              Current
                             Option   Configuration        System State
        ---------------------------------------------------------------
                       IPv4 routing   disabled             disabled
                       IPv6 routing   disabled             disabled
                    IPv4 forwarding   enabled              enabled
                    IPv6 forwarding   disabled             disabled

                   Routing services   "route:default ripng:default"

        Routing daemons:

                              STATE   FMRI
                           disabled   svc:/network/routing/rdisc:default
                           disabled   svc:/network/routing/route:default
                           disabled   svc:/network/routing/legacy-routing:ipv4
                           disabled   svc:/network/routing/legacy-routing:ipv6
                           disabled   svc:/network/routing/ripng:default
                             online   svc:/network/routing/ndp:default

    But when I try to start ipnat, I get an error:

        root@hdd:~# ipnat -CF -f /etc/ipf/ipnat.conf
        ioctl(SIOCGNATS): I/O error

    Here is the config:

        root@hdd:~# cat /etc/ipf/ipnat.conf
        #!/sbin/ipnat -f -
        #
        map e1000g1 10.10.10.10/24 -> 192.168.12.2/32

    So the question is how to fix this. Thanks in advance!
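
    For what it's worth, a SIOCGNATS ioctl failure from ipnat often just means the ipfilter module/service isn't enabled, rather than a problem with the rules themselves. A hedged sketch using the standard Solaris SMF commands:

        # Enable ipfilter, then (re)load the NAT rules and verify
        svcadm enable svc:/network/ipfilter:default
        ipnat -CF -f /etc/ipf/ipnat.conf
        ipnat -l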

    Read the article

  • apache name virtual host - two domains and SSL

    - by Tom
    I'm trying to set up Apache (2.2.3) to run two websites with SSL, using both different domains and different IP addresses. Both websites run fine on port 80, but when I tried to enable SSL for website2 I got an ssl_error_bad_cert_domain error; website2 picks up the SSL cert for website1. Here is my setup in httpd.conf:

        # Website1
        NameVirtualHost 192.168.10.1:80
        <VirtualHost 192.168.10.1:80>
            DocumentRoot /var/www/html
            ServerName www.website1.org
        </VirtualHost>

        NameVirtualHost 192.168.10.1:443
        <VirtualHost 192.168.10.1:443>
            SSLEngine On
            SSLCertificateFile conf/ssl/website1.cer
            SSLCertificateKeyFile conf/ssl/website1.key
        </VirtualHost>

        # Website2
        NameVirtualHost 192.168.10.2:80
        <VirtualHost 192.168.10.2:80>
            DocumentRoot /var/www/html/chart
            ServerName www.website2.org
        </VirtualHost>

        NameVirtualHost 192.168.10.2:443
        <VirtualHost 192.168.10.2:443>
            SSLEngine On
            SSLCertificateFile conf/ssl/website2.cer
            SSLCertificateKeyFile conf/ssl/website2.key
        </VirtualHost>

    Update: in answer to Shane (this wouldn't fit in the comment box), here is the output from apachectl -S:

        VirtualHost configuration:
        192.168.10.2:80    is a NameVirtualHost
                 default server www.website2.org (/etc/httpd/conf/httpd.conf:1033)
                 port 80 namevhost www.website2.org (/etc/httpd/conf/httpd.conf:1033)
        192.168.10.2:443   is a NameVirtualHost
                 default server bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1040)
                 port 443 namevhost bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1040)
        192.168.10.1:80    is a NameVirtualHost
                 default server www.website1.org (/etc/httpd/conf/httpd.conf:1017)
                 port 80 namevhost www.website1.org (/etc/httpd/conf/httpd.conf:1017)
        192.168.10.1:443   is a NameVirtualHost
                 default server bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1024)
                 port 443 namevhost bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1024)
        wildcard NameVirtualHosts and _default_ servers:
        _default_:443          192.168.10.1 (/etc/httpd/conf.d/ssl.conf:81)
        Syntax OK
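
    A hedged reading of that apachectl -S output (not from the original exchange): neither :443 vhost declares a ServerName or DocumentRoot, which is why both report bogus_host_without_reverse_dns and the _default_:443 vhost from ssl.conf can still catch requests. The usual fix is to name each SSL vhost explicitly, e.g. for website2:

        NameVirtualHost 192.168.10.2:443
        <VirtualHost 192.168.10.2:443>
            ServerName www.website2.org
            DocumentRoot /var/www/html/chart
            SSLEngine On
            SSLCertificateFile conf/ssl/website2.cer
            SSLCertificateKeyFile conf/ssl/website2.key
        </VirtualHost>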

    Read the article

  • Wrong Outlook anywhere settings

    - by Ken Guru
    Hey all, I wanted to enable NTLM authentication on Outlook Anywhere, and after running the command Set-OutlookAnywhere -IISAuthenticationMethods Basic,NTLM my settings got changed. This is a dump from before I ran the command:

        [PS] C:\Windows\system32>Get-OutlookAnywhere
        ServerName                 : EXCAS01
        SSLOffloading              : False
        ExternalHostname           :
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic}
        MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : EXCAS01
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : Rpc (Default Web Site)
        DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
        Identity                   : EXCAS01\Rpc (Default Web Site)
        Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
        ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 05.01.2011 16:59:55
        WhenCreated                : 27.11.2009 11:20:12
        OriginatingServer          :
        IsValid                    : True

    Notice the settings for "Name", "DistinguishedName" and "Identity". After I ran the command, I ended up with this:

        [PS] C:\Windows\system32>Get-OutlookAnywhere
        ServerName                 : EXCAS01
        SSLOffloading              : False
        ExternalHostname           :
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic, Ntlm}
        MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : EXCAS01
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : EXCAS01
        DistinguishedName          : CN=EXCAS01,CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
        Identity                   : EXCAS01\EXCAS01
        Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
        ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 06.01.2011 09:43:50
        WhenCreated                : 27.11.2009 11:20:12
        OriginatingServer          : ASP-DC-2.
        IsValid                    : True

    Now "Name", "DistinguishedName" and "Identity" have changed, and when I try to change them back by running Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)", I get the following error:

        [PS] C:\Windows\system32>Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)"
        Set-OutlookAnywhere : The operation could not be performed because object 'EXCAS01\Rpc (Default Web Site)' could not be found on domain controller 'ASP-DC-2.'.

    Remember, RPC over HTTP works fine with Basic authentication (even with the wrong settings), but NTLM still doesn't work. How do I change back the settings?
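
    One hedged observation (not from the original post): since the virtual-directory object in AD is now literally named EXCAS01, any further cmdlets have to target the new identity shown in the dump rather than the old one, for example:

        # Target the renamed object (identity taken from the dump above)
        Get-OutlookAnywhere -Identity "EXCAS01\EXCAS01"
        Set-OutlookAnywhere -Identity "EXCAS01\EXCAS01" -IISAuthenticationMethods Basic,NTLM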

    Read the article

  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on apache2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told apache which directory was the root directory for this site and to which IP it should correspond. So I have the following in that directory:

        000-default
        001-www.lapf.eu
        002-www.felkin.info
        003-www.seidhr.fr

    My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar that is also hosting the site's data. Then I did an update, reinstalled mysql-server and mysql-common, and did I-have-forgotten-what to reinstall the locales (utf8 and such) which had vanished for some reason. This fixed my first site. Now I have noticed that the other 2 sites are offline: pointing a browser to them just hangs until timeout. They used to function, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default.ssl in it. Why are there 2 directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"? Output of apache2ctl -S:

        VirtualHost configuration:
        92.243.20.169:80       Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
        92.243.21.141:80       xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
        92.243.4.114:80        xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
        wildcard NameVirtualHosts and _default_ servers:
        *:80                   is a NameVirtualHost
                 default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
                 port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
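
    For reference, the usual Debian/Ubuntu convention (a hedged note, not from the question) is the other way around: the real config files live in sites-available, and sites-enabled holds symlinks to whichever of them Apache should actually load, managed with a2ensite/a2dissite. A sketch using one of the filenames above:

        # Move the real config into sites-available, then enable it as a symlink
        sudo mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
        sudo a2ensite 001-www.lapf.eu
        sudo service apache2 reload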

    Read the article

  • Motion - takes snapshot without motion detected

    - by Emmanuel Brunet
    I've installed the standard motion 3.2.12 package on Debian 7.5. I would like to get snapshots ONLY when motion is detected, but it still saves a picture every second without any activity in front of the camera. I'm using a TENVIS JPT3815W IP camera. Here is my configuration file (motion.conf):

        setup_mode off
        target_dir /media/videos/log/webcam
        netcam_url http://webcam/snapshot.cgi
        netcam_tolerant_check on
        netcam_userpass admin:alpha1237

        # Output frames at 1 fps when no motion is detected and increase to the
        # rate given by webcam_maxrate when motion is detected (default: off)
        webcam_motion off
        output_all off

        # detection settings 1-255 default 32
        noise_level 50

        # Maximum framerate for webcam streams (default: 1)
        webcam_maxrate 25
        pre_capture 0
        framerate 25
        gap 30
        locate on
        mail [email protected]
        text_right "FRONT CAMERA %Y/%m/%d - %T"
        text_double on
        ffmpeg_cap_new on
        ffmpeg_cap_motion on
        ffmpeg_video_codec mpeg4
        output_motion off
        snapshot_interval 0

        # Quality of the jpeg (in percent) images produced (default: 50)
        quality 90

        # Restrict webcam connections to localhost only (default: on)
        webcam_localhost off

        # Limits the number of images per connection (default: 0 = unlimited)
        # Number can be defined by multiplying actual webcam rate by desired number of seconds
        # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
        webcam_limit 0

    Issue: when I start motion, images are stored in /media/videos/log/webcam nearly every second. I just want to get images when motion is detected, plus the corresponding video clip. Any idea where the configuration fails?
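
    A hedged guess at the relevant knobs in motion 3.2 (not from the original post): per-frame jpeg saving is governed by output_normal, which defaults to on and is absent from the file above, and detection sensitivity by threshold, so constant camera noise can also make every frame count as motion:

        # Save only the first jpeg of each motion event instead of one per frame
        output_normal first
        # Raise the changed-pixels threshold so noise stops triggering events
        # (the default is 1500; 4500 here is illustrative)
        threshold 4500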

    Read the article

  • How to set static ip address on vmware for NAT guest vms from an ubuntu Host dhcp server?

    - by javadba
    I need to configure various Linux-flavor NAT'ed guest VMs to have static IP addresses provided by the Ubuntu host. The VMware documentation punts on this topic, deferring to "see the man pages for your linux distribution". But the generic pages for "my linux distro" do not know about the VMware-specific pieces, e.g. vmnet8. Pointers from someone who just knows how to do this would be much appreciated. Here is /etc/vmware/vmnet8/dhcpd/dhcpd.conf:

        allow unknown-clients;
        default-lease-time 1800;    # default is 30 minutes
        max-lease-time 7200;        # default is 2 hours

        subnet 192.168.238.0 netmask 255.255.255.0 {
            range 192.168.238.128 192.168.238.254;
            option broadcast-address 192.168.238.255;
            option domain-name-servers 192.168.238.2;
            option domain-name localdomain;
            default-lease-time 1800;    # default is 30 minutes
            max-lease-time 7200;        # default is 2 hours
            option netbios-name-servers 192.168.238.2;
            option routers 192.168.238.2;
        }
        host vmnet8 {
            hardware ethernet 00:50:56:C0:00:08;
            fixed-address 192.168.238.1;
            option domain-name-servers 0.0.0.0;
            option domain-name "";
            option routers 0.0.0.0;
        }

    From the dhcpd.conf documentation, we are supposed to add an entry for static hosts similar to the following:

        host mystatichostonee {
            hardware ethernet 00:20:6B:C7:9B:E4;
            fixed-address 192.168.238.101;
        }
        host mystatichosttwo {
            hardware ethernet 00:23:7a:C7:9c:F2;
            fixed-address 192.168.238.102;
        }

    But notice that the vmnet8 entry in the vmware-generated dhcpd.conf is already set to a fixed-address. I don't know how to add the specifics for my hosts to that vmnet8 entry: do they become nested?
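
    A hedged answer sketch (standard dhcpd.conf semantics, not VMware-specific documentation): the host blocks are not nested inside the vmnet8 entry; that entry just pins the host-side virtual adapter to 192.168.238.1. Per-guest blocks sit alongside it at top level, one per VM, with each MAC taken from the guest's .vmx file, and addresses outside the dynamic range 128-254 avoid collisions with normal leases. The VMware networking services then need a restart to pick up the change.

        # Appended at top level of /etc/vmware/vmnet8/dhcpd/dhcpd.conf
        # (MACs below are placeholders; 00:0C:29 is the VMware guest OUI)
        host guest-one {
            hardware ethernet 00:0C:29:AA:BB:01;
            fixed-address 192.168.238.101;
        }
        host guest-two {
            hardware ethernet 00:0C:29:AA:BB:02;
            fixed-address 192.168.238.102;
        }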

    Read the article

  • Managing Internal Yum Repository Groups

    - by elmt
    What is the best method for handling yum group dependencies? For example, take this comps.xml file:

        <comps>
          <group>
            <id>production</id>
            <name>Production</name>
            <default>true</default>
            <description>Packages required to run</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">ssh</packagereq>
            </packagelist>
          </group>
          <group>
            <id>development</id>
            <name>Development</name>
            <default>false</default>
            <description>Packages required to develop</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">gcc</packagereq>
            </packagelist>
          </group>
        </comps>

    which is packaged with createrepo -g comps.xml x86_64. The ssh and gcc rpms are not installed in the x86_64 directory. If I run yum groupinstall development, yum is smart enough to pull the gcc package from the RHEL repo even though the groups are defined in my internal repository. However, is this the proper way of doing this, or should I copy the rpms to my local repository and recreate the repo?
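
    For reference, a hedged sketch of the round trip after editing the group file (the repo path is illustrative); group metadata only names packages, and yum is free to resolve those names from any enabled repository:

        # Regenerate the repo metadata with the group file, then refresh
        createrepo -g comps.xml /path/to/repo/x86_64
        yum clean all
        yum grouplist
        yum groupinstall development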

    Read the article

  • gitweb refusing to blame

    - by Slipp D. Thompson
    I'm attempting to get gitweb (git 1.8.4.2, via git instaweb) in a project dir on my Debian server to offer blame views. In my /etc/gitweb.conf:

        …
        # default logo, favicon, etc. settings
        $feature{'blame'}{'default'} = [1];
        $feature{'pickaxe'}{'default'} = [1];
        $feature{'snapshot'}{'default'} = ['tgz', 'txz', 'zip'];
        $feature{'highlight'}{'default'} = [1];
        $feature{'pathinfo'}{'default'} = [1];

    In my global config file:

        [gitweb]
            blame = true
            snapshot = tgz, txz, zip
            patches = 256
            avatar = gravatar
        [instaweb]
            local = false
            httpd = apache2 -f
            port = 4321

    In my project's .git/config file:

        [gitweb]
            blame = true

    And yet, when I try to load a git blame view (by hand-modifying the URL to http://myserversip:4321/?p=.git;a=blame;f=Tests/InchCoordProxyTests.m;h=b4b2…;hb=53b4, since blame action links don't show up), blame is refused. Doing a quick search for "Blame view not allowed" in the gitweb.cgi source reveals plainly that the gitweb_check_feature('blame') conditional is failing. What am I doing wrong? Or is there a way to verbosely print out why gitweb is doing what it's doing (e.g. which config files were read, which settings were loaded from each file, etc.)?
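
    A hedged note on gitweb's config loading (from its documented behavior, not the original post): gitweb reads a single config file, $GITWEB_CONFIG (gitweb_config.perl) if it exists, and only falls back to the system /etc/gitweb.conf otherwise. Since git instaweb generates its own config file, settings in /etc/gitweb.conf may never be read; and the per-repo [gitweb] blame=true only takes effect if the feature's override flag is on. A sketch of what would go in the config file gitweb actually loads:

        # e.g. in the instaweb-generated gitweb config (path is an assumption)
        $feature{'blame'}{'default'} = [1];
        $feature{'blame'}{'override'} = 1;  # honor per-repo gitweb.blame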

    Read the article

  • Mysql: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table:

        CREATE TABLE `ref` (
          `id` INTEGER(13) NOT NULL AUTO_INCREMENT,
          `rel_id` INTEGER(13) NOT NULL,
          `p1` INTEGER(13) NOT NULL,
          `p2` INTEGER(13) DEFAULT NULL,
          `p3` INTEGER(13) DEFAULT NULL,
          `s` INTEGER(13) NOT NULL,
          `p4` INTEGER(13) DEFAULT NULL,
          `p5` INTEGER(13) DEFAULT NULL,
          `p6` INTEGER(13) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY (`s`),
          KEY (`rel_id`),
          KEY (`p3`),
          KEY (`p4`)
        );

    Here are the queries:

        SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"

        SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"

        INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6)
        VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

    Here are some notes:

    - The SELECTs will be done much more frequently than the INSERT. However, occasionally I want to add a few hundred records at a time.
    - Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once.
    - Don't think I can normalize any more (need the p values in a combination).
    - The database as a whole is very relational.
    - This will be the largest table by far (next largest is about 900k).

    UPDATE (08/11/2010): Interestingly, I've been given a second option... Instead of 192 trillion I could store 2.6*10^16 (15 zeros, meaning 26 quadrillion)... But in this second option I would only need to store one bigint(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value, occasionally adding records, never deleting them. So that makes me think there must be a better solution than mysql for simply storing numbers... Given this second option, should I take it or stick with the first?

    [edit] Just got news of some testing that's been done - 100 million rows with this setup returns the query in 0.0004 seconds. [/edit]
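
    One hedged observation on the schema as given (not from the original thread): the first SELECT filters on rel_id, p3 and p4 together, and none of the single-column keys can serve all three predicates in one index lookup. At this scale, a composite index matching that predicate is the usual first step:

        -- Covers WHERE rel_id = ? AND p3 = ? AND p4 = ?
        -- (its leftmost prefix also serves lookups on rel_id alone,
        --  so the separate KEY (rel_id) could then be dropped)
        ALTER TABLE ref ADD KEY idx_rel_p3_p4 (rel_id, p3, p4);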

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

        Parallel.Invoke(
            () =>
            {
                Console.WriteLine("Action 1 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () =>
            {
                Console.WriteLine("Action 2 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () =>
            {
                Console.WriteLine("Action 3 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            }
        );

    Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let’s look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use the framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...

        SwapElements(array, left, last);

        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right)
        );

    This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation:

    When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how “deep” in the “tree” we are currently at, and if we are below some number of levels, stop parallelizing.  This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to:

        // Code above is unchanged...

        SwapElements(array, left, last);

        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }

    We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better.  On average, I get the following timings:

        Framework via Array.Sort:                  7.3 seconds
        Serial Quicksort Implementation:           9.3 seconds
        Naive Parallel Implementation:             14 seconds
        Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework’s Array.Sort implementation.

    Read the article

  • Backing up SQL Azure

    - by Herve Roggero
    That's it!!! After many days and nights... and an amazing set of challenges, I just released the Enzo Backup for SQL Azure BETA product (http://www.bluesyntax.net). Clearly, that was one of the most challenging projects I have done so far. Why??? Because creating a highly redundant system, expecting failures at all times, for an operation that could take anywhere from a couple of minutes to a couple of hours, and still making sure that the operation completes at some point, was remarkably challenging. Some routines have more error trapping than actual code... Here are a few things I had to take into account:

    - Exponential Backoff (explained in another post)
    - Dual dynamic determination of number of rows to backup
    - Dynamic reduction of batch rows used to restore the data
    - Implementation of a flexible BULK Insert API that the tool could use
    - Implementation of a custom Storage REST API to handle automatic retries
    - Automatic data chunking based on blob sizes
    - Compression of data
    - Implementation of the Task Parallel Library at multiple levels, including deserialization of Azure Table rows and backup/restore operations
    - Full or Partial Restore operations
    - Implementation of a Ghost class to serialize/deserialize data tables

    And that's just a partial list... I will explain what some of those mean in future blog posts. A lot of the complexity had to do with implementing a form of retry logic, depending on the resource and the operation.
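
    As a rough illustration of the first item, here is a minimal sketch of the generic exponential-backoff pattern (illustrative only, not the product's actual code):

        // Retry a transient operation, doubling the wait between attempts.
        // Requires: using System; using System.Threading;
        static T ExecuteWithBackoff<T>(Func<T> operation,
            int maxAttempts = 5, int baseDelayMs = 200)
        {
            for (int attempt = 0; ; attempt++)
            {
                try
                {
                    return operation();
                }
                catch (Exception) when (attempt < maxAttempts - 1)
                {
                    // Wait 200ms, 400ms, 800ms, ... before the next attempt
                    Thread.Sleep(baseDelayMs * (1 << attempt));
                }
            }
        }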

    Read the article

  • Talking JavaOne with Rock Star Charles Nutter

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers from the JavaOne Conference. They are awarded by their peers, who through conference surveys recognize them for their outstanding sessions and speaking ability. Over the years, many of the world’s leading Java developers have been so recognized. We spoke with distinguished Rock Star Charles Nutter.

    A JRuby Update from Charles Nutter

    Charles Nutter of Red Hat is well known as a lead developer of JRuby, an implementation of Ruby in Java that is tightly integrated with Java to allow for the embedding of the interpreter into any Java application, with full two-way access between the Java and the Ruby code. Nutter is giving the following sessions at this year’s JavaOne:

    - CON7257 – “JVM Bytecode for Dummies (and the Rest of Us Too)”
    - CON7284 – “Implementing Ruby: The Long, Hard Road”
    - CON7263 – “JVM JIT for Dummies”
    - BOF6682 – “I’ve Got 99 Languages, but Java Ain’t One”
    - CON6575 – “Polyglot for Dummies” (both with Thomas Enebo)

    I asked Nutter to give us the latest on JRuby. “JRuby seems to have hit a tipping point this past year,” he explained, “moving from ‘just another Ruby implementation’ to ‘the best Ruby implementation for X,’ where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications. We're currently wrapping up JRuby 1.7, which improves support for Ruby 1.9 APIs, solves a number of user issues and concurrency challenges, and utilizes invokedynamic to outperform all other Ruby implementations by a wide margin. JRuby just gets better and better.”

    When asked what he thought about the rapid growth of alternative languages for the JVM, he replied, “I'm very intrigued by efforts to bring a high-performance JavaScript runtime to the JVM. There's really no reason the JVM couldn't be the fastest platform for running JavaScript with the right implementation, and I'm excited to see that happen.”

    And what is Nutter working on currently? “Aside from JRuby 1.7 wrap-up,” he explained, “I'm helping the Hotspot developers investigate invokedynamic performance issues and test-driving their new invokedynamic code in Java 8. I'm also starting to explore ways to improve the general state of dynamic languages on the JVM using JRuby as a guide, and to help the JVM become a better platform for all kinds of languages.”

    Originally published on blogs.oracle.com/javaone.

    Read the article

  • Success Quote: A Hybrid Approach for Success

    - by Lauren Clark
    We recently received this quote from a project that successfully used OUM: “On our project, we applied a combination of the Oracle Unified Method (OUM) and the client's methodology. The project was organized by OUM's phases and a subset of OUM's processes, tasks, and templates. Using a hybrid of the two methods resulted in an implementation approach that was optimized for the client-specific requirements for this project." This hybrid approach is an excellent example of using OUM in the flexible and scalable manner in which it was intended. The project team was able to scale OUM to be fit-for-purpose for their given situation. It's great to see how merging what was needed out of OUM with the client’s methodology resulted in an implementation approach that more closely aligned to the business needs. Successfully scaling OUM is dependent on the needs of the particular project and/or engagement. The key is to use no more than is necessary to satisfy the requirements of the implementation and appropriately address risks. For more information, check out the "Tailoring OUM for Your Project" page, which can be accessed by first clicking on the "OUM should be scaled to fit your implementation" link on the OUM homepage and then drilling into the link on the subsequent page. Have you used OUM in conjunction with a partner or customer methodology? Please share your experiences with us.

    Read the article

  • Do you want to become an Oracle certified Expert in WebLogic & ADF?

    - by JuergenKress
    Hands-on Bootcamps Training Roadshows FY14: free hands-on training for community members – ADF & ADF Mobile Bootcamps and WebLogic Bootcamps. For all WebLogic & ADF experts, we offer 100 free vouchers, worth $195 each, to become an Oracle certified expert. To receive a WebLogic & ADF voucher, please send an e-mail with a screenshot of your WebLogic Server 12c PreSales Specialist or ADF 11g PreSales Specialist certificate to [email protected], including your name, company, e-mail address, and country, with the e-mail subject “free WebLogic & ADF voucher”! Or attend a local free “Test-Fest”. WebLogic & ADF Pre-Sales assessment (free online test) preparation: WebLogic 12c PreSales Specialist (OPN account required – need help?) and ADF 11g PreSales Specialist (OPN account required – need help?). Implementation assessment preparation: WebLogic 12c Implementation Specialist, WebLogic Bootcamp training material (community membership required), WebLogic Knowledge Zone Overview, ADF 11g Implementation Specialist, ADF 11g bootcamp training material (community membership required), and ADF Knowledge Zone Overview. Free vouchers are reserved for partners from Europe, the Middle East, and Africa; partners from any other countries, please contact your local partner manager! Vouchers are only valid until quarter end! For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Where to find Oracle Training for BI & EPM Partners

    - by Mike.Hallett(at)Oracle-BI&EPM
    We run both “Live Virtual Training” (web-based classes) and “In Class Training” in most countries around Europe, the Middle East and Africa. Some of these are subsidised for OPN partners, while others are available at a discount (usually 25%) to OPN partners via OU (Oracle University). To see what is scheduled for in-depth, hands-on implementation training for partners, see the Oracle Business Intelligence Enterprise Edition Plus Implementation Boot Camp. For example, these are some of the OBI 11g boot camps we currently have scheduled: 11–15 June 2012 in Bucharest, Romania; 21–23 August 2012 in Johannesburg, South Africa; and 24–28 September 2012 in Utrecht, Netherlands. Also available: the Oracle Essbase Implementation Boot Camp, Oracle GoldenGate Implementation Boot Camp, Hyperion Planning Boot Camp, Hyperion Financial Management Boot Camp, and Oracle Business Intelligence Applications for ERP Boot Camp. You can also filter your search for courses via the Partner Events Calendar @ http://events.oracle.com/search/search?group=Events&keyword=OPN+Only. Otherwise, it is worth checking the Oracle Partner Enablement BLOG for any BI / EPM news, especially the sub-blogs on the right for each country. There is also a monthly Partner Enablement Update (PDF) covering the latest partner training on Oracle's new products and new releases.

    Read the article

  • Can I remove the systems from a component entity system?

    - by nathan
    After reading a lot about entity/component-based engines, I feel like there is no real definition for this kind of engine. Reading this thread: Implementing features in an Entity System and the linked article made me think a lot. I did not feel comfortable using the System concept, so I'll write something else, inspired by this pattern. I'd like to know if you think it's a good way to organize game code and what improvements can be made. Compared with a stricter implementation of an entity/component-based engine, is my solution viable? Do I risk getting stuck at any point due to the lack of flexibility of this implementation (or anything else)? As in entity/component patterns, my engine has entities and components, but no systems, since the game logic is handled by the components. Also, I think the main difference is that my engine uses inheritance and OOP concepts in general; I mean, I don't try to minimize them. Entity: an entity is an abstract class. It holds its position, width and height, scale, and a list of linked components. The current implementation can be found here (Java). Every frame, the entity is updated (i.e., all the components linked to this entity are updated) and then rendered, if a render component is specified. Component: as with the entity, a component is an abstract class that must be extended to create new components. The behavior of an entity is created through its collection of components. The component implementation can be found here. Components are updated when the owning entity is updated, except for the one specific render component, which is invoked when the entity is rendered. An example would be a logic component (i.e., not a renderable component, but one that's updated each frame) in charge of listening for keyboard events, plus a render component in charge of displaying a plain sprite (i.e., not animated); a sketch along these lines follows.
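
    Below is a minimal sketch of the design described above, reconstructed from the description rather than taken from the poster's linked code (which the excerpt does not include); all class and method names are hypothetical. Logic components run on every update; a single optional render component is invoked separately during rendering.

        import java.util.ArrayList;
        import java.util.List;

        // A component carries one piece of behavior and a back-reference to its owner.
        abstract class Component {
            protected Entity owner;
            void setOwner(Entity owner) { this.owner = owner; }
            abstract void update(float delta);   // called once per frame
        }

        // An entity is a positioned game object whose behavior comes from its components.
        abstract class Entity {
            float x, y, width, height, scale = 1f;
            private final List<Component> components = new ArrayList<>();
            private Component renderComponent;   // optional, invoked only when rendering

            void addComponent(Component component) {
                component.setOwner(this);
                components.add(component);
            }

            void setRenderComponent(Component component) {
                component.setOwner(this);
                this.renderComponent = component;
            }

            // Updating the entity updates every logic component linked to it.
            void update(float delta) {
                for (Component component : components) {
                    component.update(delta);
                }
            }

            // Rendering delegates to the render component, if one is set.
            void render(float delta) {
                if (renderComponent != null) {
                    renderComponent.update(delta);
                }
            }
        }

        // Example logic component: moves its owner to the right every frame.
        class MoveRightComponent extends Component {
            private final float speed;
            MoveRightComponent(float speed) { this.speed = speed; }
            @Override
            void update(float delta) { owner.x += speed * delta; }
        }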

    Read the article

  • For a Javascript library, what is the best or standard way to support extensibility

    - by Michael Best
    Specifically, I want to support "plugins" that modify the behavior of parts of the library. I couldn't find much information on the web about this subject, but here are my ideas for how a library could be made extensible. First, the library exports an object with both public and "protected" functions, and a plugin can replace any of those functions, thus modifying the library's behavior. Advantages of this method are that it's simple and that the plugin's functions have full access to the library's "protected" functions. Disadvantages are that the library may be harder to maintain with a larger set of exposed functions, and that it could be hard to debug if multiple plugins are involved (how do you know which plugin modified which function?). Second, the library provides an "add plugin" function that accepts an object with a specific interface; internally, the library uses the plugin instead of its own code where appropriate. With this method, the internals of the library can be rearranged more freely as long as it still supports the same plugin interface, and different plugin interfaces could modify different parts of the library. A disadvantage of this method is that plugins may have to re-implement code that is already part of the library, since the library's internal functions are not exported. Third, the library provides a "set implementation" function that accepts an object inherited from a specific base object. The library's public API calls functions on the implementation object for any functionality that can be modified, and the base implementation object includes the core functionality, with both external (to the API) and internal functions. A plugin creates a new implementation object, which inherits from the base object and replaces any functions it wants to modify. This combines advantages and disadvantages of both the other methods. A sketch of the second approach appears below.
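
    The question concerns a JavaScript library, but the registration pattern in the second idea is language-agnostic. Here is a minimal sketch in Java (used for consistency with the other examples on this page); every name in it — FormatterPlugin, TextLibrary, addPlugin — is hypothetical, not from any real library.

        import java.util.ArrayList;
        import java.util.List;

        // The fixed contract a plugin must satisfy; the library's internals
        // may change freely as long as this interface is preserved.
        interface FormatterPlugin {
            boolean canHandle(Object value);
            String format(Object value);
        }

        class TextLibrary {
            private final List<FormatterPlugin> plugins = new ArrayList<>();

            // The "add plugin" function: plugins are registered explicitly,
            // which also makes it easier to see which plugin changed what.
            public void addPlugin(FormatterPlugin plugin) {
                plugins.add(plugin);
            }

            public String render(Object value) {
                // Use the first registered plugin that claims the value...
                for (FormatterPlugin plugin : plugins) {
                    if (plugin.canHandle(value)) {
                        return plugin.format(value);
                    }
                }
                // ...otherwise fall back to the library's own behavior.
                return String.valueOf(value);
            }
        }

    For instance, a plugin that renders integers as hex could implement canHandle with an instanceof check and format with Integer.toHexString; the library's default path stays untouched, which addresses the debugging concern raised for the first approach.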

    Read the article
