Search Results

Search found 29235 results on 1170 pages for 'dynamic management objects'.


  • Using a mounted NTFS share with nginx

    - by Hoff
    I have set up a local testing VM with Ubuntu Server 12.04 LTS and the LEMP stack. It's kind of an unconventional setup: instead of having all my PHP scripts on the local machine, I've mounted an NTFS share as the document root, because I do my development on Windows. I had everything working perfectly up until this morning; now I keep getting a dreaded 'File not found.' error. I am almost certain this must be somehow permission related, because if I copy my site over to /var/www, nginx and php-fpm have no problems serving my PHP scripts. What I can't figure out is why, all of a sudden (after a reboot of the server), no PHP files will be served; instead I just get the 'File not found.' error. Static files work fine, so I think it's PHP that is causing the headache.

    Both nginx and php-fpm are configured to run as the user www-data:

        root@ubuntu-server:~# ps aux | grep 'nginx\|php-fpm'
        root      1095  0.0  0.0   5816   792 ?      Ss  11:11  0:00 nginx: master process /opt/nginx/sbin/nginx -c /etc/nginx/nginx.conf
        www-data  1096  0.0  0.1   6016  1172 ?      S   11:11  0:00 nginx: worker process
        www-data  1098  0.0  0.1   6016  1172 ?      S   11:11  0:00 nginx: worker process
        root      1130  0.0  0.4 175560  4212 ?      Ss  11:11  0:00 php-fpm: master process (/etc/php5/php-fpm.conf)
        www-data  1131  0.0  0.3 175560  3216 ?      S   11:11  0:00 php-fpm: pool www
        www-data  1132  0.0  0.3 175560  3216 ?      S   11:11  0:00 php-fpm: pool www
        www-data  1133  0.0  0.3 175560  3216 ?      S   11:11  0:00 php-fpm: pool www
        root      1686  0.0  0.0   4368   816 pts/1  S+  11:11  0:00 grep --color=auto nginx\|php-fpm

    I have mounted the NTFS share at /mnt/webfiles by adding the following line to /etc/fstab:

        //192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=mypasswordhere,gid=33,uid=33 0 0

    where gid 33 is the www-data group and uid 33 is the user www-data. If I list the contents of one of my sites, you can see that the files do in fact belong to www-data:

        root@ubuntu-server:~# ls -l /mnt/webfiles/nTv5-2.0
        total 8
        drwxr-xr-x 0 www-data www-data    0 Jun  6 19:12 app
        drwxr-xr-x 0 www-data www-data    0 Aug 22 19:00 assets
        -rwxr-xr-x 0 www-data www-data 1150 Jan  4  2012 favicon.ico
        -rwxr-xr-x 0 www-data www-data 1412 Dec 28  2011 index.php
        drwxr-xr-x 0 www-data www-data    0 Jun  3 16:44 lib
        drwxr-xr-x 0 www-data www-data    0 Jan  3  2012 plugins
        drwxr-xr-x 0 www-data www-data    0 Jun  3 16:45 vendors

    If I switch to the www-data user, I have no problem creating a new file on the share:

        root@ubuntu-server:~# su www-data
        $ > /mnt/webfiles/test.txt
        $ ls -l /mnt/webfiles | grep test\.txt
        -rwxr-xr-x 0 www-data www-data 0 Sep  8 11:19 test.txt

    So there should be no problem reading or writing to the share with php-fpm running as the user www-data. Yet when I examine the error log of nginx, it's filled with a bunch of lines that look like the following:

        2012/09/08 11:22:36 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123"
        2012/09/08 11:22:39 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET /apc.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123"

    It's bizarre that this was working previously and now, all of a sudden, PHP is complaining that it can't "find" the scripts on the share. Does anybody know why this is happening?

    EDIT

    I tried editing php-fpm.conf and changing chdir to the following:

        chdir = /mnt/webfiles

    When I try to restart the php-fpm service, I get the error:

        Starting php-fpm [08-Sep-2012 14:20:55] ERROR: [pool www] the chdir path '/mnt/webfiles' does not exist or is not a directory

    This is a total load of bullshit because this directory DOES exist and is mounted! Any ls commands to list that directory work perfectly. Why the hell can't PHP-FPM see this directory?! Here are my configuration files for reference:

    nginx.conf

        user www-data;
        worker_processes 2;

        error_log /var/log/nginx/nginx.log info;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            multi_accept on;
        }

        http {
            include fastcgi.conf;
            include mime.types;
            default_type application/octet-stream;

            set_real_ip_from 127.0.0.1;
            real_ip_header X-Forwarded-For;

            ## Proxy
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 32m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffers 32 4k;

            ## Compression
            gzip on;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            ### TCP options
            tcp_nodelay on;
            tcp_nopush on;
            keepalive_timeout 65;
            sendfile on;

            include /etc/nginx/sites-enabled/*;
        }

    my site config

        server {
            listen 80;
            access_log /var/log/nginx/$host.access.log;
            error_log /var/log/nginx/error.log;
            root /mnt/webfiles/nTv5-2.0/app/webroot;
            index index.php;

            ## Block bad bots
            if ($http_user_agent ~* (HTTrack|HTMLParser|libcurl|discobot|Exabot|Casper|kmccrew|plaNETWORK|RPT-HTTPClient)) {
                return 444;
            }

            ## Block certain Referers (case insensitive)
            if ($http_referer ~* (sex|vigra|viagra) ) {
                return 444;
            }

            ## Deny dot files:
            location ~ /\. {
                deny all;
            }

            ## Favicon Not Found
            location = /favicon.ico {
                access_log off;
                log_not_found off;
            }

            ## Robots.txt Not Found
            location = /robots.txt {
                access_log off;
                log_not_found off;
            }

            if (-f $document_root/maintenance.html) {
                rewrite ^(.*)$ /maintenance.html last;
            }

            location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
                # Some basic cache-control for static files to be sent to the browser
                expires max;
                add_header Pragma public;
                add_header Cache-Control "max-age=2678400, public, must-revalidate";
            }

            location / {
                try_files $uri $uri/ index.php;
                if (-f $request_filename) {
                    break;
                }
                rewrite ^(.+)$ /index.php?url=$1 last;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi.conf;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }

    php-fpm.conf

        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;

        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix (/opt/php5). This prefix can be dynamicaly changed by using the
        ; '-p' argument from the command line.

        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        ; Relative path can also be used. They will be prefixed by:
        ;  - the global prefix if it's been set (-p arguement)
        ;  - /opt/php5 otherwise
        ;include=etc/fpm.d/*.conf

        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;

        [global]
        ; Pid file
        ; Note: the default prefix is /opt/php5/var
        ; Default Value: none
        pid = /var/run/php-fpm.pid

        ; Error log file
        ; Note: the default prefix is /opt/php5/var
        ; Default Value: log/php-fpm.log
        error_log = /var/log/php5-fpm/php-fpm.log

        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice

        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0

        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0

        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0

        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        ;daemonize = yes

        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;

        ; Multiple pools of child processes may be started with different listening
        ; ports and different management options. The name of the pool will be
        ; used in logs and stats. There is no limitation on the number of pools which
        ; FPM can handle. Your system will tell you anyway :)

        ; Start a new pool named 'www'.
        ; the variable $pool can we used in any directive and will be replaced by the
        ; pool name ('www' here)
        [www]

        ; Per pool prefix
        ; It only applies on the following directives:
        ; - 'slowlog'
        ; - 'listen' (unixsocket)
        ; - 'chroot'
        ; - 'chdir'
        ; - 'php_values'
        ; - 'php_admin_values'
        ; When not set, the global prefix (or /opt/php5) applies instead.
        ; Note: This directive can also be relative to the global prefix.
        ; Default Value: none
        ;prefix = /path/to/pools/$pool

        ; The address on which to accept FastCGI requests.
        ; Valid syntaxes are:
        ;   'ip.add.re.ss:port'    - to listen on a TCP socket to a specific address on
        ;                            a specific port;
        ;   'port'                 - to listen on a TCP socket to all addresses on a
        ;                            specific port;
        ;   '/path/to/unix/socket' - to listen on a unix socket.
        ; Note: This value is mandatory.
        ;listen = 127.0.0.1:9000
        listen = /var/run/php5-fpm.sock

        ; Set listen(2) backlog. A value of '-1' means unlimited.
        ; Default Value: 128 (-1 on FreeBSD and OpenBSD)
        ;listen.backlog = -1

        ; List of ipv4 addresses of FastCGI clients which are allowed to connect.
        ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
        ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
        ; must be separated by a comma. If this value is left blank, connections will be
        ; accepted from any ip address.
        ; Default Value: any
        ;listen.allowed_clients = 127.0.0.1

        ; Set permissions for unix socket, if one is used. In Linux, read/write
        ; permissions must be set in order to allow connections from a web server. Many
        ; BSD-derived systems allow connections regardless of permissions.
        ; Default Values: user and group are set as the running user
        ;                 mode is set to 0666
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666

        ; Unix user/group of processes
        ; Note: The user is mandatory. If the group is not set, the default user's group
        ;       will be used.
        user = www-data
        group = www-data

        ; Choose how the process manager will control the number of child processes.
        ; Possible Values:
        ;   static  - a fixed number (pm.max_children) of child processes;
        ;   dynamic - the number of child processes are set dynamically based on the
        ;             following directives:
        ;             pm.max_children      - the maximum number of children that can
        ;                                    be alive at the same time.
        ;             pm.start_servers     - the number of children created on startup.
        ;             pm.min_spare_servers - the minimum number of children in 'idle'
        ;                                    state (waiting to process). If the number
        ;                                    of 'idle' processes is less than this
        ;                                    number then some children will be created.
        ;             pm.max_spare_servers - the maximum number of children in 'idle'
        ;                                    state (waiting to process). If the number
        ;                                    of 'idle' processes is greater than this
        ;                                    number then some children will be killed.
        ; Note: This value is mandatory.
        pm = dynamic

        ; The number of child processes to be created when pm is set to 'static' and the
        ; maximum number of child processes to be created when pm is set to 'dynamic'.
        ; This value sets the limit on the number of simultaneous requests that will be
        ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
        ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
        ; CGI.
        ; Note: Used when pm is set to either 'static' or 'dynamic'
        ; Note: This value is mandatory.
        pm.max_children = 50

        ; The number of child processes created on startup.
        ; Note: Used only when pm is set to 'dynamic'
        ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
        pm.start_servers = 20

        ; The desired minimum number of idle server processes.
        ; Note: Used only when pm is set to 'dynamic'
        ; Note: Mandatory when pm is set to 'dynamic'
        pm.min_spare_servers = 5

        ; The desired maximum number of idle server processes.
        ; Note: Used only when pm is set to 'dynamic'
        ; Note: Mandatory when pm is set to 'dynamic'
        pm.max_spare_servers = 35

        ; The number of requests each child process should execute before respawning.
        ; This can be useful to work around memory leaks in 3rd party libraries. For
        ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
        ; Default Value: 0
        pm.max_requests = 500

        ; The URI to view the FPM status page. If this value is not set, no URI will be
        ; recognized as a status page. By default, the status page shows the following
        ; information:
        ;   accepted conn        - the number of request accepted by the pool;
        ;   pool                 - the name of the pool;
        ;   process manager      - static or dynamic;
        ;   idle processes       - the number of idle processes;
        ;   active processes     - the number of active processes;
        ;   total processes      - the number of idle + active processes.
        ;   max children reached - number of times, the process limit has been reached,
        ;                          when pm tries to start more children (works only for
        ;                          pm 'dynamic')
        ; The values of 'idle processes', 'active processes' and 'total processes' are
        ; updated each second. The value of 'accepted conn' is updated in real time.
        ; Example output:
        ;   accepted conn:        12073
        ;   pool:                 www
        ;   process manager:      static
        ;   idle processes:       35
        ;   active processes:     65
        ;   total processes:      100
        ;   max children reached: 1
        ; By default the status page output is formatted as text/plain. Passing either
        ; 'html' or 'json' as a query string will return the corresponding output
        ; syntax. Example:
        ;   http://www.foo.bar/status
        ;   http://www.foo.bar/status?json
        ;   http://www.foo.bar/status?html
        ; Note: The value must start with a leading slash (/). The value can be
        ;       anything, but it may not be a good idea to use the .php extension or it
        ;       may conflict with a real PHP file.
        ; Default Value: not set
        pm.status_path = /status

        ; The ping URI to call the monitoring page of FPM. If this value is not set, no
        ; URI will be recognized as a ping page. This could be used to test from outside
        ; that FPM is alive and responding, or to
        ; - create a graph of FPM availability (rrd or such);
        ; - remove a server from a group if it is not responding (load balancing);
        ; - trigger alerts for the operating team (24/7).
        ; Note: The value must start with a leading slash (/). The value can be
        ;       anything, but it may not be a good idea to use the .php extension or it
        ;       may conflict with a real PHP file.
        ; Default Value: not set
        ping.path = /ping

        ; This directive may be used to customize the response of a ping request. The
        ; response is formatted as text/plain with a 200 response code.
        ; Default Value: pong
        ping.response = pong

        ; The timeout for serving a single request after which the worker process will
        ; be killed. This option should be used when the 'max_execution_time' ini option
        ; does not stop script execution for some reason. A value of '0' means 'off'.
        ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
        ; Default Value: 0
        ;request_terminate_timeout = 0

        ; The timeout for serving a single request after which a PHP backtrace will be
        ; dumped to the 'slowlog' file. A value of '0s' means 'off'.
        ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
        ; Default Value: 0
        ;request_slowlog_timeout = 0

        ; The log file for slow requests
        ; Default Value: not set
        ; Note: slowlog is mandatory if request_slowlog_timeout is set
        ;slowlog = log/$pool.log.slow

        ; Set open file descriptor rlimit.
        ; Default Value: system defined value
        ;rlimit_files = 1024

        ; Set max core size rlimit.
        ; Possible Values: 'unlimited' or an integer greater or equal to 0
        ; Default Value: system defined value
        ;rlimit_core = 0

        ; Chroot to this directory at the start. This value must be defined as an
        ; absolute path. When this value is not set, chroot is not used.
        ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one
        ; of its subdirectories. If the pool prefix is not set, the global prefix
        ; will be used instead.
        ; Note: chrooting is a great security feature and should be used whenever
        ;       possible. However, all PHP paths will be relative to the chroot
        ;       (error_log, sessions.save_path, ...).
        ; Default Value: not set
        ;chroot =

        ; Chdir to this directory at the start.
        ; Note: relative path can be used.
        ; Default Value: current directory or / when chroot
        ;chdir = /var/www

        ; Redirect worker stdout and stderr into main error log. If not set, stdout and
        ; stderr will be redirected to /dev/null according to FastCGI specs.
        ; Note: on highloaded environement, this can cause some delay in the page
        ; process time (several ms).
        ; Default Value: no
        ;catch_workers_output = yes

        ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
        ; the current environment.
        ; Default Value: clean env
        ;env[HOSTNAME] = $HOSTNAME
        ;env[PATH] = /usr/local/bin:/usr/bin:/bin
        ;env[TMP] = /tmp
        ;env[TMPDIR] = /tmp
        ;env[TEMP] = /tmp

        ; Additional php.ini defines, specific to this pool of workers. These settings
        ; overwrite the values previously defined in the php.ini. The directives are the
        ; same as the PHP SAPI:
        ;   php_value/php_flag             - you can set classic ini defines which can
        ;                                    be overwritten from PHP call 'ini_set'.
        ;   php_admin_value/php_admin_flag - these directives won't be overwritten by
        ;                                    PHP call 'ini_set'
        ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.
        ; Defining 'extension' will load the corresponding shared extension from
        ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
        ; overwrite previously defined php.ini values, but will append the new value
        ; instead.
        ; Note: path INI options can be relative and will be expanded with the prefix
        ; (pool, global or /opt/php5)
        ; Default Value: nothing is defined by default except the values in php.ini and
        ;                specified at startup with the -d argument
        ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
        ;php_flag[display_errors] = off
        ;php_admin_value[error_log] = /var/log/fpm-php.www.log
        ;php_admin_flag[log_errors] = on
        ;php_admin_value[memory_limit] = 32M
        php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i
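
    One thing worth ruling out here, as a sketch only (it assumes Ubuntu 12.04's default boot ordering, and the service name may differ on a custom /opt/php5 build): if php-fpm starts before the CIFS mount is up after a reboot, its chdir check fails exactly as shown, even though a later ls succeeds once the share is mounted. Adding _netdev to the cifs options in /etc/fstab so the mount waits for networking is an assumption worth testing.

        # After a reboot, before touching anything else: is the share there yet?
        mount | grep /mnt/webfiles

        # Can www-data itself see a script through the mount right now?
        sudo -u www-data stat /mnt/webfiles/nTv5-2.0/app/webroot/index.php

        # If both look fine, a plain restart now (with the mount up) is a quick test:
        sudo service php5-fpm restart   # or php-fpm, depending on the init script name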


  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap of the datastores that they access, so any temporary resources created may be uniquely constructed by ODI. Temporary objects can be anything, basically - common examples are staging tables, indexes, views, directories - anything in the ETL that helps the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).

    1. Firstly as a Mapping Developer

    1.1 Control when uniqueness is enabled

    A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.

    1.2 Handle cleanup after successful execution

    Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.

    1.3 Handle cleanup after unsuccessful execution

    If an execution failed in ODI 11g, temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like.

    That's all there is to it from the aspect of the mapping developer; it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.

    2. Secondly as a Procedure or KM Developer

    In the ODI Operator the executed code shows the actual name that is generated; you can also see the runtime code prior to execution (introduced in 11.1.1.7). For example, with the code type 'Pre-executed Code' selected you can see the code about to be processed, and you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs.

    2.1 New uniqueness tags

    You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names will always include the unique tag regardless of the concurrency setting.

    To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology, and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples are:

        <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?>
        SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ

        <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?>
        SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_

    Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are APIs for retrieving this tag info as well; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):

    UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61. Note that this will be a different value for each loop iteration when the step is in a loop.
    UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9
    IS_CONCURRENT - Returns info about the current mapping; will return 0 or 1 (only in % phase)
    GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in % phase)

    The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping; it will return 0 or 1.

    2.2 Additional APIs

    Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols that can optionally be truncated to a maximum length and use a specific encoding for the unique tag. It has the syntax getFormattedName(String pName[, String pTechnologyCode]). This API is available at both the % and the ? phase. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name provided that the same parameters are passed to the call. For example:

        <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>

        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?>
        C$_MY_TAB7wDiBe80vBog1auacS1xB_AE

        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?>
        C2_MY_TAB7wDiBe80vBog1auacS1xB.log

    2.3 Name length generation

    As part of name generation, the length of the generated name will be compared with the maximum length for the target technology, and truncation may need to be applied. When a unique tag is included in the generated string, it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to guarantee uniqueness of the final resultant name.

    SUMMARY

    To summarize, ODI 12c makes it much simpler to use mappings in concurrent cases, and it provides APIs that help you develop procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.
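
    As a small aside, the session tag can be used for your own work tables in a procedure too. A sketch only (the table name, columns, and the pairing of a create task with a cleanup task are hypothetical; getInfo and UNIQUE_SESSION_TAG are as described above):

        -- Task "Create work table" (target command, Oracle technology)
        create table TMP_LOAD_<?=odiRef.getInfo("UNIQUE_SESSION_TAG")?>
        (
          emp_id   number,
          emp_name varchar2(100)
        )

        -- Cleanup task: within the same session the tag resolves to the same
        -- value, so this drops exactly the table created above
        drop table TMP_LOAD_<?=odiRef.getInfo("UNIQUE_SESSION_TAG")?>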


  • Make curl download using non-Privacy extension IPv6 address?

    - by Azendale
    I currently use net.ipv6.conf.all.use_tempaddr=2 to get IPv6 privacy addresses (which have a random host part and are regenerated a couple of times a day). I need dynamic DNS because the computer is connected to different networks, and that changes the network part of the address. I'm using curl to download a dynamic DNS URL, and I want it to use the non-random address that is derived from my MAC. How can I make curl prefer the non-privacy address?
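
    One possible approach, sketched below (the interface name eth0 and the update URL are placeholders): iproute2 flags privacy addresses as "temporary", so you can pick out the MAC-derived address and hand it to curl with --interface, which also accepts a literal IP address:

        # Global IPv6 addresses; the privacy ones are flagged "temporary"
        ip -6 addr show dev eth0 scope global

        # Pick the first non-temporary (EUI-64, MAC-derived) address
        addr=$(ip -6 addr show dev eth0 scope global \
               | grep -v temporary \
               | awk '/inet6/ {print $2}' \
               | cut -d/ -f1 | head -n1)

        # Source the request from that address
        curl --interface "$addr" "http://dyndns.example.com/update"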


  • SQL Monitor’s data repository

    - by Chris Lambrou
    As one of the developers of SQL Monitor, I often get requests passed on by our support people from customers who are looking to dip into SQL Monitor's own data repository, in order to pull out bits of information that they're interested in. Since there's clearly interest out there in playing around directly with the data repository, I thought I'd write some blog posts to start to describe how it all works. The hardest part for me is knowing where to begin, since the schema of the data repository is pretty big. Hmmm... I guess it's tricky for anyone to write anything but the most trivial of queries against the data repository without understanding the hierarchy of monitored objects, so perhaps my first post should start there.

    I always imagine that whenever a customer fires up SSMS and starts to explore their SQL Monitor data repository database, they become immediately bewildered by the schema - that was certainly my experience when I did so for the first time. The following query shows the number of different object types in the data repository schema:

        SELECT type_desc, COUNT(*) AS [count]
        FROM sys.objects
        GROUP BY type_desc
        ORDER BY type_desc;

        type_desc                         count
        DEFAULT_CONSTRAINT                   63
        FOREIGN_KEY_CONSTRAINT              181
        INTERNAL_TABLE                        3
        PRIMARY_KEY_CONSTRAINT              190
        SERVICE_QUEUE                         3
        SQL_INLINE_TABLE_VALUED_FUNCTION    381
        SQL_SCALAR_FUNCTION                   2
        SQL_STORED_PROCEDURE                100
        SYSTEM_TABLE                         41
        UNIQUE_CONSTRAINT                    54
        USER_TABLE                          193
        VIEW                                124

    With 193 tables, 124 views, 100 stored procedures and 381 table-valued functions, that's quite a hefty schema, and when you browse through it using SSMS, it can be a bit daunting at first. So, where to begin? Well, let's narrow things down a bit and only look at the tables belonging to the data schema. That's where all of the collected monitoring data is stored by SQL Monitor. The following query gives us the names of those tables:

        SELECT sch.name + '.' + obj.name AS [name]
        FROM sys.objects obj
        JOIN sys.schemas sch ON sch.schema_id = obj.schema_id
        WHERE obj.type_desc = 'USER_TABLE'
        AND sch.name = 'data'
        ORDER BY sch.name, obj.name;

    This query still returns 110 tables. I won't show them all here, but let's have a look at the first few of them:

        data.Cluster_Keys
        data.Cluster_Machine_ClockSkew_UnstableSamples
        data.Cluster_Machine_Cluster_StableSamples
        data.Cluster_Machine_Keys
        data.Cluster_Machine_LogicalDisk_Capacity_StableSamples
        data.Cluster_Machine_LogicalDisk_Keys
        data.Cluster_Machine_LogicalDisk_Sightings
        data.Cluster_Machine_LogicalDisk_UnstableSamples
        data.Cluster_Machine_LogicalDisk_Volume_StableSamples
        data.Cluster_Machine_Memory_Capacity_StableSamples
        data.Cluster_Machine_Memory_UnstableSamples
        data.Cluster_Machine_Network_Capacity_StableSamples
        data.Cluster_Machine_Network_Keys
        data.Cluster_Machine_Network_Sightings
        data.Cluster_Machine_Network_UnstableSamples
        data.Cluster_Machine_OperatingSystem_StableSamples
        data.Cluster_Machine_Ping_UnstableSamples
        data.Cluster_Machine_Process_Instances
        data.Cluster_Machine_Process_Keys
        data.Cluster_Machine_Process_Owner_Instances
        data.Cluster_Machine_Process_Sightings
        data.Cluster_Machine_Process_UnstableSamples
        ...

    There are two things I want to draw your attention to:

    1. The table names describe a hierarchy of the different types of object that are monitored by SQL Monitor (e.g. clusters, machines and disks).
    2. For each object type in the hierarchy, there are multiple tables, ending in the suffixes _Keys, _Sightings, _StableSamples and _UnstableSamples. Not every object type has a table for every suffix, but the _Keys suffix is especially important, and a _Keys table does indeed exist for every object type.

    In fact, if we limit the query to return only those tables ending in _Keys, we reveal the full object hierarchy:

        SELECT sch.name + '.' + obj.name AS [name]
        FROM sys.objects obj
        JOIN sys.schemas sch ON sch.schema_id = obj.schema_id
        WHERE obj.type_desc = 'USER_TABLE'
        AND sch.name = 'data'
        AND obj.name LIKE '%_Keys'
        ORDER BY sch.name, obj.name;

        data.Cluster_Keys
        data.Cluster_Machine_Keys
        data.Cluster_Machine_LogicalDisk_Keys
        data.Cluster_Machine_Network_Keys
        data.Cluster_Machine_Process_Keys
        data.Cluster_Machine_Services_Keys
        data.Cluster_ResourceGroup_Keys
        data.Cluster_ResourceGroup_Resource_Keys
        data.Cluster_SqlServer_Agent_Job_History_Keys
        data.Cluster_SqlServer_Agent_Job_Keys
        data.Cluster_SqlServer_Database_BackupType_Backup_Keys
        data.Cluster_SqlServer_Database_BackupType_Keys
        data.Cluster_SqlServer_Database_CustomMetric_Keys
        data.Cluster_SqlServer_Database_File_Keys
        data.Cluster_SqlServer_Database_Keys
        data.Cluster_SqlServer_Database_Table_Index_Keys
        data.Cluster_SqlServer_Database_Table_Keys
        data.Cluster_SqlServer_Error_Keys
        data.Cluster_SqlServer_Keys
        data.Cluster_SqlServer_Services_Keys
        data.Cluster_SqlServer_SqlProcess_Keys
        data.Cluster_SqlServer_TopQueries_Keys
        data.Cluster_SqlServer_Trace_Keys
        data.Group_Keys

    The full object type hierarchy looks like this:

        Cluster
            Machine
                LogicalDisk
                Network
                Process
                Services
            ResourceGroup
                Resource
            SqlServer
                Agent
                    Job
                        History
                Database
                    BackupType
                        Backup
                    CustomMetric
                    File
                    Table
                        Index
                Error
                Services
                SqlProcess
                TopQueries
                Trace
        Group

    Okay, but what about the individual objects themselves, represented at each level in this hierarchy? Well, that's what the _Keys tables are for. This is probably best illustrated by way of a simple example - how can I query my own data repository to find the databases on my own PC for which monitoring data has been collected? Like this:

        SELECT clstr._Name AS cluster_name,
               srvr._Name AS instance_name,
               db._Name AS database_name
        FROM data.Cluster_SqlServer_Database_Keys db
        JOIN data.Cluster_SqlServer_Keys srvr
          ON db.ParentId = srvr.Id -- Note here how the parent of a Database is a Server
        JOIN data.Cluster_Keys clstr
          ON srvr.ParentId = clstr.Id -- Note here how the parent of a Server is a Cluster
        WHERE clstr._Name = 'dev-chrisl2' -- This is the hostname of my own PC
        ORDER BY clstr._Name, srvr._Name, db._Name;

        cluster_name  instance_name  database_name
        dev-chrisl2                  SqlMonitorData
        dev-chrisl2                  master
        dev-chrisl2                  model
        dev-chrisl2                  msdb
        dev-chrisl2                  mssqlsystemresource
        dev-chrisl2                  tempdb
        dev-chrisl2   sql2005        SqlMonitorData
        dev-chrisl2   sql2005        TestDatabase
        dev-chrisl2   sql2005        master
        dev-chrisl2   sql2005        model
        dev-chrisl2   sql2005        msdb
        dev-chrisl2   sql2005        mssqlsystemresource
        dev-chrisl2   sql2005        tempdb
        dev-chrisl2   sql2008        SqlMonitorData
        dev-chrisl2   sql2008        master
        dev-chrisl2   sql2008        model
        dev-chrisl2   sql2008        msdb
        dev-chrisl2   sql2008        mssqlsystemresource
        dev-chrisl2   sql2008        tempdb

    These results show that I have three SQL Server instances on my machine (a default instance, one named sql2005 and one named sql2008), and each instance has the usual set of system databases, along with a database named SqlMonitorData. Basically, this is where I test SQL Monitor on different versions of SQL Server, when I'm developing.

    There are a few important things we can learn from this query:

    1. Each _Keys table has a column named Id. This is the primary key.
    2. Each _Keys table has a column named ParentId. A foreign key relationship is defined between each _Keys table and its parent _Keys table in the hierarchy. There are two exceptions to this, Cluster_Keys and Group_Keys, because clusters and groups live at the root level of the object hierarchy.
    3. Each _Keys table has a column named _Name. This is used to uniquely identify objects in the table within the scope of the same shared parent object.

    Actually, that last item isn't always true. In some cases, the _Name column is actually called something else. For example, the data.Cluster_Machine_Services_Keys table has a column named _ServiceName instead of _Name (sorry for the inconsistency). In other cases, a name isn't sufficient to uniquely identify an object. For example, right now my PC has multiple processes running, all sharing the same name, Chrome (one for each tab open in my web browser). In such cases, multiple columns are used to uniquely identify an object within the scope of the same shared parent object.

    Well, that's it for now. I've given you enough information for you to explore the _Keys tables to see how objects are stored in your own data repositories. In a future post, I'll try to explain how monitoring data is stored for each object, using the _StableSamples and _UnstableSamples tables. If you have any questions about this post, or suggestions for future posts, just submit them in the comments section below.


  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    Using a static language like C# tends to mean hard assembly bindings for everything. But what if you want to provide an assembly only optionally, if the functionality is actually used by the user? In this article I discuss a scenario where dynamic loading and activation made sense for me, and show the code required to activate and use components loaded at runtime, using Reflection and dynamic in combination.
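
    The article's full code isn't reproduced in this excerpt, but the pattern it describes can be sketched roughly as follows (the assembly file and type name here are invented for illustration):

        using System;
        using System.IO;
        using System.Reflection;

        public static class OptionalFeatureLoader
        {
            // Load an optional assembly at runtime and activate a type from it.
            // Returns null when the assembly isn't deployed, so there is no hard
            // reference (and no deployment requirement) for users who never
            // touch the feature.
            public static dynamic CreateInstance(string assemblyPath, string typeName)
            {
                if (!File.Exists(assemblyPath))
                    return null;

                Assembly assembly = Assembly.LoadFrom(assemblyPath);
                Type type = assembly.GetType(typeName);
                if (type == null)
                    return null;

                // 'dynamic' defers member binding to runtime, so no shared
                // interface assembly is required to call into the component.
                return Activator.CreateInstance(type);
            }
        }

        // Usage (names are hypothetical):
        // dynamic viewer = OptionalFeatureLoader.CreateInstance(
        //     Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "MyApp.Reporting.dll"),
        //     "MyApp.Reporting.ReportViewer");
        // if (viewer != null) viewer.ShowReport("Sales");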


  • APPLY LATE BINDING IN .NET 4.0 AND DIFFERENTIATE IT WITH VAR KEYWORD

    Late binding is a common term among VB 6.0 programmers; C# has always been strongly typed. In the 3.x releases the var keyword was introduced, which provides implicit typing: the type is still fixed at compile time, so this is not late binding. With the 4.0 release came the dynamic keyword, which fully supports late binding. Below, the difference between var and dynamic is explained, along with a simple example of where dynamic can be used in C#.
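
    A minimal sketch of that difference (standard C# behavior, not code taken from the article):

        using System;

        class Program
        {
            static void Main()
            {
                // var: the compiler infers string at compile time; binding is static
                var greeting = "hello";
                Console.WriteLine(greeting.Length);  // checked at compile time
                // greeting = 42;                    // compile error: still a string

                // dynamic: member lookup is deferred until runtime (late binding)
                dynamic value = "hello";
                Console.WriteLine(value.Length);     // resolved at runtime: 5
                value = 42;                          // fine: the static type is 'dynamic'
                // Console.WriteLine(value.Length);  // compiles, but would throw
                                                     // RuntimeBinderException at runtime
            }
        }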


  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation

    Earlier this evening I gave an answer to a question on StackOverflow. The question:

        Editing of an existing object should be done in repository layer or in service? For example if I have a User that has debt. I want to change his debt. Should I do it in UserRepository or in service, for example BuyingService, by getting an object, editing it and saving it?

    My answer: you should leave the responsibility of mutating an object to that same object, and use the repository to retrieve this object. Example situation:

        class User
        {
            private int debt; // debt in cents
            private string name;

            // getters

            public void makePayment(int cents)
            {
                debt -= cents;
            }
        }

        class UserRepository
        {
            public User GetUserByName(string name)
            {
                // Get appropriate user from database
            }
        }

    A comment I received: "Business logic should really be in a service. Not in a model."

    What does the internet say?

    So, this got me searching, since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit Of Work pattern, but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model:

        There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it.

    To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However, I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is considered more as a façade that delegates work to the underlying model than as an actual work-intensive layer.

        Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program.

    Which is reinforced here:

        Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers.

    And here:

        The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client.

    This is a serious contrast to other resources that talk about the Service Layer:

        The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction.

    Or the second answer to a question I've already linked:

        At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer.

    "Solution"?

    Following the guidelines in this answer, I came up with the following approach that uses a Service Layer:

        class UserController : Controller
        {
            private UserService _userService;

            public UserController(UserService userService)
            {
                _userService = userService;
            }

            public ActionResult MakeHimPay(string username, int amount)
            {
                _userService.MakeHimPay(username, amount);
                return RedirectToAction("ShowUserOverview");
            }

            public ActionResult ShowUserOverview()
            {
                return View();
            }
        }

        class UserService
        {
            private IUserRepository _userRepository;

            public UserService(IUserRepository userRepository)
            {
                _userRepository = userRepository;
            }

            public void MakeHimPay(string username, int amount)
            {
                _userRepository.GetUserByName(username).makePayment(amount);
            }
        }

        class UserRepository
        {
            public User GetUserByName(string name)
            {
                // Get appropriate user from database
            }
        }

        class User
        {
            private int debt; // debt in cents
            private string name;

            // getters

            public void makePayment(int cents)
            {
                debt -= cents;
            }
        }

    Conclusion

    All together, not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However, this doesn't look like it had anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded.

    Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least, following the original comment - "Business logic should really be in a service. Not in a model." - is this correct? How would I introduce my business logic in a service instead of the model?


  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview

    Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions which will be consistent from one functional area to the next.

    Companies should take a project management approach when implementing an environment for a sustainable documentation program, and do the following:

    1. Identify an Executive Champion
    2. Put together a winning team
    3. Assign ownership
    4. Centralize publishing
    5. Establish the Document Maintenance Process Up Front
    6. Document critical activities only
    7. Document actual practice
    8. Minimize documentation
    9. Support continuous improvement
    10. Keep it simple

    1. Identify an Executive Champion

    Appoint a top-down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, or the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when they are known to express the thinking of the chief executive officer and/or the president and to have his or her full support.

    2. Put Together a Winning Team

    Choose a strong Project Management Leader and staff the procedure planning team with management members from cross-functional groups. Make sure team members have the responsibility - and the authority - to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.

    3. Assign Ownership

    It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up to date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity, as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate, practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.

    4. Centralize Publishing

    Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents - with each owner updating and distributing his own - Tutor promotes centralized publishing by assigning the Document Administrator (gatekeeper) to manage the updates and distribution of the procedures library.

    5. Establish a Document Maintenance Process Up Front (and stick to it)

    Everyone in your organization should know they are invited to suggest changes to procedures, and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated.

    6. Document Critical Activities Only

    The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum, and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist, and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes, an activity will evolve to the point where documentation is necessary. For example, an activity performed by a single employee may be straightforward and uncomplicated - that is, until the activity is performed by multiple employees. Sometimes it is the interaction between co-workers that necessitates documentation; sometimes it is the complexity or the diversity of the activity.

    7. Document Actual Practices

    The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality - that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten. Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.

    8. Minimize Documentation

    One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary. When writing a document, you should ask yourself: what is the purpose of this document? That is, what problem will it solve? By focusing on this question, you can target the critical information.

    • What questions are the end users likely to have?
    • What level of detail is required?
    • Is any of this information extraneous to the document's purpose?

    Short, concise documents are user friendly, and they are easier to keep up to date.

    9. Support Continuous Improvement

    Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself - but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees. Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content - does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action - but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee.

    10. Keep It Simple

    Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up to date. Keep it simple by:

    • Minimizing the skills and training required
    • Following the established Tutor document format and layout
    • Avoiding technology just for technology's sake

    No other rule has as major an impact on the success of your internal documentation as: keep it simple.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM


  • links for 2010-06-07

    - by Bob Rhubart
    Dynamic Data Lookup in a Business Process In the latest installment of the SOA Suite Essentials for WLI Users article series, Simone Geib shows how dynamic data can be retrieved at run-time in a business process through Domain Value Maps in SOA Suite and the similarities to an XML MetaData Cache control in Oracle WebLogic Integration. (tags: oracle soa weblogic)


  • Effectiveness and Efficiency

    - by Daniel Moth
    In the professional environment, i.e. at work, I am always seeking personal growth and to be challenged. The result is that my assignments, my work list, my tasks, my goals, my commitments, my [insert whatever word resonates with you] keep growing (in scope and desired impact). Which in turn means I have to keep finding new ways to deliver more value, while not falling into the trap of working more hours. To do that, I continuously evaluate both my effectiveness and my efficiency.

    EFFECTIVENESS

    The first thing I check is my effectiveness: Am I doing the right things? Am I focusing too much on unimportant things? Am I spending more time doing stuff that is important to my team/org/division/business/company, or am I spending it on stuff that is important to me and that I enjoy doing? Am I valuing activities that maybe I have outgrown and should be delegated to others who are at a stage I have surpassed (in Microsoft speak: is the work I am doing level appropriate, or am I still operating at the previous level)?

    Notice how the answers to those questions change over time and due to certain events, so I have to remind myself to revisit them frequently. Events that force me to re-examine them are: change of role, change of team/org/etc, change of direction of team/org/etc, re-org, new hires on the team that take on some of the work I did, personal promotion, change of manager... and if none of those events has occurred since the last annual review, I ask myself those at each annual review anyway.

    If you think you are not being effective at work, make a list of the stuff that you do and start tracking where your time goes. In parallel, have a discussion with your manager about where they think your time should go. Ultimately your time is finite and hence it is your most precious investment; don't waste it. If your management doesn't value as highly what you spend your time on, then either convince your management, or stop spending your time on it, or find different management: Lead, Follow, or get out of the way!

    That's my view on effectiveness. You have to fix that before moving to being efficient, or you may end up being very efficient at stuff that nobody wants you to be doing in the first place. For example, you may be spending your time writing blog posts and becoming better and faster at it all the time. If your manager thinks that is not even part of your job description, you are wasting your time to satisfy your inner desires. Nobody can help you with your effectiveness other than your management chain and your management peers - they are the judges of it.

    EFFICIENCY

    The second thing I check is my efficiency: Am I doing things right? For me, doing things right means that I deliver the same quality of work faster [than what I used to, and than my peers, and than expected of me]. The result is that I can achieve more [than what I used to, and than my peers, and than expected of me]. Notice how the efficiency goal is a more portable one. If, by whatever criteria, you think you are the best at [insert your own skill here], this can change at two events: because you have new colleagues (who are potentially better than your older ones), and with a change of manager (who has potentially higher expectations). That's about it. Once you are efficient at something, you carry that with you...

    All you need to really be doing here is, when taking on new kinds of work that you haven't done before, try a few approaches and devise a system so that you can become efficient at this new activity too... Just keep "collecting" stuff that you are efficient at.

    If you think you are not being efficient at something, break it down: What are the steps you take to complete that task? How long do you spend on each step? Talk to others about what steps they take, to see if you can optimize some steps away, or trade them for better steps, or just learn how to complete a step faster. Have a system for every task you take on, so that you can have repeatable success.

    That's my view on efficiency. You have to fix it so that you can free up time to do more. When you plan a route from A to B - all else being equal - you try to get there as fast as possible, so why would you not want to do that with your everyday work? For example, imagine you are inefficient at processing email: you spend more time than necessary dealing with email, and you still end up with dropped email threads and with slower response times than others. How can you improve? Talk to someone that you think is good at this, understand their system (e.g. here is my email processing system) and come up with one that works for you.

    Parting Thoughts

    Are you considered, by your colleagues and manager, an effective and efficient person at your workplace? If you are, what would you change if you were asked by your management to do the job of two people? Seriously, think about that! Your immediate reaction may be "that is not possible", but it actually is. You just have to re-assess what things that were previously important will now stop being important, by discussing them with your management and reaching agreement on relative priorities. For example, stuff that was previously on your plate may now have to be delegated or dropped. Where you thought you were efficient, maybe now you have to find an even faster path to completion, perhaps keeping in mind that Perfect is the Enemy of "Good Enough".

    My personal experience (from both observing others and from my own reflection) is that when folks are struggling to keep up at work, it is because of two reasons:

    1. They are investing energy in stuff that they enjoy doing, which the business regards as having a lower priority than a lot of other things on their plate.
    2. They are completing tasks to a level of higher quality than what is required (due to personal pride), missing the big picture, which almost always mandates completing three tasks at good enough quality rather than knocking only one of them out of the park while the other two come in late or not at all.

    There is a lot of content on the web, so I strongly encourage you to use your favorite search engine to read other views on effectiveness and efficiency (Bing, Google). Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Can't save data for a member in a data form

    - by RahulS
Implied sharing is an old topic - everyone knows the reasons for it and the workarounds - but first a little theory: with Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations: 1. A parent has only one child. 2. A parent has only one child that consolidates to the parent. In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html Now the issue we are going to talk about: we lose data on save even when the parent is a dynamic calc member with a single child. A dynamic calc parent with a single child: if we design the form with such a selection, the parent appears below the child in the data form. This is by design - whenever you make a selection using commands that select all the members below a parent, the children always appear before the parent. Enter data and save it, then change the way the members were selected, and the value disappears. Why this behavior? 1. Data from a Planning data form passes to Essbase row by row. 2. Because in the data form the child member appears before the parent, 3. data first goes to Essbase for the child (SingleStoreChild). 4. Then, when Planning passes the data for the parent, that row carries #Missing (no data), 5. which overwrites the saved value with #Missing. PS: As we know, dynamic calc members are calculated on the fly and are not allocated any storage in Essbase; here the parent was dynamic calc and pointed to the same storage as the child behind the scenes, so when Planning passed the second row to Essbase it updated the child with missing data. (A little confusing - let me know if you need more explanation.) 6. As one solution, simply change the order of appearance of the parent and child. Cheers..!!! Rahul S. https://www.facebook.com/pages/HyperionPlanning/117320818374228
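To make step 5 concrete, here is a minimal Java analogy (this is not Essbase code, and the member and cell names are made up): because of the implied share, the dynamic calc parent and its single child are effectively two names for the same storage cell, so the parent row's #Missing arrives last and wins.

    import java.util.HashMap;
    import java.util.Map;

    // Analogy only: two member names resolving to one storage cell.
    public class ImpliedShareDemo {
        public static void main(String[] args) {
            Map<String, String> alias = new HashMap<>();   // member name -> storage cell
            alias.put("SingleStoreChild", "cell#1");
            alias.put("DynCalcParent", "cell#1");          // implied share: same cell

            Map<String, Double> cube = new HashMap<>();    // storage cell -> value

            // Planning sends the form row by row; the child row comes first...
            cube.put(alias.get("SingleStoreChild"), 100.0);
            // ...then the parent row arrives carrying #Missing (modeled as null)
            cube.put(alias.get("DynCalcParent"), null);    // wipes the child's 100.0

            System.out.println(cube.get("cell#1"));        // prints null (#Missing)
        }
    }

Reordering the rows so the parent is processed before the child (or simply selecting the members so the child comes last) avoids the overwrite, which is exactly what the workaround in point 6 does.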

    Read the article

  • Architecture for a template-building, WYSIWYG application

    - by Sam Selikoff
    I'm building a WYSIWYG designer in Ember.js. The designer will allow users to create campaigns - think MailChimp. To build a campaign, users will choose an existing template. The template will have a defined layout. The user will then be taken to the designer, where he will be able to edit the text and style, and additionally change some layout options. I've been thinking about how best to go about structuring this app, and there are a few hurdles. Specifically, the output of the campaign will be dynamic: eventually, it will be published somewhere, and when the consumers (not my users, but the people clicking on the campaign that my user created) visit the campaign, certain pieces of data will change, depending on the type of consumer viewing the campaign. That means the ultimate output of the designer will be a dynamic site. The data that is dynamic for this site - the end product - will not be manipulated by the user in the designer. However, the data that will be manipulated by the user in the designer are things like copy, styles, layout options, etc. I'll call the first set of variables server-side data, and the second client-side data. It seems, then, that the process will go something like this: I'll need to create templates for this designer that have two dynamic segments. For instance, the server-side data could be Liquid expressions, and the client-side data Handlebars expressions. When the user creates a campaign, I would compile the template on the back end using some dummy data for the server-side variables, and serve up a handlebars template to the Ember app. The user would then edit the template, and the Ember app would save all his edits to the JS variables that were powering the template. This way he'd be able to preview the template. When he saves, he'll send back the selected template, along with all the data and options he's made. When it comes time to publish, the back-end system will have to do two things: compile the template with Handlebars using the campaign data, and then compile the template with Liquid using the server-side data Is my thinking roughly accurate about this, or is there a simpler way?
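If it helps, here is a rough sketch of the two-pass idea in Java, with hypothetical delimiters standing in for the real Handlebars ({{...}}) and Liquid engines: the designer-save pass substitutes only the client-side tokens and deliberately leaves the server-side tokens intact for publish/view time.

    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Two-pass templating sketch; {%...%} marks the hypothetical server-side tokens.
    public class TwoPassTemplate {
        private static final Pattern CLIENT = Pattern.compile("\\{\\{(\\w+)\\}\\}");
        private static final Pattern SERVER = Pattern.compile("\\{%(\\w+)%\\}");

        static String render(String template, Pattern vars, Map<String, String> data) {
            Matcher m = vars.matcher(template);
            StringBuffer out = new StringBuffer();
            while (m.find()) {
                // Unknown tokens are left untouched for a later pass
                m.appendReplacement(out, Matcher.quoteReplacement(
                        data.getOrDefault(m.group(1), m.group(0))));
            }
            m.appendTail(out);
            return out.toString();
        }

        public static void main(String[] args) {
            String t = "<h1>{{headline}}</h1><p>Hello {%consumerName%}</p>";
            // Pass 1 (designer save): client-side data only
            String saved = render(t, CLIENT, Map.of("headline", "Spring Sale"));
            // Pass 2 (per consumer, at view time): server-side data
            String page = render(saved, SERVER, Map.of("consumerName", "Alice"));
            System.out.println(page); // <h1>Spring Sale</h1><p>Hello Alice</p>
        }
    }

The main design caveat is delimiter collision: Handlebars and Liquid both use {{...}} for output, so in practice you would either escape one set during the first compile or, as above, pick distinct delimiters for the server-side segments.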

    Read the article

  • APress Deal of the Day - 9/Nov/2011 - Pro ASP.NET 4 in C# 2010

    - by TATWORTH
Today's $10 Deal of the Day from APress at http://www.apress.com/9781430225294 is "Pro ASP.NET 4 in C# 2010". "ASP.NET 4 is the principal standard for creating dynamic web pages on the Windows platform. Pro ASP.NET 4 in C# 2010 raises the bar for high-quality, practical advice on learning and deploying Microsoft's dynamic web solution." Adam Freeman is an excellent author - I recommend this book to all C# development teams.

    Read the article

  • creating objects from trivial graph format text file. java. dijkstra algorithm.

    - by user560084
I want to create objects, Vertex and Edge, from a Trivial Graph Format (TGF) text file. One of the programmers here suggested that I use the Trivial Graph Format to store the data for the Dijkstra algorithm. The problem is that at the moment all the information, e.g. weights and links, is in the source code. I want to have a separate text file for that and read it into the program. I thought about scanning through the text file using Scanner, but I am not quite sure how to create different objects from the same file. Could I have some help please? The file is:

    v0 Harrisburg
    v1 Baltimore
    v2 Washington
    v3 Philadelphia
    v4 Binghamton
    v5 Allentown
    v6 New York
    #
    v0 v1 79.83
    v0 v5 81.15
    v1 v0 79.75
    v1 v2 39.42
    v1 v3 103.00
    v2 v1 38.65
    v3 v1 102.53
    v3 v5 61.44
    v3 v6 96.79
    v4 v5 133.04
    v5 v0 81.77
    v5 v3 62.05
    v5 v4 134.47
    v5 v6 91.63
    v6 v3 97.24
    v6 v5 87.94

and the Dijkstra algorithm code is:

    /* Downloaded from: http://en.literateprograms.org/Special:Downloadcode/Dijkstra%27s_algorithm_%28Java%29 */
    import java.util.PriorityQueue;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Collections;

    class Vertex implements Comparable<Vertex> {
        public final String name;
        public Edge[] adjacencies;
        public double minDistance = Double.POSITIVE_INFINITY;
        public Vertex previous;

        public Vertex(String argName) { name = argName; }
        public String toString() { return name; }
        public int compareTo(Vertex other) {
            return Double.compare(minDistance, other.minDistance);
        }
    }

    class Edge {
        public final Vertex target;
        public final double weight;

        public Edge(Vertex argTarget, double argWeight) {
            target = argTarget;
            weight = argWeight;
        }
    }

    public class Dijkstra {
        public static void computePaths(Vertex source) {
            source.minDistance = 0.;
            PriorityQueue<Vertex> vertexQueue = new PriorityQueue<Vertex>();
            vertexQueue.add(source);
            while (!vertexQueue.isEmpty()) {
                Vertex u = vertexQueue.poll();
                // Visit each edge exiting u
                for (Edge e : u.adjacencies) {
                    Vertex v = e.target;
                    double weight = e.weight;
                    double distanceThroughU = u.minDistance + weight;
                    if (distanceThroughU < v.minDistance) {
                        vertexQueue.remove(v);
                        v.minDistance = distanceThroughU;
                        v.previous = u;
                        vertexQueue.add(v);
                    }
                }
            }
        }

        public static List<Vertex> getShortestPathTo(Vertex target) {
            List<Vertex> path = new ArrayList<Vertex>();
            for (Vertex vertex = target; vertex != null; vertex = vertex.previous)
                path.add(vertex);
            Collections.reverse(path);
            return path;
        }

        public static void main(String[] args) {
            Vertex v0 = new Vertex("Nottinghill_Gate");
            Vertex v1 = new Vertex("High_Street_kensignton");
            Vertex v2 = new Vertex("Glouchester_Road");
            Vertex v3 = new Vertex("South_Kensignton");
            Vertex v4 = new Vertex("Sloane_Square");
            Vertex v5 = new Vertex("Victoria");
            Vertex v6 = new Vertex("Westminster");
            v0.adjacencies = new Edge[]{ new Edge(v1, 79.83), new Edge(v6, 97.24) };
            v1.adjacencies = new Edge[]{ new Edge(v2, 39.42), new Edge(v0, 79.83) };
            v2.adjacencies = new Edge[]{ new Edge(v3, 38.65), new Edge(v1, 39.42) };
            v3.adjacencies = new Edge[]{ new Edge(v4, 102.53), new Edge(v2, 38.65) };
            v4.adjacencies = new Edge[]{ new Edge(v5, 133.04), new Edge(v3, 102.53) };
            v5.adjacencies = new Edge[]{ new Edge(v6, 81.77), new Edge(v4, 133.04) };
            v6.adjacencies = new Edge[]{ new Edge(v0, 97.24), new Edge(v5, 81.77) };
            Vertex[] vertices = { v0, v1, v2, v3, v4, v5, v6 };
            computePaths(v0);
            for (Vertex v : vertices) {
                System.out.println("Distance to " + v + ": " + v.minDistance);
                List<Vertex> path = getShortestPathTo(v);
                System.out.println("Path: " + path);
            }
        }
    }

and the code for scanning the file is:

    import java.util.Scanner;
    import java.io.File;
    import java.io.FileNotFoundException;

    public class DataScanner1 {

        // private int total = 0;
        // private int distance = 0;
        private String vector;
        private String stations;
        private double[] edges = new double[0]; // was "double [] Edge = new double [];", which does not compile

        /*
        public int getTotal() { return total; }
        */

        /*
        public void getMenuInput() {
            KeyboardInput in = new KeyboardInput;
            System.out.println("Enter the destination? ");
            String val = in.readString();
            return val;
        }
        */

        public void readFile(String fileName) {
            try {
                Scanner scanner = new Scanner(new File(fileName));
                scanner.useDelimiter(System.getProperty("line.separator"));
                while (scanner.hasNext()) {
                    parseLine(scanner.next());
                }
                scanner.close();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }
        }

        public void parseLine(String line) {
            Scanner lineScanner = new Scanner(line);
            lineScanner.useDelimiter("\\s*,\\s*"); // note: the TGF file above is space-separated, not comma-separated
            vector = lineScanner.next();
            stations = lineScanner.next();
            System.out.println("The current station is " + vector
                    + " and the destination to the next station is " + stations + ".");
            // total += distance;
            // System.out.println("The total distance is " + total);
        }

        public static void main(String[] args) {
            /*
            if (args.length != 1) {
                System.err.println("usage: java TextScanner2" + "file location");
                System.exit(0);
            }
            */
            DataScanner1 scanner = new DataScanner1();
            scanner.readFile(args[0]);
            // int total =+ distance;
            // System.out.println("");
            // System.out.println("The total distance is " + scanner.getTotal());
        }
    }
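As a starting point, here is a minimal sketch of a reader for that file. It assumes the Vertex and Edge classes from the Dijkstra code above, and that vertex names may contain spaces (like "New York"): phase 1 reads the "id name" lines until the '#' separator, phase 2 reads the "from to weight" lines, and at the end each vertex's adjacencies array is filled in.

    import java.io.File;
    import java.io.FileNotFoundException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Scanner;

    public class TgfReader {
        public static Map<String, Vertex> read(String fileName) throws FileNotFoundException {
            Map<String, Vertex> vertices = new LinkedHashMap<>();   // id -> Vertex
            Map<String, List<Edge>> adj = new LinkedHashMap<>();    // id -> outgoing edges

            try (Scanner in = new Scanner(new File(fileName))) {
                boolean inEdges = false;
                while (in.hasNextLine()) {
                    String line = in.nextLine().trim();
                    if (line.isEmpty()) continue;
                    if (line.equals("#")) { inEdges = true; continue; }
                    if (!inEdges) {
                        // e.g. "v6 New York" -> id, then the (possibly multi-word) name
                        String[] parts = line.split("\\s+", 2);
                        vertices.put(parts[0], new Vertex(parts[1]));
                        adj.put(parts[0], new ArrayList<Edge>());
                    } else {
                        // e.g. "v0 v1 79.83" -> an Edge from v0 to v1 with weight 79.83
                        String[] parts = line.split("\\s+");
                        adj.get(parts[0]).add(new Edge(vertices.get(parts[1]),
                                                       Double.parseDouble(parts[2])));
                    }
                }
            }
            for (Map.Entry<String, List<Edge>> e : adj.entrySet()) {
                vertices.get(e.getKey()).adjacencies = e.getValue().toArray(new Edge[0]);
            }
            return vertices;
        }

        public static void main(String[] args) throws FileNotFoundException {
            Map<String, Vertex> g = read(args[0]);   // e.g. java TgfReader graph.tgf
            Dijkstra.computePaths(g.get("v0"));
            for (Vertex v : g.values()) {
                System.out.println("Distance to " + v + ": " + v.minDistance);
                System.out.println("Path: " + Dijkstra.getShortestPathTo(v));
            }
        }
    }

With that in place, the hard-coded vertices in Dijkstra.main can be replaced by a single read("graph.tgf") call and the rest of the program stays unchanged.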

    Read the article

  • I am having a problem of class cast exception. Can anyone please help me out?

    - by Piyush
This is my code:

    package com.example.userpage;

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.view.View;
    import android.widget.Button;
    import android.widget.EditText;
    import android.widget.TextView;

    public class UserPage extends Activity {
        String tv, tv1;
        EditText name, pass;
        TextView x, y;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            Button button = (Button) findViewById(R.id.widget44);
            button.setOnClickListener(new View.OnClickListener() {
                public void onClick(View v) {
                    name.setText(" ");
                    pass.setText(" ");
                }
            });

            x = (TextView) findViewById(R.id.widget46);
            y = (TextView) findViewById(R.id.widget47);
            name = (EditText) findViewById(R.id.widget41);
            pass = (EditText) findViewById(R.id.widget42);

            Button button1 = (Button) findViewById(R.id.widget45);
            button1.setOnClickListener(new View.OnClickListener() {
                public void onClick(View v) {
                    tv = name.getText().toString();
                    tv1 = pass.getText().toString();
                    x.setText(tv);
                    y.setText(tv1);
                }
            });
        }
    }

And this is my log cat: 02-16 12:24:30.488: DEBUG/AndroidRuntime(973): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:24:30.488: DEBUG/AndroidRuntime(973): CheckJNI is ON 02-16 12:24:31.208: DEBUG/AndroidRuntime(973): --- registering native functions --- 02-16 12:24:33.498: DEBUG/AndroidRuntime(973): Shutting down VM 02-16 12:24:33.537: DEBUG/dalvikvm(973): Debugger has detached; object registry had 1 entries 02-16 12:24:33.537: INFO/AndroidRuntime(973): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:24:34.917: DEBUG/AndroidRuntime(981): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:24:34.927: DEBUG/AndroidRuntime(981): CheckJNI is ON 02-16 12:24:35.617: DEBUG/AndroidRuntime(981): --- registering native functions --- 02-16 12:24:38.029: INFO/ActivityManager(67): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.example.userpage/.UserPage } 02-16 12:24:38.129: DEBUG/AndroidRuntime(981): Shutting down VM 02-16 12:24:38.160: DEBUG/dalvikvm(981): Debugger has detached; object registry had 1 entries 02-16 12:24:38.168: INFO/AndroidRuntime(981): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:25:12.028: DEBUG/AndroidRuntime(990): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:25:12.038: DEBUG/AndroidRuntime(990): CheckJNI is ON 02-16 12:25:12.708: DEBUG/AndroidRuntime(990): --- registering native functions --- 02-16 12:25:15.178: DEBUG/dalvikvm(176): GC_EXPLICIT freed 114 objects / 5880 bytes in 115ms 02-16 12:25:15.318: DEBUG/PackageParser(67): Scanning package: /data/app/vmdl25170.tmp 02-16 12:25:15.588: INFO/PackageManager(67): Removing non-system package:com.example.userpage 02-16 12:25:15.597: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:25:15.648: INFO/Process(67): Sending signal. 
PID: 916 SIG: 9 02-16 12:25:15.877: INFO/UsageStats(67): Unexpected resume of com.android.launcher while already resumed in com.example.userpage 02-16 12:25:17.028: WARN/InputManagerService(67): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@4400ecf8 02-16 12:25:17.928: DEBUG/PackageManager(67): Scanning package com.example.userpage 02-16 12:25:17.949: INFO/PackageManager(67): Package com.example.userpage codePath changed from /data/app/com.example.userpage-1.apk to /data/app/com.example.userpage-2.apk; Retaining data and using new 02-16 12:25:17.987: INFO/PackageManager(67): /data/app/com.example.userpage-2.apk changed; unpacking 02-16 12:25:18.037: DEBUG/installd(35): DexInv: --- BEGIN '/data/app/com.example.userpage-2.apk' --- 02-16 12:25:18.737: DEBUG/dalvikvm(997): DexOpt: load 81ms, verify 112ms, opt 6ms 02-16 12:25:18.768: DEBUG/installd(35): DexInv: --- END '/data/app/com.example.userpage-2.apk' (success) --- 02-16 12:25:18.799: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:25:18.808: WARN/PackageManager(67): Code path for pkg : com.example.userpage changing from /data/app/com.example.userpage-1.apk to /data/app/com.example.userpage-2.apk 02-16 12:25:18.839: WARN/PackageManager(67): Resource path for pkg : com.example.userpage changing from /data/app/com.example.userpage-1.apk to /data/app/com.example.userpage-2.apk 02-16 12:25:18.868: DEBUG/PackageManager(67): Activities: com.example.userpage.UserPage 02-16 12:25:19.297: INFO/installd(35): move /data/dalvik-cache/data@[email protected]@classes.dex -> /data/dalvik-cache/data@[email protected]@classes.dex 02-16 12:25:19.297: DEBUG/PackageManager(67): New package installed in /data/app/com.example.userpage-2.apk 02-16 12:25:19.598: DEBUG/dalvikvm(67): GC_FOR_MALLOC freed 7979 objects / 516856 bytes in 246ms 02-16 12:25:20.498: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:25:20.708: DEBUG/dalvikvm(129): GC_EXPLICIT freed 124 objects / 5672 bytes in 157ms 02-16 12:25:21.838: DEBUG/dalvikvm(67): GC_EXPLICIT freed 4208 objects / 236264 bytes in 419ms 02-16 12:25:21.918: WARN/RecognitionManagerService(67): no available voice recognition services found 02-16 12:25:22.127: INFO/installd(35): unlink /data/dalvik-cache/data@[email protected]@classes.dex 02-16 12:25:22.478: DEBUG/AndroidRuntime(990): Shutting down VM 02-16 12:25:22.488: DEBUG/dalvikvm(990): Debugger has detached; object registry had 1 entries 02-16 12:25:22.588: INFO/AndroidRuntime(990): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:25:24.137: DEBUG/AndroidRuntime(1003): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:25:24.147: DEBUG/AndroidRuntime(1003): CheckJNI is ON 02-16 12:25:24.817: DEBUG/AndroidRuntime(1003): --- registering native functions --- 02-16 12:25:27.450: INFO/ActivityManager(67): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.example.userpage/.UserPage } 02-16 12:25:27.628: DEBUG/AndroidRuntime(1003): Shutting down VM 02-16 12:25:27.780: INFO/AndroidRuntime(1003): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:25:28.018: DEBUG/dalvikvm(1003): Debugger has detached; object registry had 1 entries 02-16 12:25:28.148: INFO/ActivityManager(67): Start proc com.example.userpage for activity com.example.userpage/.UserPage: pid=1010 uid=10036 gids={} 02-16 12:25:30.308: DEBUG/AndroidRuntime(1010): Shutting down VM 
02-16 12:25:30.308: WARN/dalvikvm(1010): threadid=1: thread exiting with uncaught exception (group=0x4001d800) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): FATAL EXCEPTION: main 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.userpage/com.example.userpage.UserPage}: java.lang.ClassCastException: android.widget.TextView 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.access$2300(ActivityThread.java:125) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.os.Handler.dispatchMessage(Handler.java:99) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.os.Looper.loop(Looper.java:123) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.main(ActivityThread.java:4627) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at java.lang.reflect.Method.invokeNative(Native Method) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at java.lang.reflect.Method.invoke(Method.java:521) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at dalvik.system.NativeStart.main(Native Method) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): Caused by: java.lang.ClassCastException: android.widget.TextView 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at com.example.userpage.UserPage.onCreate(UserPage.java:35) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): ... 11 more 02-16 12:25:30.438: WARN/ActivityManager(67): Force finishing activity com.example.userpage/.UserPage 02-16 12:25:31.088: WARN/ActivityManager(67): Activity pause timeout for HistoryRecord{43f164f8 com.example.userpage/.UserPage} 02-16 12:25:32.588: DEBUG/dalvikvm(292): GC_EXPLICIT freed 46 objects / 2240 bytes in 6282ms 02-16 12:25:35.267: INFO/Process(1010): Sending signal. PID: 1010 SIG: 9 02-16 12:25:35.468: WARN/InputManagerService(67): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43e60a90 02-16 12:25:35.900: INFO/ActivityManager(67): Process com.example.userpage (pid 1010) has died. 
02-16 12:25:38.278: DEBUG/dalvikvm(176): GC_EXPLICIT freed 172 objects / 12280 bytes in 127ms 02-16 12:25:43.011: WARN/ActivityManager(67): Activity destroy timeout for HistoryRecord{43f164f8 com.example.userpage/.UserPage} 02-16 12:28:12.698: DEBUG/AndroidRuntime(1019): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:28:12.711: DEBUG/AndroidRuntime(1019): CheckJNI is ON 02-16 12:28:13.367: DEBUG/AndroidRuntime(1019): --- registering native functions --- 02-16 12:28:15.998: DEBUG/dalvikvm(176): GC_EXPLICIT freed 114 objects / 5888 bytes in 183ms 02-16 12:28:16.539: DEBUG/PackageParser(67): Scanning package: /data/app/vmdl25171.tmp 02-16 12:28:16.867: INFO/PackageManager(67): Removing non-system package:com.example.userpage 02-16 12:28:16.867: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:28:17.277: DEBUG/PackageManager(67): Scanning package com.example.userpage 02-16 12:28:17.308: INFO/PackageManager(67): Package com.example.userpage codePath changed from /data/app/com.example.userpage-2.apk to /data/app/com.example.userpage-1.apk; Retaining data and using new 02-16 12:28:17.328: INFO/PackageManager(67): /data/app/com.example.userpage-1.apk changed; unpacking 02-16 12:28:17.367: DEBUG/installd(35): DexInv: --- BEGIN '/data/app/com.example.userpage-1.apk' --- 02-16 12:28:18.357: DEBUG/dalvikvm(1026): DexOpt: load 85ms, verify 114ms, opt 6ms 02-16 12:28:18.398: DEBUG/installd(35): DexInv: --- END '/data/app/com.example.userpage-1.apk' (success) --- 02-16 12:28:18.428: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:28:18.438: WARN/PackageManager(67): Code path for pkg : com.example.userpage changing from /data/app/com.example.userpage-2.apk to /data/app/com.example.userpage-1.apk 02-16 12:28:18.477: WARN/PackageManager(67): Resource path for pkg : com.example.userpage changing from /data/app/com.example.userpage-2.apk to /data/app/com.example.userpage-1.apk 02-16 12:28:18.477: DEBUG/PackageManager(67): Activities: com.example.userpage.UserPage 02-16 12:28:18.977: INFO/installd(35): move /data/dalvik-cache/data@[email protected]@classes.dex -> /data/dalvik-cache/data@[email protected]@classes.dex 02-16 12:28:18.988: DEBUG/PackageManager(67): New package installed in /data/app/com.example.userpage-1.apk 02-16 12:28:19.528: DEBUG/dalvikvm(67): GC_FOR_MALLOC freed 6733 objects / 459728 bytes in 211ms 02-16 12:28:20.138: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:28:20.368: DEBUG/dalvikvm(129): GC_EXPLICIT freed 892 objects / 48744 bytes in 175ms 02-16 12:28:21.317: WARN/RecognitionManagerService(67): no available voice recognition services found 02-16 12:28:22.827: DEBUG/dalvikvm(67): GC_EXPLICIT freed 3877 objects / 241128 bytes in 452ms 02-16 12:28:22.979: INFO/installd(35): unlink /data/dalvik-cache/data@[email protected]@classes.dex 02-16 12:28:23.277: DEBUG/AndroidRuntime(1019): Shutting down VM 02-16 12:28:23.307: DEBUG/dalvikvm(1019): Debugger has detached; object registry had 1 entries 02-16 12:28:23.328: INFO/AndroidRuntime(1019): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:28:24.989: DEBUG/AndroidRuntime(1032): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:28:24.989: DEBUG/AndroidRuntime(1032): CheckJNI is ON 02-16 12:28:25.888: DEBUG/AndroidRuntime(1032): --- registering native functions --- 02-16 12:28:28.588: INFO/ActivityManager(67): Starting activity: Intent { act=android.intent.action.MAIN 
cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.example.userpage/.UserPage } 02-16 12:28:28.888: DEBUG/AndroidRuntime(1032): Shutting down VM 02-16 12:28:28.997: DEBUG/dalvikvm(1032): Debugger has detached; object registry had 1 entries 02-16 12:28:29.038: INFO/AndroidRuntime(1032): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:28:30.417: INFO/ActivityManager(67): Start proc com.example.userpage for activity com.example.userpage/.UserPage: pid=1039 uid=10036 gids={} 02-16 12:28:32.588: DEBUG/AndroidRuntime(1039): Shutting down VM 02-16 12:28:32.598: WARN/dalvikvm(1039): threadid=1: thread exiting with uncaught exception (group=0x4001d800) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): FATAL EXCEPTION: main 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.userpage/com.example.userpage.UserPage}: java.lang.ClassCastException: android.widget.TextView 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.access$2300(ActivityThread.java:125) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.os.Handler.dispatchMessage(Handler.java:99) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.os.Looper.loop(Looper.java:123) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.main(ActivityThread.java:4627) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at java.lang.reflect.Method.invokeNative(Native Method) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at java.lang.reflect.Method.invoke(Method.java:521) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at dalvik.system.NativeStart.main(Native Method) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): Caused by: java.lang.ClassCastException: android.widget.TextView 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at com.example.userpage.UserPage.onCreate(UserPage.java:34) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): ... 11 more 02-16 12:28:32.698: WARN/ActivityManager(67): Force finishing activity com.example.userpage/.UserPage 02-16 12:28:32.967: DEBUG/dalvikvm(292): GC_EXPLICIT freed 46 objects / 2240 bytes in 6840ms 02-16 12:28:33.247: WARN/ActivityManager(67): Activity pause timeout for HistoryRecord{43ee7b70 com.example.userpage/.UserPage} 02-16 12:28:36.947: INFO/Process(1039): Sending signal. PID: 1039 SIG: 9 02-16 12:28:37.017: INFO/ActivityManager(67): Process com.example.userpage (pid 1039) has died. 
02-16 12:28:37.128: WARN/InputManagerService(67): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43e872f8 02-16 12:28:42.087: DEBUG/dalvikvm(176): GC_EXPLICIT freed 156 objects / 11488 bytes in 145ms 02-16 12:28:45.391: WARN/ActivityManager(67): Activity destroy timeout for HistoryRecord{43ee7b70 com.example.userpage/.UserPage} 02-16 12:28:47.177: DEBUG/SntpClient(67): request time failed: java.net.SocketException: Address family not supported by protocol
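For what it's worth, the trace itself narrows this down: "Caused by: java.lang.ClassCastException: android.widget.TextView ... at com.example.userpage.UserPage.onCreate(UserPage.java:34)" points at the (TextView) findViewById(R.id.widget46) / findViewById(R.id.widget47) casts. The usual causes are that the views declared with those ids in res/layout/main.xml are not actually TextViews, or that a stale R.java (fixed with a clean/rebuild of the project) maps the ids to the wrong views. A hedged diagnostic sketch, to be placed inside onCreate() after setContentView() (it additionally assumes android.util.Log is imported; android.view.View already is):

    View w46 = findViewById(R.id.widget46);
    if (w46 instanceof TextView) {
        x = (TextView) w46;   // safe: the layout really declares a TextView here
    } else {
        // Logs the actual widget type so the mismatch in main.xml is obvious
        Log.e("UserPage", "R.id.widget46 is a " + w46.getClass().getSimpleName()
                + ", not a TextView - check main.xml");
    }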

    Read the article

  • Oracle OpenWorld Preview: Real World Perspectives from Oracle WebCenter Customers

    - by Christie Flanagan
If you frequent the Oracle WebCenter blog you’ve probably read a lot about the customer experience revolution over the last few months.  An important aspect of the customer experience revolution is the increasing role that peers play in influencing how others perceive a product, brand or solution, simply by sharing their own, real-world experiences.  Think about it, who do you trust more -- marketers and sales people pitching polished messages or peers with similar roles and similar challenges to the ones you face in your business every day? With this spirit in mind, this polished marketer personally invites you to hear directly from Oracle WebCenter customers about their real-life experiences during our customer panel sessions at Oracle OpenWorld next week.  If you’re currently using WebCenter, thinking about it, or just want to find out more about best practices in social business, next-generation portals, enterprise content management or web experience management, be sure to attend these sessions:

CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
Wednesday, Oct 3, 11:45 AM - 12:45 PM - Moscone West - 3000
Priscilla Hancock - Vice President/CIO, University of Louisville
Kellie Christensen - Director of Information Technology, Banner Engineering
What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. This session will take a thought-provoking look at becoming a social business from the inside.

CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
Wednesday, Oct 3, 5:00 PM - 6:00 PM - Moscone West - 3000
Roberts Wayne - Director, IT, Canadian Partnership Against Cancer
Mike Beattie - VP Application Development, Aramark Uniform Services
John Chen - Utilities Services Manager 6, Los Angeles Department of Water & Power
Jörg Modlmayr - Head of Product Management, Siemens AG
Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today’s market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
Thursday, Oct 4, 12:45 PM - 1:45 PM - Moscone West - 3001
Stephen Madsen - Senior Management Consultant, Alberta Agriculture and Rural Development
Himanshu Parikh - Sr. Director, Enterprise Architecture & Middleware, Ross Stores, Inc.
Ten years ago, people were predicting that by this time in history, we’d be some kind of utopian paperless society. As we all know, we're not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you’ll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

CON8897 - Using Web Experience Management to Drive Online Marketing Success
Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West - 3001
Blane Nelson - Chief Architect, Ancestry.com
Mike Remedios - CIO, Arbonne
Caitlin Scanlon - Product Manager, Monster Worldwide
Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from customers on how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

Your Handy Guide to WebCenter at Oracle OpenWorld
Want a quick and easy guide to all the keynotes, demos, hands-on labs and WebCenter sessions you definitely don't want to miss at Oracle OpenWorld? Download this handy guide, Focus on WebCenter.

More helpful links:
* Oracle OpenWorld
* Oracle Customer Experience Summit @ OpenWorld
* Oracle OpenWorld on Facebook
* Oracle OpenWorld on Twitter
* Oracle OpenWorld on LinkedIn
* Oracle OpenWorld Blog

    Read the article

  • EBS Techstack Sessions at OAUG/Collaborate 2010

    - by Steven Chan
We have a large contingent of E-Business Suite Applications Technology Group staff rolling out to the OAUG/Collaborate 2010 conference in Las Vegas next week.  Our Applications Technology Group staff will be appearing as guest speakers or full speakers at the following E-Business Suite technology stack related sessions:

Database Special Interest Group
Sunday, April 18, 11:00 AM, Breakers F
SIG Leaders: Michael Brown, Colibri; Sandra Vucinic, Vlad Group
Guest Speaker: Steven Chan
Covering upcoming and past database desupport dates, and database support policies as they apply to E-Business Suite environments; general Q&A

E-Business Suite Technology Stack Special Interest Group
Sunday, April 18, 3:00 PM, Breakers F
SIG Leaders: Elke Phelps, Paul Jackson, Humana
Guest Speaker: Steven Chan
Covering the latest EBS technology stack certifications, roadmap, desupport notices, upgrade options for Discoverer, OID, SSO, Portal; general Q&A

E-Business Suite Applications Technology Roadmap & Vision
Monday, April 19, 8:00 AM, South Seas G
Oracle Speaker: Uma Prabhala
Latest developments for SOA, AOL, OAF, Web ADI, SES, AMP, ACMP, security, and other technologies

Oracle E-Business Suite Applications Strategy and General Manager Update
Monday, April 19, 2:30 PM, Mandalay Bay Ballroom D
Oracle Speaker: Cliff Godwin
Update on the entire Oracle E-Business Suite product line. The session covers the value delivered by the current release of Oracle E-Business Suite applications, the momentum, and how Oracle E-Business Suite applications integrate into Oracle's overall applications strategy

10 Things You Can Do Today to Prepare for the Next Generation Applications
Tuesday, April 20, 8:00 AM, South Seas F
Oracle Speaker: Nadia Bendjedou
"Common sense" and "practical" steps that can be taken today to increase the value of your Oracle Applications (E-Business Suite, PeopleSoft, Siebel, and JDE) investments by using the latest Oracle solutions and technologies

Reducing TCO using Oracle E-Business Suite Management Packs
Tuesday, April 20, 10:30 AM, South Seas E
Oracle Speaker: Angelo Rosado
Learn how you can reduce the Total Cost of Ownership by implementing Application Management Pack (AMP) and Application Change Management Pack (ACP) for E-Business Suite 11i, R12, and R12.1. AMP is Oracle's next-generation system manageability product offering that provides a centralized platform to manage and maintain EBS. ACP is Oracle's offering to monitor and manage E-Business Suite changes in the areas of E-Business Suite customizations, patches, and functional setups.

E-Business Suite Upgrade Special Interest Group
Tuesday, April 20, 3:15 PM, South Seas E
SIG Leaders: John Stouffer; Sandra Vucinic, Vlad Group
Guest Speaker: Steven Chan
Participating in general Q&A

E-Business Suite Technology Essentials: Using the Latest Oracle Technologies with E-Business Suite
Wednesday, April 21, 8:00 AM, South Seas H
Oracle Speaker: Lisa Parekh
Oracle continues to build new functionality into the Oracle Database, Fusion Middleware, and Enterprise Manager. Come see how you can enhance the value of E-Business Suite for your users and lower your costs of ownership by utilizing the latest features of these Oracle technologies with E-Business Suite. Learn about the latest advanced E-Business Suite topologies and features, including new options for security, performance, third-party integration, SOA, virtualization, clouds, systems management, and much more

How to Leverage the New E-Business Suite R12.1 Solutions Without Upgrading your 11.5.10 Environment
Wednesday, April 21, 10:30 AM, South Seas E
Oracle Speaker: Nadia Bendjedou
Learn how you can use the latest E-Business Suite 12.1 standalone solutions without upgrading from your E-Business Suite 11.5.10 environment

Advanced Technology Deployment Architectures for E-Business Suite
Wednesday, April 21, 2:15 PM, South Seas E
Oracle Speaker: Steven Chan
Learn how to take advantage of the latest version of Oracle Fusion Middleware with Oracle E-Business Suite. Learn how to utilize identity management systems and LDAP directories. In addition, come to this session for answers about advanced network deployments involving reverse proxy servers, load balancers, and DMZs, and to see how you can benefit from virtualization and new system management capabilities.

Web 2.0 User Experience and Oracle Fusion Middleware Integration with Oracle E-Business Suite
Wednesday, April 21, 4:00 PM, South Seas F
Oracle Speaker: Padmaprabodh Ambale
See the next-generation Oracle E-Business Suite OA Framework improvements that will provide new rich interactions in components such as LOV, Tables and Attachments. See new components like the Rich Container that allows any Web 2.0 content like Flash or OBIEE to be embedded in OA Framework pages.

Upgrading to Oracle E-Business Suite 12.1 - Best Practices
Thursday, April 22, 11:00 AM, South Seas E
Oracle Speakers: Lester Gutierrez, Udayan Parvate
Fundamentals of upgrading to Release 12.1, including the technology stack components and differences, the upgrade path from various releases of Oracle E-Business Suite, upgrade steps, monitoring the upgrade, hints and tips for minimizing downtime, and upgrade best practices for making the upgrade to Release 12.1 a success.

We look forward to seeing you there!

    Read the article

  • Oracle Access Manager 10gR3 Certified with E-Business Suite

    - by Keith M. Swartz
    Oracle Access Manager 10gR3 (10.1.4.3) is now certified for use with E-Business Suite Releases 11.5.10 and 12.1, using the new component, Oracle E-Business Suite AccessGate. For information on how to obtain, install, and configure this new component, see:Integrating Oracle E-Business Suite with Oracle Access Manager using Oracle E-Business Suite AccessGate (Note 975182.1) About Oracle Access Manager Oracle Access Manager is Oracle's next-generation identity and access management platform, and is a key component in Oracle's Fusion Middleware Identity Management solution. It provides a set of authentication and authorization features, including support for single sign-on authentication, and integration with other identity management offerings such as Oracle Identity Federation and Oracle Adaptive Access Manager.

    Read the article

  • Gartner PCC Follow-up: Interview with Chaeny Emanavin, Usability Lead - Office of Information Develo

    - by [email protected]
Last week at the Gartner Portals, Content and Collaboration conference in Baltimore, Chaeny and I co-presented on Oracle Enterprise 2.0 and BIA's Citizen Portal. Chaeny's presentation about the BIA solution was very well received, and I wanted to do a follow-up interview with Chaeny to discuss more details about their solution and its Enterprise 2.0 features.

Ajay: What were the main objectives for the BIA Citizen Portal?
Chaeny: The BIA Citizen Portal is designed to provide all the services of the Bureau of Indian Affairs to the community of 564 federally recognized tribes that include over 1.9 million American Indians and Alaska Natives. The BIA provides the same breadth of services that the entire U.S. Federal Government provides, in one small Bureau. So, we needed a solution that was flexible enough to handle content ranging from law enforcement to housing to education. A key objective for external users was to use the Web as a communications channel and keep them informed about what services are available. We also wanted to build an internal web presence and community for BIA's 5000 employees to ensure that they update their content, leverage internal experts and create single sources of truth for key policy documents.

Ajay: How is the project being implemented?
Chaeny: We are using a phased approach. In phases 1 & 2, interim internal and external sites were built to ensure usability and functional requirements were being met. In phases 3 & 4, we built out a modern internal and external presence using Oracle WebCenter Suite and Oracle Universal Content Management (UCM), including enabling delegated content management for our internal business units. Phase 4 was completed in January 2010. Phase 5 will add deeper Enterprise 2.0 collaboration capabilities to the solution.

Ajay: Are you integrating any existing sites into the new solution?
Chaeny: Yes, we have a SharePoint implementation that we are using for document management. We needed more precise functionality, however. We found that SharePoint would let individual administrators of a SharePoint site actually create new sites. In a three-month span, we had over 200 new sites created, and most were not being used. So, we had an enormous sprawl problem. Our requirements mandated increased governance, more granular control over the creation of sites, and flexible user access to content. In SharePoint this required custom code and was very time-intensive, which was infeasible given our tight deadlines. We are piloting Oracle WebCenter Spaces as our collaboration solution to mitigate these issues. However, we must integrate our existing SharePoint investment, which we can do easily by using the SharePoint connectors available in Oracle WebCenter and UCM.

Ajay: What were the key design parameters for your solution?
Chaeny: We wanted everything driven by standards and policies. We created a cross-functional steering group called the Indian Affairs Web Council to codify policies that were baked into the system. Other key design areas were focused on security/governance, self-service content management, ease of use, integration with legacy applications and seamless single sign-on. We are using Dublin Core as our metadata standard. We are also using Java, APEX, and ADF as our development standards.

Ajay: Why was it important to standardize on a platform?
Chaeny: We initially looked at best-of-breed solutions, but we faced a lot of issues getting the different solutions to work together. Going with an integrated solution was more economical, easier to learn, and faster to deliver.

Ajay: What type of legacy applications are you integrating into the portal?
Chaeny: Initially we are starting with administrative apps such as the people directory and user admin, and then we will integrate HR and Financial applications, among others.

Ajay: Can you describe some of the Enterprise 2.0 collaboration features you are putting into the solution?
Chaeny: We are adding Enterprise 2.0 capabilities using Oracle WebCenter Spaces to deliver different collaboration tools such as wikis, blogs and discussion forums: wikis to create rapid, ad hoc monthly roll-up reports; discussion forums to provide context-specific help; and blogs to capture tacit organizational knowledge from experts, identify gurus and turn tacit knowledge into explicit knowledge.

Ajay: Are you doing anything specifically to spur adoption and usage?
Chaeny: Yes, we did several things that I think helped us ramp quickly. First, we met our commitments for the new system launch date and also provided extra resources for a customer support "hotline" during the launch period. Prior to launch, we did exhaustive usability studies to capture user requirements around functionality, navigation and other key interaction areas. We also created extensive training programs so that the content managers in each business unit were comfortable using the content management tools and knew the best practices for usage. Finally, to launch the Enterprise 2.0 collaboration capabilities, we are working with a pilot group from the Division of Forestry and Wildland Fire Management of BIA. This group has been a willing early adopter in the past, and they have a strong business need to collaborate with many agencies, both internal and external, across State, County and other Federal jurisdictions. Their feedback is key to helping us launch Enterprise 2.0 successfully in our broader organization.

Ajay: What were the biggest benefits to internal BIA employees and to the external community of users?
Chaeny: For our employees, the new Enterprise 2.0-based solution will make it easier to find information, enhance employee productivity by embedding standard business processes into the system, and create more of a community by creating connections with experts via social collaboration, ultimately providing better services more quickly. For the external American Indian and Alaska Native communities, we have a better relationship with the users, and the new site has improved BIA's perception as a more responsive and customer-centric organization.

    Read the article

  • DNASTREAM’s RapidLaunch Oracle Accelerate solution for RightNow

    - by Richard Lefebvre
The Oracle RightNow Accelerate solution from DNASTREAM allows each customer to enjoy quicker deployment and earlier time to benefits from this SaaS Customer Experience solution. At the start of the project, DNASTREAM provides a full suite of e-learning simulations and materials matched to the customer's processes. This RapidLaunch content library for RightNow can be leveraged by customers early in their project implementations, bringing significant cost efficiencies, time reduction and improved user adoption to their project roll-outs. Solution Profile: This Oracle Accelerate solution is based on Oracle RightNow CX and includes content management, contact management, incident management, the Customer Portal, a closed-incident survey and standard reports. The Oracle RightNow CX Chat implementation is available as an additional option. For more information about RightNow and the DNASTREAM Accelerate solution, visit the Oracle Accelerate microsite or contact www.dnastream.com

    Read the article

  • Mike Neuenschwander on the Identity Platform

    - by Naresh Persaud
If you are in London on March 22nd, check out the Identity Platform Event. Mike is deeply passionate about the platform. I caught up with Mike recently for an interview to discuss his perspective on the Oracle Identity Platform. Identity Management is not a department-level initiative. To unlock the business potential of Identity Management, we have to think organizationally and holistically. To learn more about how to take a strategic approach to Identity Management, visit one of our physical events globally. Here are some of the listings and registrations worldwide: North America, Asia Pacific, Europe.

    Read the article

  • Find Nearest Object

    - by ultifinitus
I have a fairly sizable game engine built, and I'm adding some needed features, such as this one: how do I find the nearest object from a list of points? In this case, I could simply use the Pythagorean theorem to compute each distance and compare the results. I know I can't simply add x and y, because that only gives the distance to the object if you were restricted to right-angle turns. However, I'm wondering if there's something else I could do? I also have a collision system, where essentially I map objects onto a smaller grid, kind of like a minimap, and only if objects exist in the same grid space do I check for collisions. I could do the same thing here, only with a larger grid space, to check for closeness (rather than checking every. single. object) - however, that would take additional setup in my base class and clutter up the already cluttered object. TL;DR question: Is there something efficient and accurate that I can use to detect which object is closest, based on a list of points and sizes?
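Two standard tricks apply here. First, when you only need to rank distances, compare squared distances and skip the square root entirely: the object with the smallest dx*dx + dy*dy is also the nearest by true Euclidean distance. A minimal sketch in Java (GameObject is a hypothetical stand-in for the engine's own object type):

    import java.util.List;

    public class NearestFinder {
        public static class GameObject {
            public double x, y;
            public GameObject(double x, double y) { this.x = x; this.y = y; }
        }

        // Returns the object nearest to (px, py), or null for an empty list.
        public static GameObject nearest(double px, double py, List<GameObject> objects) {
            GameObject best = null;
            double bestSq = Double.POSITIVE_INFINITY;
            for (GameObject o : objects) {
                double dx = o.x - px, dy = o.y - py;
                double sq = dx * dx + dy * dy;   // squared Euclidean distance: no sqrt needed
                if (sq < bestSq) { bestSq = sq; best = o; }
            }
            return best;
        }
    }

Second, the coarse-grid idea from the collision system works for nearest-neighbor queries too: check the cell containing the query point first, then expand outward ring by ring of neighboring cells, and stop once the best candidate found so far is closer than the nearest unexplored ring - that bound guarantees correctness without touching every single object.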

    Read the article

  • My History with Agile

    - by Robert May
    I’m going to write my history with Agile here.  That way, in future posts, I can refer back to it, instead of typing it out in the post that contains information you may actually want to read.  Note that I’m actually a pretty senior developer, and do lots of technical interviews.  I’m an Agile fan because of the difference it makes in peoples lives and the improvement in quality it brings, and I’ll sacrifice my technological advance to help teams. Management History I started management pretty early in my career, starting with the first job that I ever had.  I actually do NOT have a CS or similar degree.  I have a Bachelor’s of Business Administration with an emphasis in Computer Information Systems. My first management gigs were around call center work and were very schedule oriented.  I didn’t understand the true value of teams, and I’m ashamed to admit, I actually installed a fingerprint scanner as a time clock in this job.  I shudder to think of the impact that I had on the team spirit.  I didn’t even trust them enough to fill out their time cards correctly.  How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, didn’t understand the concept of team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn’t a good thing. I was told that I wasn’t the type that would be a good manager by people whom I respected a lot.  They said it because I was a computer geek, since they don’t understand good management either, but in retrospect, they were right about me then.  I was too green. After my first job, I went on to other jobs and with the exception of one job, I’ve managed people at them all.  The rest of the management story is important for understanding agile, so I’ll save it for my next post. Technical History I’ve been in software development for many, many years.  I technically started programming on a commodore 64 in basic.  I didn’t know that I was programming, but I was sure having fun.  That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect Macro programming and other things that taught me the basics. My first “real” job was with a telephone company, and that’s where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like turbo pascal, vb3-vb5, and semi-real tools like RPG and VisualRPG.  I wrote my first web page in 1994, and built my first data driven web page in 1995 using perlDB.  You really can do anything with Perl.  At this time, I also started a Linux based internet service provider that is still in operation today.  One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well.  Smart guy.  I also built my first ASP applications connecting to Sql Server 6.5, setup Exchange 5.5 for the company, and many other system administration stuff.  I’m a programmer by choice, mostly because I don’t really like PC support. From there, I went on to a large state agency.  I got to see and maintain true waterfall projects.  5 years of maintaining the 200 VB COM+ (MTS, actually) dlls that were used to calculate a single number is a long time.  That was all Microsoft DNS technologies.  SQL Server and VB6 were the tools of choice, although .net started to be a factor near the end of employment.  
I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that was a shim until MSXML 3.0 came out.  Prior to 3.0, XSD’s weren’t supported, and I didn’t want to write DTDs. Ironically, jobs after this were more generic.  I pretty much settled in on the .net framework and revisions of it.  Lots of WPF, some silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don’t think that I was as passionate about development and technologies.  I was more into the management of development.  I like people. Technorati Tags: Agile,history

    Read the article

  • How to Import Data Taxonomy Into Managed Metada Service in SharePoint 2010

    - by Wayne
First, open the Term Store Management Tool (Site Actions > Site Settings > Term Store Management) and download the sample import file. (Remember, Service Applications are configured on a per-Web-Application basis, so use any site collection inside a Web App configured with your MMS.) Second, insert your data. As an example, I create a term called USA. Under that, I create the term Alabama. Under that, 4 cities. Then under USA, a term called Alaska. The point is that we have a hierarchy. Using the import file, we can go 7 layers deep. The last step is to save the file, head back into the Term Store Management Tool, select/create a group, and import the file.
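For reference, the sample import file is a plain CSV in which each row spells out the full path to one term, up to seven levels deep. From memory, the layout is roughly the sketch below - treat the exact header and values as assumptions and verify them against the sample file you downloaded. These rows would recreate the USA > Alabama > cities hierarchy described above:

    "Term Set Name","Term Set Description","LCID","Available for Tagging","Term Description","Level 1 Term","Level 2 Term","Level 3 Term","Level 4 Term","Level 5 Term","Level 6 Term","Level 7 Term"
    "Locations","Sample geography term set",,TRUE,,,,,,,,
    ,,1033,TRUE,,"USA",,,,,,
    ,,1033,TRUE,,"USA","Alabama",,,,,
    ,,1033,TRUE,,"USA","Alabama","Birmingham",,,,
    ,,1033,TRUE,,"USA","Alaska",,,,,

Note that every child row repeats its ancestors' names in the lower-level columns; that repetition is what lets the Term Store rebuild the hierarchy on import.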

    Read the article

< Previous Page | 380 381 382 383 384 385 386 387 388 389 390 391  | Next Page >