Search Results

Search found 22040 results on 882 pages for 'process improvement'.


  • PC won't boot up after installing 11.10

    - by bakhshu
    I have a HP/Compaq s1010z (AMD64) desktop and installed 11.10 from a CD (using the entire disk). So it formatted the entire drive, went through installation and then it ejected the CD and asked me to click the Restart button, which I did. It rebooted fine the first time, but any time thereafter, it fails. Meaning, after the initial BIOS screen, the monitor seems to be stuck in limbo with a text cursor blinking on the top left corner, as if it can't find anything to boot from. At first I tried reinstalling (reformat entire drive again) - no improvement. Then I did an in-place re-installation (leave home dirs in place, just redo the OS), nothing there either. Then I put in the 11.04 CD, changed the boot order to CD first, and got the CD menu, chose 'Boot from first hard disk' and it booted fine. The problem is that I can't boot without the 11.04 CD, how ironic! Any ideas?

    Read the article

  • Disable "Ubuntu 14.04" and "Files" from auto sorting alphabetically and THEN<(Disable the then.) automatically going to that location

    - by Doodski Man
    I looked around in Ubuntu 14.04 and could not find an option to disable a feature that makes extra work for me. After renaming something, "Files" reorganizes the list alphanumerically and then automatically moves the cursor/highlighting to the new file name at its new alphanumerical position. It would be much easier and more practical if the cursor stayed in the area where the file that needed its name changed or its spelling corrected was found. Windows 7 does what I'm asking about, and it's a huge improvement over auto sorting and then jumping to the new file location. An example to consider: a file is in the "A" area, you rename it to something starting with "Z", and you are then taken to the "Z" section and have to scroll hundreds of lines back to the "A" section. It's annoying. Thanks for reading and taking the time.

    Read the article

  • From the Coalface - 2 - Work as hard as you can to be as lazy as you can

    - by TATWORTH
    On one project, the standard build involved building the database from scratch, producing a build log that was about 100,000 lines long. Manually paging through this log took hours and it was very easy to miss errors. The solution was to write a filter that read the log file and output a summary file, with each output line prepended with its line number in the original file. Successive iterations eliminated more noise lines, and diagnostic displays required escaping. The final output was just a few hundred lines long and the errors were easy to spot. The filter program took less time to write than one manual pass reading the log file, and the utility digested the log file in about 1 second. Now database build errors could be identified quickly instead of after hours of manual examination or lengthy system testing. A few minutes' consideration of the task led to a tremendous improvement in quality.
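
    The filter itself isn't shown in the excerpt; a minimal C# sketch of the same idea might look like the following (the file names and noise patterns are made-up placeholders, not the project's actual values):

    using System;
    using System.IO;
    using System.Linq;

    class BuildLogFilter
    {
        // Hypothetical noise patterns; in practice the list grows over successive iterations.
        static readonly string[] NoisePatterns =
        {
            "rows affected",
            "Command(s) completed successfully"
        };

        static void Main(string[] args)
        {
            string inputPath = args.Length > 0 ? args[0] : "build.log";
            string outputPath = args.Length > 1 ? args[1] : "build-summary.log";

            using (var writer = new StreamWriter(outputPath))
            {
                int lineNumber = 0;
                foreach (string line in File.ReadLines(inputPath))
                {
                    lineNumber++;

                    // Drop blank lines and known noise.
                    if (string.IsNullOrWhiteSpace(line) ||
                        NoisePatterns.Any(p => line.IndexOf(p, StringComparison.OrdinalIgnoreCase) >= 0))
                    {
                        continue;
                    }

                    // Prepend the original line number so each hit can be located in the full log.
                    writer.WriteLine("{0,7}: {1}", lineNumber, line);
                }
            }
        }
    }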

    Read the article

  • Hash Function Added To The PredicateEqualityComparer

    - by Paulo Morgado
    Some time ago I wrote a predicate equality comparer to be used with LINQ's Distinct operator. The Distinct operator uses an instance of an internal Set class to maintain the collection of distinct elements in the source collection, which in turn checks the hash code of each element (by calling the GetHashCode method of the equality comparer) and, only if there's already an element with the same hash code in the collection, calls the Equals method of the comparer to disambiguate. At the time I provided only the possibility to specify the comparison predicate, but, because in some cases comparing a hash code instead of calling the provided comparison predicate can be a significant performance improvement, I've added the possibility to add a hash function to the predicate equality comparer. You can get the updated code from the PauloMorgado.Linq project on CodePlex.
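
    The updated class is in the linked project; purely as an illustration of the idea (my own sketch, not the actual PauloMorgado.Linq code), a predicate-based comparer that optionally accepts a hash function could be shaped like this:

    using System;
    using System.Collections.Generic;

    // Illustrative sketch only. When no hash function is supplied, a constant hash
    // code is returned so that Distinct always falls back to the predicate.
    public class PredicateEqualityComparer<T> : EqualityComparer<T>
    {
        private readonly Func<T, T, bool> predicate;
        private readonly Func<T, int> hashFunction;

        public PredicateEqualityComparer(Func<T, T, bool> predicate)
            : this(predicate, obj => 0)
        {
        }

        public PredicateEqualityComparer(Func<T, T, bool> predicate, Func<T, int> hashFunction)
        {
            this.predicate = predicate;
            this.hashFunction = hashFunction;
        }

        public override bool Equals(T x, T y)
        {
            return predicate(x, y);
        }

        // Distinct's internal Set checks hash codes first and only calls Equals
        // for elements whose hash codes collide.
        public override int GetHashCode(T obj)
        {
            return obj == null ? 0 : hashFunction(obj);
        }
    }

    // Usage sketch (Person/people are placeholder names):
    // var distinctNames = people.Distinct(new PredicateEqualityComparer<Person>(
    //     (a, b) => a.Name == b.Name,
    //     p => p.Name.GetHashCode()));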

    Read the article

  • How to handle situation of programmer stealing credit for your project [on hold]

    - by shorecc
    There is a coworker who has stolen credit for my work. I created a system that shows live statuses of printing machines throughout the plant. There was one glitch in it, so the coworker made an improvement and then rewrote the entire system (copying 90% of my code, though he did add a new piece of hardware) and took over the project. Now he asks me programming questions from time to time. I don't mind teaching coworkers how to program, but since he has stolen my ideas and credit in the past, I am reluctant to teach him all I know. Today he asked me about a particularly challenging algorithm for this system. His code now looks exactly like the code I would write! How would you handle the situation? How can I decline to help him without appearing not to be a team player?

    Read the article

  • Alt text vs CSS sprites (SEO vs speed)

    - by leeoniya
    I'm reworking our site to reduce HTTP requests and blocking requests by concatenating JS and CSS, gzipping, loading all JS via LABjs, and using CSS sprites for images that were previously loaded individually via <img> tags. Progress has been great so far - a 5x page load performance improvement. However, we're in the top 5 organic search rankings in Google for many targeted keywords and phrases, and I'm afraid that eliminating so many img tags with alt attributes could hurt our SEO. Does anyone have any experience with alt tag manipulation/removal and its effect on SEO positions? Is previous rank "sticky"?

    Read the article

  • cVidya’s MoneyMap Achieves Oracle Exadata Optimized Status

    - by Javier Puerta
    cVidya's MoneyMap running on Oracle Exadata provides extreme performance, including a 4x-16x improvement in high data load rates, 4x faster data transformation and reconciliation, and query speeds on a 2.5 billion record index improved from hours to a few seconds. The MoneyMap solution enables operators to reconcile information from all network, operations and business support systems and, through an ongoing automated process, detect problem areas that impact profitability as a result of revenue leakage, data inconsistencies or resources that are not being used efficiently. Once a problem is detected, MoneyMap provides tools to promptly correct and manage it to achieve profit maximization. Learn more here.

    Read the article

  • Express5800 to Mesh with SQL Server 2012

    NEC's GX servers have been engineered to supply high performance and availability. At their core is an Intel Xeon E7 processor with the power to handle up to 2TB of memory and 160 threads. In addition, thanks to QuickPath Interconnect technology, GX servers boast as much as a 200 percent improvement in database performance over their predecessors. The combination of NEC servers with Microsoft SQL Server 2012 gives users the necessary capabilities to truly realize the cloud's potential for their needs in a number of ways. Organizations get stable platforms built for enterprise environments that offer hig...

    Read the article

  • Money investments for software developers [closed]

    - by user91590
    Let's say you (or your company) have an amount of money that you would like to invest in improving your career, and probably make more money in the long run by landing a better job or raising your productivity. There are some classical investments that anyone would suggest: spend on programming books, since they pay for themselves very quickly; buy a better chair, since you will be sitting in it all day; attend conferences to improve your network of contacts. I'm looking for more examples to weigh my options in the quest for improvement. Google does not help, as it only shows job ads for programmers in the financial sector.

    Read the article

  • Which language to dive into after having mastered python? [closed]

    - by larsvegas
    To put it in a nutshell: which programming language would you advise going for next after having mastered Python? I do love Python (after having done some rather simple scripting in Perl for a couple of years) for its clarity and straightforwardness. Of course there is still room for improvement and topics I haven't touched yet. Still, I feel quite confident about my skills by now but want to keep learning. I thought it might be helpful to dive into another language to learn about the differences, advantages and disadvantages of one over the other, etc.

    Read the article

  • MySQL hangs if connection comes from outside the LAN

    - by Subito
    I have a MySQL Server operating just fine if I access him from his local LAN (192.168.100.0/24). If I try to access hin from another LAN (192.168.113.0/24 in this case) it hangs for a really long time before delivering the result. SHOW PROCESSLIST; shows this process in Sleep, State empty. If I strace -p this process I get the following Output (23512 is the PID of the corresponding mysqld process): Process 23512 attached - interrupt to quit restart_syscall(<... resuming interrupted call ...>) = 1 fcntl(10, F_GETFL) = 0x2 (flags O_RDWR) fcntl(10, F_SETFL, O_RDWR|O_NONBLOCK) = 0 accept(10, {sa_family=AF_INET, sin_port=htons(51696), sin_addr=inet_addr("192.168.113.4")}, [16]) = 33 fcntl(10, F_SETFL, O_RDWR) = 0 rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, ) = 0 getpeername(33, {sa_family=AF_INET, sin_port=htons(51696), sin_addr=inet_addr("192.168.113.4")}, [16]) = 0 getsockname(33, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0 open("/etc/hosts.allow", O_RDONLY) = 64 fstat(64, {st_mode=S_IFREG|0644, st_size=580, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000 read(64, "# /etc/hosts.allow: list of host"..., 4096) = 580 read(64, "", 4096) = 0 close(64) = 0 munmap(0x7f9ce9839000, 4096) = 0 open("/etc/hosts.deny", O_RDONLY) = 64 fstat(64, {st_mode=S_IFREG|0644, st_size=880, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000 read(64, "# /etc/hosts.deny: list of hosts"..., 4096) = 880 read(64, "", 4096) = 0 close(64) = 0 munmap(0x7f9ce9839000, 4096) = 0 getsockname(33, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0 fcntl(33, F_SETFL, O_RDONLY) = 0 fcntl(33, F_GETFL) = 0x2 (flags O_RDWR) setsockopt(33, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0 setsockopt(33, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0 fcntl(33, F_SETFL, O_RDWR|O_NONBLOCK) = 0 setsockopt(33, SOL_IP, IP_TOS, [8], 4) = 0 setsockopt(33, SOL_TCP, TCP_NODELAY, [1], 4) = 0 futex(0x7f9cea5c9564, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f9cea5c9560, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0x7f9cea5c6fe0, FUTEX_WAKE_PRIVATE, 1) = 1 poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=10, revents=POLLIN}]) fcntl(10, F_GETFL) = 0x2 (flags O_RDWR) fcntl(10, F_SETFL, O_RDWR|O_NONBLOCK) = 0 accept(10, {sa_family=AF_INET, sin_port=htons(51697), sin_addr=inet_addr("192.168.113.4")}, [16]) = 31 fcntl(10, F_SETFL, O_RDWR) = 0 rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, ) = 0 getpeername(31, {sa_family=AF_INET, sin_port=htons(51697), sin_addr=inet_addr("192.168.113.4")}, [16]) = 0 getsockname(31, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0 open("/etc/hosts.allow", O_RDONLY) = 33 fstat(33, {st_mode=S_IFREG|0644, st_size=580, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000 read(33, "# /etc/hosts.allow: list of host"..., 4096) = 580 read(33, "", 4096) = 0 close(33) = 0 munmap(0x7f9ce9839000, 4096) = 0 open("/etc/hosts.deny", O_RDONLY) = 33 fstat(33, {st_mode=S_IFREG|0644, st_size=880, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000 read(33, "# 
/etc/hosts.deny: list of hosts"..., 4096) = 880 read(33, "", 4096) = 0 close(33) = 0 munmap(0x7f9ce9839000, 4096) = 0 getsockname(31, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0 fcntl(31, F_SETFL, O_RDONLY) = 0 fcntl(31, F_GETFL) = 0x2 (flags O_RDWR) setsockopt(31, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0 setsockopt(31, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0 fcntl(31, F_SETFL, O_RDWR|O_NONBLOCK) = 0 setsockopt(31, SOL_IP, IP_TOS, [8], 4) = 0 setsockopt(31, SOL_TCP, TCP_NODELAY, [1], 4) = 0 futex(0x7f9cea5c9564, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f9cea5c9560, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0x7f9cea5c6fe0, FUTEX_WAKE_PRIVATE, 1) = 1 poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1^C <unfinished ...> Process 23512 detached This output repeats itself until the answer gets send. It could take up to 15 Minutes until the request gets served. In the local LAN its a matter of Milliseconds. Why is this and how can I debug this further? [Edit] tcpdump shows a ton of this: 14:49:44.103107 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [S.], seq 1434117703, ack 1793610733, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0 14:49:44.135187 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 106:145, ack 182, win 4345, length 39 14:49:44.135293 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [P.], seq 182:293, ack 145, win 115, length 111 14:49:44.167025 IP 192.168.X.6.64624 > cassandra-test.mysql: Flags [.], ack 444, win 4280, length 0 14:49:44.168933 IP 192.168.X.6.64626 > cassandra-test.mysql: Flags [.], ack 1, win 4390, length 0 14:49:44.169088 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [P.], seq 1:89, ack 1, win 115, length 88 14:49:44.169672 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 145:171, ack 293, win 4317, length 26 14:49:44.169726 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [P.], seq 293:419, ack 171, win 115, length 126 14:49:44.275111 IP 192.168.X.6.64626 > cassandra-test.mysql: Flags [P.], seq 1:74, ack 89, win 4368, length 73 14:49:44.275131 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [.], ack 74, win 115, length 0 14:49:44.275149 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 171:180, ack 419, win 4286, length 9 14:49:44.275189 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [P.], seq 89:100, ack 74, win 115, length 11 14:49:44.275264 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 180:185, ack 419, win 4286, length 5 14:49:44.275281 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [.], ack 185, win 115, length 0 14:49:44.275295 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [F.], seq 419, ack 185, win 115, length 0 14:49:44.275650 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [F.], seq 185, ack 419, win 4286, length 0 14:49:44.275660 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [.], ack 186, win 115, length 0 14:49:44.275910 IP 192.168.X.6.64627 > cassandra-test.mysql: Flags [S], seq 2336421549, win 8192, options [mss 1351,nop,wscale 2,nop,nop,sackOK], length 0 14:49:44.275921 IP cassandra-test.mysql > 192.168.X.6.64627: Flags [S.], seq 3289359778, ack 2336421550, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0

    Read the article

  • "Can't Connect to Server" from 2nd virtual host on VPS

    - by chaoskreator
    I'm using Debian 7 Wheezy and Apache 2.2.22, and I'm setting up Virtual Hosts for a number of websites on my VPS. I've successfully configured the VirtualHost directives for one of the sites, but the second one continually gives "Problem Loading Page" in Firefox. I've run configtest and it has verified all my syntax is correct, and I've checked all the permissions. Everything on the 2nd domain is pretty much copy/pasted from the first, so I'm not sure what the issue is, as there are no entries into /var/log/apache2/error.log other than where I have reloaded the configurations: /# cat /var/log/apache2/error.log [Thu May 29 01:19:00 2014] [notice] Graceful restart requested, doing restart [Thu May 29 01:19:00 2014] [info] Init: Seeding PRNG with 656 bytes of entropy [Thu May 29 01:19:00 2014] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu May 29 01:19:00 2014] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(253): shmcb_init allocated 512000 bytes of shared memory [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(272): for 511920 bytes (512000 including header), recommending 32 subcaches, 133 indexes each [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(306): shmcb_init_memory choices follow [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(308): subcache_num = 32 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(310): subcache_size = 15992 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(312): subcache_data_offset = 3208 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(314): subcache_data_size = 12784 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(316): index_num = 133 [Thu May 29 01:19:00 2014] [info] Shared memory session cache initialised [Thu May 29 01:19:00 2014] [info] Init: Initializing (virtual) servers for SSL [Thu May 29 01:19:00 2014] [info] mod_ssl/2.2.22 compiled against Server: Apache/2.2.22, Library: OpenSSL/1.0.1e [Thu May 29 01:19:00 2014] [notice] Apache/2.2.22 (Debian) PHP/5.4.4-14+deb7u9 mod_ssl/2.2.22 OpenSSL/1.0.1e mod_perl/2.0.7 Perl/v5.14.2 configured -- resuming normal operations [Thu May 29 01:19:00 2014] [info] Server built: Mar 4 2013 22:05:16 [Thu May 29 01:19:00 2014] [debug] prefork.c(1023): AcceptMutex: sysvsem (default: sysvsem) I've ensured to enable each vhost with a2ensite {sitename.conf} with no errors there, either. Below are the contents of the configuration files... /etc/apache2/apache2.conf # Global configuration # LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. 
ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxClients: maximum number of simultaneous client connections # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxClients: maximum number of simultaneous client connections # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType None HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel debug # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include list of ports to listen on and which to use for name based vhosts Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent <Directory "/var/www"> Order allow,deny Allow from all Require all granted </Directory> # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/*.conf NameVirtualHost *:80 /etc/apache2/sites-available/site1.net.conf <VirtualHost *:80> ServerName site1.net ServerAlias site1.net *.site1.net DocumentRoot "/var/www/site1" ErrorLog "/var/www/site1/logs/error.log" CustomLog "/var/www/site1/logs/access.log" vhost_combined <Directory "/var/www/site1"> Options None AllowOverride All Order allow,deny Allow from all Satisfy Any </Directory> </VirtualHost> /etc/apache2/sites-available/site2.com.conf <VirtualHost *:80> ServerName site2.com ServerAlias site2.com *.site2.com DocumentRoot "/var/www/site2" ErrorLog "/var/www/site2/logs/error.log" CustomLog "/var/www/site2/logs/access.log" vhost_combined <Directory "/var/www/site2"> Options None AllowOverride All Order allow,deny Allow from all Satisfy Any </Directory> </VirtualHost> I've also tried setting NameVirtualHost like: Listen 80 NameVirtualHost 23.88.121.82:80 NameVirtualHost 127.0.0.1:80 and the VirtualHost Directives: <VirtualHost 23.88.121.82:80> ... </VirtualHost> for both sites, but that causes the first site to fail, as well. 
I'm wondering if I need to set up individual IPs for each site, possibly? I have 2 more IPv4 and 3 IPv6 addresses available, if that would make a difference. Also, in the grand scheme of things, I will need to enable SSL for the first site. I've been reading that I'll need to basically just mimic the directives for listening on port 80, only on port 443, and make sure mod_ssl is enabled? EDIT: I just ran apache2 -t to test the config files that way, and got the error: apache2: bad user name ${APACHE_RUN_USER}. However, apachectl configtest returns Syntax OK. There are no other mentions of errors with the mutex anywhere else, however. I was pretty sure if there was an error with the user apache was supposed to run under, the server wouldn't start at all... EDIT 2: Restarting apache fixed the bad user name error.

    Read the article

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error. By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when: Clients/servers that are involved in the authentication process cannot use Kerberos. The client does not provide the information necessary to use Kerberos. An in-depth discussion of Kerberos authentication is beyond the scope of this post, however when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principle Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller; impersonate the client when passing the request to the database server; and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used. Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a users windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs as NTLM credentials are valid for only one network hop, subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error). 
To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to workaround the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established. In the illustration above, the tiers represent the following: Client tier (computer 1): The client computer from which an application makes a request. Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now). Back end tier (computer 3): The Database/Analysis Services server/Cluster where the requested data is stored. In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation and configure the authentication types for Reporting Services. Service Principle Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered --SQL Server Service SETSPN -S mssqlsvc/servername:1433 Domain\SQL For named instances, or if the default instance is running under a different port, then the specific port number should be used. --Reporting Services Service SETSPN -S http/servername Domain\SSRS SETSPN -S http/servername.domain.com Domain\SSRS The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered SETSPN -S http/www.reports.com Domain\SSRS --Analysis Services Service SETSPN -S msolapsvc.3/servername Domain\SSAS Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer: Location Description Client 1. The requesting application must support the Kerberos authentication protocol. 2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated. Servers 1. The service accounts must be trusted for delegation on the domain controller. 2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs. 
In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected): We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation:   We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports). Finally, and this is the part that sometimes gets over-looked, we need to configure the authentication type correctly for reporting services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server. <Authentication> <AuthenticationTypes>           <RSWindowsNegotiate/> </AuthenticationTypes> <EnableAuthPersistence>true</EnableAuthPersistence> </Authentication> This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test to make sure Kerberos authentication is working properly by running a report from report manager that is configured to use Windows Integrated authentication (either connecting to Analysis Services or SQL Server back-end). Resources: Manage Kerberos Authentication Issues in a Reporting Services Environment http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176 How to: Configure Windows Authentication in Reporting Services http://msdn.microsoft.com/en-us/library/cc281253.aspx RSReportServer Configuration File http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication Planning for Browser Support http://msdn.microsoft.com/en-us/library/ms156511.aspx

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements. General Enhancements This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration time to be up and operational. Lifecycle Automation Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured. Enhancing the setup time and configuration time to be up and operational. Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction. 
OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances. Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files). Providing better granularity when managing PLL objects. Enhanced Standard Checker: Provides greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.) HTML Package Readme: The package Readme is in HTML format and includes the file listing. Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced search package (that is, Public, Last Updated by, Files Source Mapping, and E-Business Suite Mapping). Enhanced Package Build Notifications: More detailed information on the results of a package build process. Better, more detailed troubleshooting guidance in the event of build failures. Patch Manager:Staged Patches: Ability to run Patch Manager with no external internet access. Customer can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections. Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run. More comprehensive and correct Patch Runs. Removes many manual and laborious tasks, frees up Apps DBAs for higher value-added tasks. Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process, available for Release 12 only. Setup Manager:Preview Extract Results: Ability to execute an extract in "proof mode", and examine the query results, to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering. This feature can provide better and more accurate fine tuning of extracts. Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded. Allows customer to reuse uploaded extracts to provision new instances. Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customer to reuse existing extracts to provision new instances. Allows comparative historical reporting of identical APIs, executed at different times. Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format. Native support for industry standard formats. Concurrent Manager API Support: General Foundation now provides an API for management of "Concurrent Manager" configuration data. Ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers. User Experience Management Enhancements Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. 
This latest release of the management suite include numerous improvements in real user monitoring support. KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values. By default, this is two decimal places. KPI numerator and denominator information. It is now possible to view KPI numerator and denominator information, and to have it available for export. Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents. Data Export: The Enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but is now stored within the Reporter database. However, it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now more easy than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources. SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes. Performance Improvements: Enhanced dashboard performance. The dashboard facility has been enhanced to support the parallel loading of items. In the case of dashboards containing large numbers of items, this can result in a significant performance improvement. Initial period selection within Data Browser and reports. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility. The default is the last hour. Performance improvement when querying the all sessions group. Technical Prerequisites, Download and Installation Instructions The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress: * Oracle Solaris on SPARC (64-bit) (9, 10) * HP-UX Itanium (11.23, 11.31) * HP-UX PA-RISC (64-bit) (11.23, 11.31) * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Refreshing Your PC Won’t Help: Why Bloatware is Still a Problem on Windows 8

    - by Chris Hoffman
    Bloatware is still a big problem on new Windows 8 and 8.1 PCs. Some websites will tell you that you can easily get rid of manufacturer-installed bloatware with Windows 8′s Reset feature, but they’re generally wrong. This junk software often turns the process of powering on your new PC from what could be a delightful experience into a tedious slog, forcing you to spend hours cleaning up your new PC before you can enjoy it. Why Refreshing Your PC (Probably) Won’t Help Manufacturers install software along with Windows on their new PCs. In addition to hardware drivers that allow the PC’s hardware to work properly, they install more questionable things like trial antivirus software and other nagware. Much of this software runs at boot, cluttering the system tray and slowing down boot times, often dramatically. Software companies pay computer manufacturers to include this stuff. It’s installed to make the PC manufacturer money at the cost of making the Windows computer worse for actual users. Windows 8 includes “Refresh Your PC” and “Reset Your PC” features that allow Windows users to quickly get their computers back to a fresh state. It’s essentially a quick, streamlined way of reinstalling Windows.  If you install Windows 8 or 8.1 yourself, the Refresh operation will give your PC a clean Windows system without any additional third-party software. However, Microsoft allows computer manufacturers to customize their Refresh images. In other words, most computer manufacturers will build their drivers, bloatware, and other system customizations into the Refresh image. When you Refresh your computer, you’ll just get back to the factory-provided system complete with bloatware. It’s possible that some computer manufacturers aren’t building bloatware into their refresh images in this way. It’s also possible that, when Windows 8 came out, some computer manufacturer didn’t realize they could do this and that refreshing a new PC would strip the bloatware. However, on most Windows 8 and 8.1 PCs, you’ll probably see bloatware come back when you refresh your PC. It’s easy to understand how PC manufacturers do this. You can create your own Refresh images on Windows 8 and 8.1 with just a simple command, replacing Microsoft’s image with a customized one. Manufacturers can install their own refresh images in the same way. Microsoft doesn’t lock down the Refresh feature. Desktop Bloatware is Still Around, Even on Tablets! Not only is typical Windows desktop bloatware not gone, it has tagged along with Windows as it moves to new form factors. Every Windows tablet currently on the market — aside from Microsoft’s own Surface and Surface 2 tablets — runs on a standard Intel x86 chip. This means that every Windows 8 and 8.1 tablet you see in stores has a full desktop with the capability to run desktop software. Even if that tablet doesn’t come with a keyboard, it’s likely that the manufacturer has preinstalled bloatware on the tablet’s desktop. Yes, that means that your Windows tablet will be slower to boot and have less memory because junk and nagging software will be on its desktop and in its system tray. Microsoft considers tablets to be PCs, and PC manufacturers love installing their bloatware. If you pick up a Windows tablet, don’t be surprised if you have to deal with desktop bloatware on it. Microsoft Surfaces and Signature PCs Microsoft is now selling their own Surface PCs that they built themselves — they’re now a “devices and services” company after all, not a software company. 
One of the nice things about Microsoft’s Surface PCs is that they’re free of the typical bloatware. Microsoft won’t take money from Norton to include nagging software that worsens the experience. If you pick up a Surface device that provides Windows 8.1 and 8 as Microsoft intended it — or install a fresh Windows 8.1 or 8 system — you won’t see any bloatware. Microsoft is also continuing their Signature program. New PCs purchased from Microsoft’s official stores are considered “Signature PCs” and don’t have the typical bloatware. For example, the same laptop could be full of bloatware in a traditional computer store and clean, without the nasty bloatware when purchased from a Microsoft Store. Microsoft will also continue to charge you $99 if you want them to remove your computer’s bloatware for you — that’s the more questionable part of the Signature program. Windows 8 App Bloatware is an Improvement There’s a new type of bloatware on new Windows 8 systems, which is thankfully less harmful. This is bloatware in the form of included “Windows 8-style”, “Store-style”, or “Modern” apps in the new, tiled interface. For example, Amazon may pay a computer manufacturer to include the Amazon Kindle app from the Windows Store. (The manufacturer may also just receive a cut of book sales for including it. We’re not sure how the revenue sharing works — but it’s clear PC manufacturers are getting money from Amazon.) The manufacturer will then install the Amazon Kindle app from the Windows Store by default. This included software is technically some amount of clutter, but it doesn’t cause the problems older types of bloatware does. It won’t automatically load and delay your computer’s startup process, clutter your system tray, or take up memory while you’re using your computer. For this reason, a shift to including new-style apps as bloatware is a definite improvement over older styles of bloatware. Unfortunately, this type of bloatware has not replaced traditional desktop bloatware, and new Windows PCs will generally have both. Windows RT is Immune to Typical Bloatware, But… Microsoft’s Windows RT can’t run Microsoft desktop software, so it’s immune to traditional bloatware. Just as you can’t install your own desktop programs on it, the Windows RT device’s manufacturer can’t install their own desktop bloatware. While Windows RT could be an antidote to bloatware, this advantage comes at the cost of being able to install any type of desktop software at all. Windows RT has also seemingly failed — while a variety of manufacturers came out with their own Windows RT devices when Windows 8 was first released, they’ve all since been withdrawn from the market. Manufacturers who created Windows RT devices have criticized it in the media and stated they have no plans to produce any future Windows RT devices. The only Windows RT devices still on the market are Microsoft’s Surface (originally named Surface RT) and Surface 2. Nokia is also coming out with their own Windows RT tablet, but they’re in the process of being purchased by Microsoft. In other words, Windows RT just isn’t a factor when it comes to bloatware — you wouldn’t get a Windows RT device unless you purchased a Surface, but those wouldn’t come with bloatware anyway. Removing Bloatware or Reinstalling Windows 8.1 While bloatware is still a problem on new Windows systems and the Refresh option probably won’t help you, you can still eliminate bloatware in the traditional way. 
Bloatware can be uninstalled from the Windows Control Panel or with a dedicated removal tool like PC Decrapifier, which tries to automatically uninstall the junk for you. You can also do what Windows geeks have always tended to do with new computers — reinstall Windows 8 or 8.1 from scratch with installation media from Microsoft. You’ll get a clean Windows system and you can install only the hardware drivers and other software you need. Unfortunately, bloatware is still a big problem for Windows PCs. Windows 8 tries to do some things to address bloatware, but it ultimately comes up short. Most Windows PCs sold in most stores to most people will still have the typical bloatware slowing down the boot process, wasting memory, and adding clutter. Image Credit: LG on Flickr, Intel Free Press on Flickr, Wilson Hui on Flickr, Intel Free Press on Flickr, Vernon Chan on Flickr     

    Read the article

  • Launch market place with id of an application that doesn't exist in the android market place

    - by Gaurav
    Hi, I am creating an application that checks the installation of a package and then launches the market-place with its id. When I try to launch market place with id of an application say com.mybrowser.android by throwing an intent android.intent.action.VIEW with url: market://details?id=com.mybrowser.android, the market place application does launches but crashes after launch. Note: the application com.mybrowser.android doesn't exists in the market-place. MyApplication is my application. $ adb logcat I/ActivityManager( 1030): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=myapp.testapp/.MyApplication } I/ActivityManager( 1030): Start proc myapp.testapp for activity myapp.testapp/.MyApplication: pid=3858 uid=10047 gids={1015, 3003} I/MyApplication( 3858): [ Activity CREATED ] I/MyApplication( 3858): [ Activity STARTED ] I/MyApplication( 3858): onResume D/dalvikvm( 1109): GC freed 6571 objects / 423480 bytes in 73ms I/MyApplication( 3858): Pressed OK button I/MyApplication( 3858): Broadcasting Intent: android.intent.action.VIEW, data: market://details?id=com.mybrowser.android I/ActivityManager( 1030): Starting activity: Intent { act=android.intent.action.VIEW dat=market://details?id=com.mybrowser.android flg=0x10000000 cmp=com.android.ven ding/.AssetInfoActivity } I/MyApplication( 3858): onPause I/ActivityManager( 1030): Start proc com.android.vending for activity com.android.vending/.AssetInfoActivity: pid=3865 uid=10023 gids={3003} I/ActivityThread( 3865): Publishing provider com.android.vending.SuggestionsProvider: com.android.vending.SuggestionsProvider D/dalvikvm( 1030): GREF has increased to 701 I/vending ( 3865): com.android.vending.api.RadioHttpClient$1.handleMessage(): Handle DATA_STATE_CHANGED event: NetworkInfo: type: WIFI[], state: CONNECTED/CO NNECTED, reason: (unspecified), extra: (none), roaming: false, failover: false, isAvailable: true I/ActivityManager( 1030): Displayed activity com.android.vending/.AssetInfoActivity: 609 ms (total 7678 ms) D/dalvikvm( 1030): GC freed 10458 objects / 676440 bytes in 128ms I/MyApplication( 3858): [ Activity STOPPED ] D/dalvikvm( 3865): GC freed 3538 objects / 254008 bytes in 84ms W/dalvikvm( 3865): threadid=19: thread exiting with uncaught exception (group=0x4001b180) E/AndroidRuntime( 3865): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception E/AndroidRuntime( 3865): java.lang.RuntimeException: An error occured while executing doInBackground() E/AndroidRuntime( 3865): at android.os.AsyncTask$3.done(AsyncTask.java:200) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask.run(FutureTask.java:137) E/AndroidRuntime( 3865): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) E/AndroidRuntime( 3865): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) E/AndroidRuntime( 3865): at java.lang.Thread.run(Thread.java:1096) E/AndroidRuntime( 3865): Caused by: java.lang.NullPointerException E/AndroidRuntime( 3865): at com.android.vending.AssetItemAdapter$ReloadLocalAssetInformationTask.doInBackground(AssetItemAdapter.java:845) E/AndroidRuntime( 3865): at 
com.android.vending.AssetItemAdapter$ReloadLocalAssetInformationTask.doInBackground(AssetItemAdapter.java:831) E/AndroidRuntime( 3865): at android.os.AsyncTask$2.call(AsyncTask.java:185) E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) E/AndroidRuntime( 3865): ... 4 more I/Process ( 1030): Sending signal. PID: 3865 SIG: 3 I/dalvikvm( 3865): threadid=7: reacting to signal 3 I/dalvikvm( 3865): Wrote stack trace to '/data/anr/traces.txt' I/DumpStateReceiver( 1030): Added state dump to 1 crashes D/AndroidRuntime( 3865): Shutting down VM W/dalvikvm( 3865): threadid=3: thread exiting with uncaught exception (group=0x4001b180) E/AndroidRuntime( 3865): Uncaught handler: thread main exiting due to uncaught exception E/AndroidRuntime( 3865): java.lang.NullPointerException E/AndroidRuntime( 3865): at com.android.vending.controller.AssetInfoActivityController.getIdDeferToLocal(AssetInfoActivityController.java:637) E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity.displayAssetInfo(AssetInfoActivity.java:556) E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity.access$800(AssetInfoActivity.java:74) E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity$LoadAssetInfoAction$1.run(AssetInfoActivity.java:917) E/AndroidRuntime( 3865): at android.os.Handler.handleCallback(Handler.java:587) E/AndroidRuntime( 3865): at android.os.Handler.dispatchMessage(Handler.java:92) E/AndroidRuntime( 3865): at android.os.Looper.loop(Looper.java:123) E/AndroidRuntime( 3865): at android.app.ActivityThread.main(ActivityThread.java:4363) E/AndroidRuntime( 3865): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime( 3865): at java.lang.reflect.Method.invoke(Method.java:521) E/AndroidRuntime( 3865): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) E/AndroidRuntime( 3865): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) E/AndroidRuntime( 3865): at dalvik.system.NativeStart.main(Native Method) I/Process ( 1030): Sending signal. PID: 3865 SIG: 3 W/ActivityManager( 1030): Process com.android.vending has crashed too many times: killing! D/ActivityManager( 1030): Force finishing activity com.android.vending/.AssetInfoActivity I/dalvikvm( 3865): threadid=7: reacting to signal 3 D/ActivityManager( 1030): Force removing process ProcessRecord{44e48548 3865:com.android.vending/10023} (com.android.vending/10023) However, when I try to launch the market place for a package that exists in the market place say com.opera.mini.android, everything works. Log for this case: D/dalvikvm( 966): GC freed 2781 objects / 195056 bytes in 99ms I/MyApplication( 1165): Pressed OK button I/MyApplication( 1165): Broadcasting Intent: android.intent.action.VIEW, data: market://details?id=com.opera.mini.android I/ActivityManager( 78): Starting activity: Intent { act=android.intent.action.VIEW dat=market://details?id=com.opera.mini.android flg=0x10000000 cmp=com.android.vending/.AssetInfoActivity } I/AndroidRuntime( 1165): AndroidRuntime onExit calling exit(0) I/WindowManager( 78): WIN DEATH: Window{44c72308 myapp.testapp/myapp.testapp.MyApplication paused=true} I/ActivityManager( 78): Process myapp.testapp (pid 1165) has died. 
I/WindowManager( 78): WIN DEATH: Window{44c72958 myapp.testapp/myapp.testapp.MyApplication paused=false} D/dalvikvm( 78): GC freed 31778 objects / 1796368 bytes in 142ms I/ActivityManager( 78): Displayed activity com.android.vending/.AssetInfoActivity: 214 ms (total 22866 ms) W/KeyCharacterMap( 978): No keyboard for id 65540 W/KeyCharacterMap( 978): Using default keymap: /system/usr/keychars/qwerty.kcm.bin V/RenderScript_jni( 966): surfaceCreated V/RenderScript_jni( 966): surfaceChanged V/RenderScript( 966): setSurface 480 762 0x573430 D/ViewFlipper( 966): updateRunning() mVisible=true, mStarted=true, mUserPresent=true, mRunning=true D/dalvikvm( 978): GC freed 10065 objects / 624440 bytes in 95ms Any ideas? Thanks in advance!

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies need to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). OK, so all I need to do is to reference both NUnit versions: the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias which prefixes all types inside it. Then I could decorate my tests with the TestFixture and Test attributes from both NUnit versions and everything worked fine, except that this was ugly. After playing around a little bit to make it simpler I found that I did not need to reference both NUnit.Framework assemblies. The test runners do not require the TestFixture and Test attributes in their specific version. That is really neat: since the test runners are instructed what to do by attributes in a declarative way, there is really no need to tie the runners to a specific version. At its core NUnit has this little method hidden to find matching TestFixtures and Tests:

public bool CanBuildFrom(Type type)
{
    if (!(!type.IsAbstract || type.IsSealed))
    {
        return false;
    }
    return (((Reflect.HasAttribute(type, "NUnit.Framework.TestFixtureAttribute", true) ||
              Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute", true)) ||
              Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute", true)) ||
              Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute", true));
}

That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes, and the runner executes my intent without binding me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions), but this is also handled nicely by checking for the caught exception type by string instead of using the concrete type. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly, with all the consequences. Type equality is lost between versions, so none of your casts will work. That means that you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correctly versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to decouple your business logic from the concrete versions. This is of course more work, but as NUnit shows it can be easy. Simplicity is therefore not only a nice thing to have but also requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model above easy will not be maintainable. There are different approaches to versioning. Below are my own personal observations on how versioning works within the .NET Framework and NUnit.

Versioning Models

1. Bug Fixing and New Isolated Features

When you only need to fix bugs there is no need to break anything. This is especially true when you have a big API surface. Microsoft did this with the .NET Framework 3.0, which left the CLR as-is but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundation. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well, mostly; each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new green assemblies of .NET 3.0/3.5 did not have such obligations, since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new unrelated features. If you have a big API surface you should strive hard to do the same or you will break your customers' code with every release.

2. New Breaking Features

There are times when really new things need to be added to an existing product. The .NET Framework 4.0 changed the CLR in many ways which caused subtly different behavior, although the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. a changed method signature void Func() -> bool Func()), but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0/3.0/3.5 applications will not automatically use the .NET 4.0 runtime when it is installed but will keep using the "old" one. What is interesting is that a side-by-side execution model of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation. You will have 2 GCs, 2 JIT compilers, 2 finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms). Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint because everything is loaded and running two times.

3. New Non-Breaking Features

It really depends where you break things. NUnit has evolved and many different Assert, Expect… methods have been added. These changes are all localized in the NUnit.Framework assembly, which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable it is possible to write test executors which can run tests written for NUnit 1.0, because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way, but this is only feasible if the interaction model is relatively simple.

Versioning software is hard, and it looks like it will remain hard, since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.
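To make the declarative contract concrete, here is a minimal sketch (class, namespace and method names are illustrative, not from the article) of a test fixture that compiles against one NUnit.Framework version yet can be discovered by a newer runner, because the runner matches the attribute type names by string rather than by assembly version:

using NUnit.Framework;

namespace Versioning.Samples
{
    // Any NUnit runner that probes for "NUnit.Framework.TestFixtureAttribute"
    // and "NUnit.Framework.TestAttribute" by name will pick this fixture up,
    // regardless of which NUnit.Framework version was referenced at compile time.
    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_ReturnsSum()
        {
            Assert.AreEqual(4, 2 + 2);
        }

        [TestCase(1, 2, 3)]
        [TestCase(-1, 1, 0)]
        public void Add_Cases(int a, int b, int expected)
        {
            Assert.AreEqual(expected, a + b);
        }
    }
}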

    Read the article

  • Faye private pub web sockets Errno::ECONNREFUSED: Connection refused - connect(2)

    - by Rubytastic
    Faye private pub has issues connecting. It works from rails console and from inside application. It fails when called from background process like delayed_job or sidekiq. I have been unable to resolve this issue for some time now, does anyone know why this happens? Errno::ECONNREFUSED: Connection refused - connect(2) /Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/resolv-replace.rb:23:in initialize' /Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/resolv-replace.rb:23:in initialize' /Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:878:in open' /Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:878:in block in connect' /Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/timeout.rb:52:in timeout' /Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:877:in connect' /Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:862:in do_start' /Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:851:in start' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/private_pub-1.0.3/lib/private_pub.rb:42:in publish_message' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/private_pub-1.0.3/lib/private_pub.rb:29:in publish_to' /srv/books/app/workers/session_reload.rb:16:in perform' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:48:in block (3 levels) in process' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:119:in call' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:119:inblock in invoke' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/newrelic_rpm-3.6.8.168/lib/new_relic/agent/instrumentation/sidekiq.rb:25:in block in call' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/newrelic_rpm-3.6.8.168/lib/new_relic/agent/instrumentation/controller_instrumentation.rb:324:in perform_action_with_newrelic_trace' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/newrelic_rpm-3.6.8.168/lib/new_relic/agent/instrumentation/sidekiq.rb:21:in call' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:inblock in invoke' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-failures-0.2.2/lib/sidekiq/failures/middleware.rb:10:in call' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:inblock in invoke' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/server/active_record.rb:6:in call' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:inblock in invoke' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/server/retry_jobs.rb:62:in call' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:inblock in invoke' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/server/logging.rb:11:in block in call' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/logging.rb:22:in with_context' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/server/logging.rb:7:in call' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:inblock in invoke' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:124:in call' 
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:124:ininvoke' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:47:in block (2 levels) in process' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:102:in stats' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:46:in block in process' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:83:in do_defer' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:37:in process' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in public_send' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in dispatch' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/calls.rb:122:in dispatch' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/actor.rb:322:in block in handle_message' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/actor.rb:416:in block in task' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/tasks.rb:55:in block in initialize' /Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/tasks/task_fiber.rb:13:in block in create' Processor: dev-air.local:db67c04914cdef80c501043115298f6d-70211452597260
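For context, here is a minimal sketch of how such a publish is typically wired with the private_pub gem; the worker body, channel name and YAML values are illustrative reconstructions, not taken from the question. The trace shows the failure originates in PrivatePub.publish_message, which opens a Net::HTTP connection to the Faye server configured for the Rails environment the background process booted with, so ECONNREFUSED indicates nothing was listening at that host and port when the worker ran.

# config/private_pub.yml -- illustrative values
# development:
#   server: "http://localhost:9292/faye"
#   secret_token: "secret"

# app/workers/session_reload.rb -- hypothetical reconstruction
class SessionReload
  include Sidekiq::Worker

  def perform(user_id)
    # publish_to issues an HTTP POST (via Net::HTTP) to the Faye server
    # configured for the *current* RAILS_ENV; if that server is not running
    # or not reachable from the worker process, Errno::ECONNREFUSED is raised.
    PrivatePub.publish_to("/sessions/#{user_id}", reload: true)
  end
end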

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important concept of SQL Server. I have recently started to explore it and I am really learning some good concepts. Here are two very important blog posts which one should go over before continuing this blog post:
Installing Data Quality Services (DQS) on SQL Server 2012
Connecting Error to Data Quality Services (DQS) on SQL Server 2012

This article is an introduction to Data Quality Services for beginners. We will be using an Excel file. In the first article we learned to install DQS. In this article we will see how we can build a Knowledge Base and use it to help us identify the quality of the data as well as help correct bad-quality data. Here are the two very important steps we will be learning in this tutorial:
Building a New Knowledge Base
Creating a New Data Quality Project

Let us start with building the Knowledge Base. Click on New Knowledge Base. In our project we will be using the Excel file as a knowledge base. Here is the Excel file which we will be using. There are two columns: one is Colors and another is Shade. They are independent columns and not related to each other. The point which I am trying to show is that in Column A there is unique data and in Column B there are duplicate records. Clicking on New Knowledge Base will bring up the following screen. Enter the name of the new knowledge base. Clicking NEXT will bring up the following screen, where it will allow you to select the Excel file and also let you select the source columns. I have selected both Colors and Shade as source columns.

Creating a domain is very important. Here you can create a unique domain or a domain which is built as a composite of Colors and Shade. As this is the first example, I will create unique domains: for Colors I will create the domain Colors and for Shade I will create the domain Shade. Here is how the screen will look after creating the domains. Clicking NEXT will bring you to the following screen where you can do the data discovery. Clicking on START will start the processing of the source data provided. The pre-processed data will show various information related to the source data. In our case it shows that the Colors column has unique data whereas Shade has non-unique data, with only two unique rows. In the next screen you can actually add more rows as well as see the frequency of the data, as the values are listed uniquely. Clicking NEXT will publish the knowledge base which was just created.

Now the knowledge base is created. We will take some random data and attempt to run DQS over it. I am using another Excel sheet here for simplicity; in reality you can easily use a SQL Server table for the same. Click on New Data Quality Project to start a DQS project. In the next screen it will ask which knowledge base to use. We will be using our Colors knowledge base which we have recently created. In the Colors knowledge base we had two columns: 1) Colors and 2) Shade. In our case we will be using both of the mappings here. The user can select one or multiple column mappings over here.

Now comes the most important phase of the complete project. Click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed and it completed the task with the necessary information. It demonstrated that in the Colors column it has not corrected any value by itself, but for the Shade values it has a suggestion. We can train DQS to correct values, but let us keep that subject for future blog posts. Now click NEXT and keep the domain Colors selected on the left side. It will demonstrate that there are two incorrect values which need to be corrected. This is the place where, once corrected, the value will be auto-corrected in the future. I manually corrected the value here and clicked on the Approve radio buttons. As soon as I click on the Approve buttons the rows disappear from this tab and move to the Corrected tab. If I had rejected them, the rows would have moved to the Invalid tab instead. In this screen you can see the 2 corrected rows. You can click on the Correct tab and see the 6 previously validated rows which passed the DQS process.

Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer as Dark with a confidence level of 77%. It is quite a high confidence level, and manual observation also demonstrates that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab.

On the next screen DQS shows the summary of all the activities. It also demonstrates how the correction of the quality of the data was performed. The user can export their data to a SQL Server table, CSV file or Excel. The user also has the option to export either the data and all the associated cleansing info, or the data only. I will select Data only for demonstration purposes. Clicking Export will generate the file. Let us open the generated file. It will look as follows, and it looks pretty complete and corrected.

Well, we have successfully completed the DQS process. The process is indeed very easy. I suggest you try this out yourself and you will find it very easy to learn. In the future we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need. It will be indeed interesting to see where it is implemented.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS
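As noted above, a SQL Server table can be used instead of the Excel sheet as the source for a Data Quality Project. A minimal illustrative sketch of such a source table (table and column names are assumptions, not from the original post):

-- Source table mirroring the two columns used in the example; a Data Quality
-- Project can map these columns to the Colors and Shade domains.
CREATE TABLE dbo.ColorSource
(
    Colors NVARCHAR(50),
    Shade  NVARCHAR(50)
);

INSERT INTO dbo.ColorSource (Colors, Shade)
VALUES (N'Red',   N'Dark'),
       (N'Green', N'Light'),
       (N'Blue',  N'Drak');  -- intentionally bad value for DQS to flag for correction
GO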

    Read the article

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer’s Conference (PDC) in 2010 we announced an addition to the Computational Roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them. Virtualization Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can “host” or run several “virtual” computers. These virtual computers can run anywhere - including at a vendor’s location. Some companies refer to this as Cloud Computing since the hardware is operated and maintained elsewhere. IaaS The more detailed definition of this type of computing is called Infrastructure as a Service (Iaas) since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, that runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud. State-ful and Stateless Programming This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide. This is because much of the software developed in this environment maintains “state” - and that requires a little explanation. “State-ful programming” means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation - you expect that the other person picks up the phone, listens to you, and talks back all in a single unit of time. In “Stateless” computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you’re doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the “send” button (the network has the state) and you go to a meeting. The server receives the message and stores it on a mail program’s database (the mail server has the state now) and continues working on other mail. Finally, the other party logs on to their mail client and reads the mail (the other user has the state) and responds to it and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn’t maintain the state, each component does. This is the exact concept behind coding for Windows Azure. 
The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs, as Windows Azure does, you have built-in redundancy and recovery. It's just built into the design.

The Difference Between Infrastructure Designs and Platform Designs

When you simply take a physical server running software and virtualize it either privately or publicly, you haven't done anything to allow the code to scale or have recovery. That all has to be handled by adding more code and more Virtual Machines that have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It's also not as easy to deploy these VMs, and more importantly, you're often charged on a longer basis to remove them. Your agility in IaaS is more limited.

Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of Stateless programming. You don't worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that isn't what you want to manage, monitor, maintain or license. You don't want to deploy an operating system - you want to deploy an application. You want your code to run, and you don't care how it does that. Another benefit of PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there's no provisioning, it just happens. And you can stop using them more quickly - and the base code for your application does not have to change to make this happen.

Windows Azure Roles and Their Use

If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn't involve a user interface you can add a Worker Role. They are just containers that act a certain way. I'll provide more detail on those later. Note: That's a general description, so it's not entirely accurate, but it's accurate enough for this discussion. So now we're back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say, Linux, and deploy it to Windows Azure using this Role. But you can't. That's not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create the Private or Public Infrastructure as a Service. What the VM Role is actually designed to do is to allow you to have a great deal of control over the system where your code will run. Let's take an example. You've heard about Windows Azure and Platform programming. You're convinced it's the right way to code. But you have a lot of things you've written in another way at your company. Re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can make (or already have made) the software "Stateless"; you just need more control over the place where the code runs. That's the place where a VM Role makes sense.
Recap

Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft's offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running Stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment where that code runs.
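As a rough illustration of the stateless Web/Worker Role model described above, here is a minimal Worker Role sketch; it assumes the Windows Azure SDK's RoleEntryPoint base class, and the work-item handling is a hypothetical placeholder:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// Illustrative Worker Role: every instance runs the same stateless loop,
// so the platform can add or remove instances without changing the code.
public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-time initialization (configuration, diagnostics, connections).
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // Pull the next message (for example from a queue), process it,
            // and move on; no state is shared between iterations or instances.
            ProcessNextWorkItem();
            Thread.Sleep(TimeSpan.FromSeconds(1));
        }
    }

    private static void ProcessNextWorkItem()
    {
        // Hypothetical placeholder for the actual stateless work.
    }
}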

    Read the article

  • Computer crashes on resume from standby almost every time

    - by Los Frijoles
    I am running Ubuntu 12.04 on a Core i5 2500K and ASRock Z68 Pro3-M motherboard (no graphics card, hd is a WD Green 1TB, and cd drive is some cheap lite-on drive). Since installing 12.04, my computer has been freezing after resume, but not every time. When I start to resume, it starts going normally with a blinking cursor on the screen and then sometimes it will continue on to the gnome 3 unlock screen. Most of the time, however, it will blink for a little bit and then the monitor will flip modes and shut off due to no signal. Pressing keys on the keyboard gets no response (num lock light doesn't respond, Ctrl-Alt-F1 fails to drop it into a terminal, Ctrl-Alt-Backspace doesn't work) and so I assume the computer is crashed. The worst part is, the logs look entirely normal. Here is my system log during one of these crashes and my subsequent hard poweroff and restart: Jun 6 21:54:43 kcuzner-desktop udevd[10448]: inotify_add_watch(6, /dev/dm-2, 10) failed: No such file or directory Jun 6 21:54:43 kcuzner-desktop udevd[10448]: inotify_add_watch(6, /dev/dm-2, 10) failed: No such file or directory Jun 6 21:54:43 kcuzner-desktop udevd[10448]: inotify_add_watch(6, /dev/dm-1, 10) failed: No such file or directory Jun 6 21:54:43 kcuzner-desktop udevd[12419]: inotify_add_watch(6, /dev/dm-0, 10) failed: No such file or directory Jun 6 21:54:43 kcuzner-desktop udevd[10448]: inotify_add_watch(6, /dev/dm-0, 10) failed: No such file or directory Jun 6 22:09:01 kcuzner-desktop CRON[9061]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete) Jun 6 22:17:01 kcuzner-desktop CRON[22142]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 6 22:39:01 kcuzner-desktop CRON[26909]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete) Jun 6 22:54:21 kcuzner-desktop kernel: [57905.560822] show_signal_msg: 36 callbacks suppressed Jun 6 22:54:21 kcuzner-desktop kernel: [57905.560828] chromium-browse[9139]: segfault at 0 ip 00007f3a78efade0 sp 00007fff7e2d2c18 error 4 in chromium-browser[7f3a76604000+412b000] Jun 6 23:05:43 kcuzner-desktop kernel: [58586.415158] chromium-browse[21025]: segfault at 0 ip 00007f3a78efade0 sp 00007fff7e2d2c18 error 4 in chromium-browser[7f3a76604000+412b000] Jun 6 23:09:01 kcuzner-desktop CRON[13542]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete) Jun 6 23:12:43 kcuzner-desktop kernel: [59006.317590] usb 2-1.7: USB disconnect, device number 8 Jun 6 23:12:43 kcuzner-desktop kernel: [59006.319672] sd 7:0:0:0: [sdg] Synchronizing SCSI cache Jun 6 23:12:43 kcuzner-desktop kernel: [59006.319737] sd 7:0:0:0: [sdg] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK Jun 6 23:17:01 kcuzner-desktop CRON[26580]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jun 6 23:19:04 kcuzner-desktop acpid: client connected from 29925[0:0] Jun 6 23:19:04 kcuzner-desktop acpid: 1 client rule loaded Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Successfully made thread 30131 of process 30131 (n/a) owned by '104' high priority at nice level -11. 
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Supervising 1 threads of 1 processes of 1 users. Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Successfully made thread 30162 of process 30131 (n/a) owned by '104' RT at priority 5. Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Supervising 2 threads of 1 processes of 1 users. Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Successfully made thread 30163 of process 30131 (n/a) owned by '104' RT at priority 5. Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Supervising 3 threads of 1 processes of 1 users. Jun 6 23:19:07 kcuzner-desktop bluetoothd[1140]: Endpoint registered: sender=:1.239 path=/MediaEndpoint/HFPAG Jun 6 23:19:07 kcuzner-desktop bluetoothd[1140]: Endpoint registered: sender=:1.239 path=/MediaEndpoint/A2DPSource Jun 6 23:19:07 kcuzner-desktop bluetoothd[1140]: Endpoint registered: sender=:1.239 path=/MediaEndpoint/A2DPSink Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Successfully made thread 30166 of process 30166 (n/a) owned by '104' high priority at nice level -11. Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Supervising 4 threads of 2 processes of 1 users. Jun 6 23:19:07 kcuzner-desktop pulseaudio[30166]: [pulseaudio] pid.c: Daemon already running. Jun 6 23:19:10 kcuzner-desktop acpid: client 2942[0:0] has disconnected Jun 6 23:19:10 kcuzner-desktop acpid: client 29925[0:0] has disconnected Jun 6 23:19:10 kcuzner-desktop acpid: client connected from 1286[0:0] Jun 6 23:19:10 kcuzner-desktop acpid: 1 client rule loaded Jun 6 23:19:31 kcuzner-desktop bluetoothd[1140]: Endpoint unregistered: sender=:1.239 path=/MediaEndpoint/HFPAG Jun 6 23:19:31 kcuzner-desktop bluetoothd[1140]: Endpoint unregistered: sender=:1.239 path=/MediaEndpoint/A2DPSource Jun 6 23:19:31 kcuzner-desktop bluetoothd[1140]: Endpoint unregistered: sender=:1.239 path=/MediaEndpoint/A2DPSink Jun 6 23:28:12 kcuzner-desktop kernel: imklog 5.8.6, log source = /proc/kmsg started. Jun 6 23:28:12 kcuzner-desktop rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="1053" x-info="http://www.rsyslog.com"] start Jun 6 23:28:12 kcuzner-desktop rsyslogd: rsyslogd's groupid changed to 103 Jun 6 23:28:12 kcuzner-desktop rsyslogd: rsyslogd's userid changed to 101 Jun 6 23:28:12 kcuzner-desktop rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ] Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Ericsson MBM Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Sierra Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Generic Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Huawei Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Linktop Jun 6 23:28:12 kcuzner-desktop bluetoothd[1072]: Failed to init gatt_example plugin Jun 6 23:28:12 kcuzner-desktop bluetoothd[1072]: Listening for HCI events on hci0 Jun 6 23:28:12 kcuzner-desktop NetworkManager[1080]: <info> NetworkManager (version 0.9.4.0) is starting... 
Jun 6 23:28:12 kcuzner-desktop NetworkManager[1080]: <info> Read config file /etc/NetworkManager/NetworkManager.conf Jun 6 23:28:12 kcuzner-desktop NetworkManager[1080]: <info> VPN: loaded org.freedesktop.NetworkManager.pptp Jun 6 23:28:12 kcuzner-desktop NetworkManager[1080]: <info> DNS: loaded plugin dnsmasq Jun 6 23:28:12 kcuzner-desktop kernel: [ 0.000000] Initializing cgroup subsys cpuset Jun 6 23:28:12 kcuzner-desktop kernel: [ 0.000000] Initializing cgroup subsys cpu Sorry it's so huge; the restart happens at 23:28:12 I believe and all I see is that chromium segfaulted a few times. I wouldn't think a segfault from an individual program on the computer would crash it, but could that be the issue?

    Read the article

  • LightDM will not start after stopping it

    - by Sweeters
    I am running Ubuntu 11.10 "Oneiric Ocelot", and in trying to install the nvidia CUDA developer drivers I switched to a virtual terminal (Ctrl-Alt-F5) and stopped lightdm (installation required that no X server instance be running) through sudo service lightdm stop. Re-starting lightdm with sudo service lightdm start did not work: A couple of * Starting [...] lines where displayed, but the process hanged. (I do not remember at which point, but I think it was * Starting System V runlevel compatibility. I manually rebooted my laptop, and ever since booting seems to hang, usually around the * Starting anac(h)ronistic cron [OK] log line (not consistently at that point, though). From that point on, I seem to be able to interact with my system only through a tty session (Ctrl-Alt-F1). I've tried purging and reinstalling both lightdm and gdm, as well as selecting both as the default display managers (through sudo dpkg-reconfigure [lightdm / gdm] or by manually editing /etc/X11/default-display-manager) through both apt-get and aptitude (that shouldn't make a difference anyway) after updating the packages, but the problem persists. Some of the responses I'm getting are the following: After running sudo dpkg-reconfigure lightdm (but not ... gdm) I get the following message: dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_NAME missing dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_PACKAGE missing After trying sudo service lightdm start or sudo start lightdm I get to see the boot loading screen again but nothing changes. If I go back to the tty shell I see lightdm start/running, process <num> but ps -e | grep lightdm gives no output. After trying sudo service gdm start or sudo starg gdm I get the gdm start/running, process <num> message, and gdm-binary is supposedly an active process, but all that happens is that the screen blinks a couple of times and nothing else. Other candidate solutions that I'd found on the web included running startx but when I try that I get an error output [...] Fatal server error: no screens found [...]. Moreover, I made sure that lightdm-gtk-greeter is installed but that did not help either. Please excuse my not including complete outputs/logs; I am writing this post from another computer and it's hard to manually copy the complete logs. Also, I've seen several posts that had to do with similar problems, but either there was no fix, or the one suggested did not work for me. In closing: Please help! I very much hope to avoid re-installing Ubuntu from scratch! :) Alex @mosi I did not manage to fix the NVIDIA kernel driver as per your instructions. I should perhaps mention that I'm on a Dell XPS15 laptop with an NVIDIA Optimus graphics card, and that I have bumblebee installed (which installs nvidia drivers during its installation, I believe). Issuing the mentioned commands I get the following: ~$uname -r 3.0.0-12-generic ~$lsmod | grep -i nvidia nvidia 11713772 0 ~$dmesg | grep -i nvidia [ 8.980041] nvidia: module license 'NVIDIA' taints kernel. 
[ 9.354860] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.354864] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.354868] nvidia 0000:01:00.0: enabling device (0006 -> 0007) [ 9.354873] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9.354879] nvidia 0000:01:00.0: setting latency timer to 64 [ 9.355052] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 280.13 Wed Jul 27 16:53:56 PDT 2011 Also, running aptitude search nvidia gives me the following: p nvidia-173 - NVIDIA binary Xorg driver, kernel module a p nvidia-173-dev - NVIDIA binary Xorg driver development file p nvidia-173-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-173-updates-dev - NVIDIA binary Xorg driver development file p nvidia-96 - NVIDIA binary Xorg driver, kernel module a p nvidia-96-dev - NVIDIA binary Xorg driver development file p nvidia-96-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-96-updates-dev - NVIDIA binary Xorg driver development file p nvidia-cg-toolkit - Cg Toolkit - GPU Shader Authoring Language p nvidia-common - Find obsolete NVIDIA drivers i nvidia-current - NVIDIA binary Xorg driver, kernel module a p nvidia-current-dev - NVIDIA binary Xorg driver development file c nvidia-current-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-current-updates-dev - NVIDIA binary Xorg driver development file i nvidia-settings - Tool of configuring the NVIDIA graphics dr p nvidia-settings-updates - Tool of configuring the NVIDIA graphics dr v nvidia-va-driver - v nvidia-va-driver - I've tried manually installing (sudo aptitude install <package>) packages nvidia-common and nvidia-settings-updates but to no avail. For example, sudo aptitude install nvidia-settings-updates returns the following log: Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... No packages will be installed, upgraded, or removed. 0 packages upgraded, 0 newly installed, 0 to remove and 83 not upgraded. Need to get 0 B of archives. After unpacking 0 B will be used. Writing extended state information... Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... The same happens with the Linux headers (i.e. I cannot seem to be able to install linux-headers-3.0.0-12-generic). 
The output of aptitude search linux-headers is as follows: v linux-headers - v linux-headers - v linux-headers-2.6 - i linux-headers-2.6.38-11 - Header files related to Linux kernel versi i linux-headers-2.6.38-11-generic - Linux kernel headers for version 2.6.38 on i A linux-headers-2.6.38-8 - Header files related to Linux kernel versi i A linux-headers-2.6.38-8-generic - Linux kernel headers for version 2.6.38 on v linux-headers-3 - v linux-headers-3.0 - v linux-headers-3.0 - i A linux-headers-3.0.0-12 - Header files related to Linux kernel versi p linux-headers-3.0.0-12-generic - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-generic- - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-server - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-virtual - Linux kernel headers for version 3.0.0 on p linux-headers-generic - Generic Linux kernel headers p linux-headers-generic-pae - Generic Linux kernel headers v linux-headers-lbm - v linux-headers-lbm - v linux-headers-lbm-2.6 - v linux-headers-lbm-2.6 - p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo p linux-headers-lbm-3.0.0-12-serv - Header files related to linux-backports-mo p linux-headers-server - Linux kernel headers on Server Equipment. p linux-headers-virtual - Linux kernel headers for virtual machines @heartsmagic I did try purging and reinstalling any nvidia driver packages, but it did not seem to make a difference, My xorg.conf file contains the following: # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 28.0 - 33.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection

    Read the article

  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
    Lately we have seen an increase in projects that have failed to achieve either user-friendly content integration or satisfactory performance. Our intention is to mitigate any knowledge gap that our previous posts might have left you with, therefore this post will repeat some recommendations and reference back to older useful posts. Moreover, this post will help you understand from the ground up how to design, architect and implement business-enabled, responsive and performing portals with complex requirements on business-centric information publishing.

Design the Information Model

The key to successful portal deployments is information modeling. It is a key task to understand the use case you are designing for, therefore I have designed a set of questions you need to ask yourself or your customer:

Question: Who will own the content, IT or Business? Answer: Business
Question: Who will publish the content, IT or Business? Answer: Business
Question: Will there be multiple publishers? Answer: Yes
Question: Are the publishers computer scientists? Answer: No
Question: How often does the information change, daily, weekly, monthly? Answer: Daily, weekly

If your answers to the questions match at least 2 of the above, we strongly recommend you design your content with the following principles:

Divide your pages into logical sections, where each section is marked with its purpose
Assign capabilities to each section: does it contain text, images, formatting, and/or is it static and populated through other contextual information
Select the editor/design element type:
WYSIWYG - Rich Text
Plain Text - non-formatted text
Image - Image object
Static List - static list of formatted information
Dynamic Data List - assembled information from multiple data files through CMIS query

The result of such a design map could look like the following examples: Based on the outcome of the required elements in the design (column 3 from the left) you will now simply design a data model in WebCenter Content - Site Studio by creating a Region Definition structure matching your design requirements. For more information on how to create a Region Definition see the following post: Region Definition Post - note see instruction 7 for details. Each region definition can now be used to instantiate data files; a data file will hold the actual data for each element in the region definition. Another way you can see this is to regard the region definition as an extension to the metadata model in WebCenter Content for each data file item.

Design content templates

With a solid, dependable information model we can now proceed to template creation and page design; this phase focuses on how to place the content sections from the region definition on the page via a Content Presenter template. Remember, by creating Content Presenter templates you will leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier since you already have the information model and design wire-frames to base the logic on, however there are still a few considerations to pay attention to:

Base the template on ADF and make only necessary exceptions to markup when required
Leverage ADF design components for Tabs, Accordions and other similar components; this way the design in the content published areas will comply with other design areas based on custom ADF taskflows
There is no performance impact when using metadata or region definition based data
All data access, regardless of type (metadata or xml data), can be accessed via the Content Presenter - Node.

See below for applied examples on how to access data:

Access a metadata property from the Document - #{node.propertyMap['myProp'].value} (myProp in this example can be, for instance, dDocName, dDocTitle, xComments or any other available metadata)
Access element data from the data file xml - #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml} (Region Definition Name is the expected region definition that the current data file is instantiating; Element name is the element value you would like to grab from the data file)

I recommend you read the following useful posts on the content template topic: CMIS queries and template creation - note see instruction 9 for details; Static List template rendering. For more information on templates: Single Item Content Template, Multi Item Content Template, Expression Language

Internationalization Considerations

When integrating content assets via the Content Presenter you by now probably understand that the content item/data file is wired to the page. What is also pretty common at this stage is that the content item/data file only supports one language, since it is not practical or business friendly to mix that into a complex structure. Therefore you will be left with a very common dilemma: you will have to build a completely new portal for each locale, which is not a good option! However, with a little bit of information modeling and a clear naming convention this can be addressed. Basically you can simply make sure that all content items/data files are named with a predictable naming convention like "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple, non-complex customizations, you will be able to dynamically switch the actual content item/data file just before rendering. By following the proposed approach above you not only enable a simple mechanism for internationalized content, you also preserve the functionality in the Content Presenter to support business-accessible run-time publishing of information on existing and new pages. I recommend you read the following useful post on Internationalization topics: Internationalize with Content Presenter

Integrate with Review & Approval processes

Today the review and approval functionality and configuration is based on WebCenter Content - Criteria Workflows. Criteria Workflows use the metadata of the checked-in document to evaluate if the document is under any review/approval process. So for instance, if a Criteria Workflow is configured to trigger on any documents with Version = "2" or higher and Content Type "Instructions", any matching content item version on check-in will now enter the workflow before getting released for general access. A few things to consider when configuring Criteria Workflows:

Make sure not to trigger on version one for Content Items that are Data Files - if you trigger on version 1 you will not only approve an empty document, you will also have a Content Presenter pointing to a non-existing document, since the document will only be available after successful completion of the workflow
Approval workflows sometimes require more complex criteria; the recommendation in that case is that the metadata triggering such criteria is automatically populated, which can be achieved through many approaches including Content Profiles
Criteria Workflows are configured and managed in the WebCenter Content Administration Applets, where you can configure one or more workflows.

When you have configured Criteria Workflows, the Content Presenter will support the editors with the approval process directly inline in the "Contribution mode" of the portal. In addition to approve/reject and details of the task, the Content Presenter natively supports the user in viewing the current and future version of the change he/she is approving. See below for an example:

Architectural recommendation

To support review & approval processes - minimize the amount of data files per page
Each CMIS query can consume significant time depending on the complexity of the query - minimize the amount of CMIS queries per page
Use Content Presenter Templates based on ADF - this way you minimize the design considerations and optimize the usage of caching
Implement the page in as few data files as possible - this simplifies the publishing process, increases performance and simplifies the release process
A named data file (node) or list of named nodes when integrating to pages increases performance vs. querying for data
A named data file (node) or list of named nodes when integrating to pages enables business-centric page creation and publishing and reduces the need for IT department interaction

Summary

Just because one architectural decision solves a business problem doesn't mean it's the right one; when designing portals, all architecture has to be in harmony and not impact each other. For instance, the most technically complex solution is not always the best, since it will most likely defeat the business accessibility, the performance or both. Therefore the best approach is to first design for simplicity that even a non-technical user can operate, after that consider the performance impact, and finally look at the technology challenges these bring and work around them first with out-of-the-box features, after that designing and developing functions to complement the shortcomings.
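To tie the Expression Language examples above to an actual Content Presenter template, here is a hedged, minimal sketch of a single-item display template fragment; the dt:contentTemplateDef wrapper follows the standard WebCenter sample templates, and the region definition name "RD_News" and its elements are purely hypothetical:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative single-item Content Presenter display template (.jsff). -->
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
          xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
          xmlns:dt="http://xmlns.oracle.com/webcenter/content/templates">
  <dt:contentTemplateDef var="node">
    <af:panelGroupLayout layout="vertical">
      <!-- Metadata property of the data file -->
      <af:outputText value="#{node.propertyMap['dDocTitle'].value}"/>
      <!-- Element value from the data file, rendered as HTML -->
      <af:outputText value="#{node.propertyMap['RD_News:Body'].asTextHtml}"
                     escape="false"/>
    </af:panelGroupLayout>
  </dt:contentTemplateDef>
</jsp:root>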

    Read the article

  • nginx problem accessing virtual hosts

    - by Sc0rian
    I am setting up nginx as a reverse proxy. The server runs on directadmin and lamp stack. I have nginx running on port 81. I can access all my sites (including virtual ips) on the port 81. However when I forward the traffic from port 80 to 81, the virtual ips have a message saying "Apache is running normally". Server IPs are fine, and I can still access virtual IP's on 81. [root@~]# netstat -an | grep LISTEN | egrep ":80|:81" tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <serverip>:81 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN apache 24090 0.6 1.3 29252 13612 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24092 0.9 2.1 39584 22056 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24096 0.2 1.9 35892 20256 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24120 0.3 1.7 35752 17840 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24495 0.0 1.4 30892 14756 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24496 1.0 2.1 39892 22164 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24516 1.5 3.6 55496 38040 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24519 0.1 1.2 28996 13224 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24521 2.7 4.0 58244 41984 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24522 0.0 1.2 29124 12672 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24524 0.0 1.1 28740 12364 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24535 1.1 1.7 36008 17876 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24536 0.0 1.1 28592 12084 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24537 0.0 1.1 28592 12112 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24539 0.0 0.0 0 0 ? Z 18:35 0:00 [httpd] <defunct> apache 24540 0.0 1.1 28592 11540 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24541 0.0 1.1 28592 11548 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL root 24548 0.0 0.0 4132 752 pts/0 R+ 18:35 0:00 egrep apache|nginx root 28238 0.0 0.0 19576 284 ? Ss May29 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf apache 28239 0.0 0.0 19888 804 ? S May29 0:00 nginx: worker process apache 28240 0.0 0.0 19888 548 ? S May29 0:00 nginx: worker process apache 28241 0.0 0.0 19736 484 ? S May29 0:00 nginx: cache manager process here is my nginx conf: cat /usr/local/nginx/conf/nginx.conf user apache apache; worker_processes 2; # Set it according to what your CPU have. 
4 Cores = 4 worker_rlimit_nofile 8192; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; server_tokens off; access_log /var/log/nginx_access.log main; error_log /var/log/nginx_error.log debug; server_names_hash_bucket_size 64; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 30; gzip on; gzip_comp_level 9; gzip_proxied any; proxy_buffering on; proxy_cache_path /usr/local/nginx/proxy_temp levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m; proxy_buffer_size 16k; proxy_buffers 100 8k; proxy_connect_timeout 60; proxy_send_timeout 60; proxy_read_timeout 60; server { listen <server ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <server host name> _; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<server ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } location /nginx_status { stub_status on; access_log off; allow 127.0.0.1; deny all; } } include /usr/local/nginx/vhosts/*.conf; } here is my vhost conf: # cat /usr/local/nginx/vhosts/1.conf server { listen <virt ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <virt domain name>.com ; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<virt ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } }

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia.  At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions.  Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL. There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers.  Event sponsors included Ernst & Young, SWIFT and Fujitsu.  There were also a number of XBRL technology and service providers exhibiting at the conference.  On Monday Nov. 8th, the XBRL US Steering Committee meetings and Annual Members meeting and reception were held.  At the Annual Members meeting the big news was that current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center.  Campbell Pryde, who had led the Taxonomy Development for XBRL US, is taking over as XBRL US President. Other items that were highlighted at the members meeting included: The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance 16 filer training events were held in 2010 XBRL Global Magazine was launched Corporate Actions proposal was submitted to the SEC with SWIFT in May XBRL Labs for iPhone, XBRL US Consistency Suite launched ISO 2022 Corporate Actions Alignment with XBRL achieved The XBRL Credit Rating taxonomy was accepted Tuesday Nov. 9th included Keynotes, General Sessions, Innovation Workshop for Governments and Securities Professionals, and an Opening Reception.  General sessions included: Lessons Learned from the SEC's rollout of XBRL.  More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010.  Most of these related to negative values being used where they shouldn't have.  Also, the SEC feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements.  They emphasize using existing elements in the US GAAP taxonomy and advise filers not to  create extensions to improve the visual formatting of XBRL filings. Investors and XBRL - Setting the Standard for Data Quality.  In this panel discussion, the key learning was that CFA's, academics and the financial community are not using XBRL as expected.  The issues raised include the  accuracy and completeness of filings, number of taxonomy extensions, and limited number of tools available to help analyze XBRL data.  Another big issue that was raised is the lack of historic results in XBRL - most analysts need 10 quarters of historic data.  On the positive side, XBRL has the potential to eliminate re-keying of data and errors here and can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings. A US Roadmap for XBRL Financial Reporting.  This was a panel discussion featuring Jeff Neumann(SEC), Campbell Pryde(XBRL US), and Louis Matherne(FASB).  Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011.  XBRL for Mutual Fund Reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review.  The XBRL tagging/filing process is improving each quarter - more education is helping here.  
The FASB is looking at extensions to date, and potential additions to US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data.  The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by SEC.  The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (i.e airlines) and the elimination of 500 concepts.  (meaning they can't be used going forward but are still supported for historical comparison)  The 2011 US GAAP Taxonomy will be available for usage with Q2 2011 SEC filings.  More information about this can be found on the FASB web site.  http://www.fasb.org/home Accounting Firms and XBRL.  This session covered the Role of Audit Firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning.  The main advice provided was that organizations should document XBRL mapping process, perform peer comparisons, and risk assessments on a regular basis. Wednesday Nov. 10th included more Keynotes, General Sessions on Corporate Actions, and XBRL Essentials Workshop Training for corporate filers.  The XBRL Essentials Training included: Getting Started Once you Have the Basics Detailed Footnote Tagging and Handling Tables Quality Control and Trust in the XBRL Process Bringing XBRL In-House:  What are the Options, What should you consider? The US GAAP Financial Reporting Taxonomy - Overview of the 2011 release The XBRL Essentials Training was well-attended with about 80 people.  This included a good overview of the SEC's XBRL mandate, limited liability issue, tagging levels, recommended planning process, internal vs. outsourced approach, and how to manage service providers.  I learned a lot from the session on detailed tagging.  This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information).  The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand.  The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required.  The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx) For me, the main summary points and takeaways from the XBRL US conference are: XBRL for financial reporting has turned the corner and gone mainstream - with 1500 companies currently using it and 8000 more coming in 2011 The expected value is not being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle) XBRL is becoming the global standard for all business communications beyond just the financials - i.e. adoption for mutual funds, corporate actions and others planned for the future If you would like to learn more about XBRL and the various training programs, services and software tools that are available check out the XBRL US web site and even better - become a member.  Here's a link:  http://xbrl.us/Pages/default.aspx

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >