Search Results

Search found 1802 results on 73 pages for 'a bright worker'.


  • Unable to start rabbitmq-server on Ubuntu 12.04

    - by lxyu
    I tried to install rabbitmq-server on ubuntu-server 12.04 but it failed. I then added the apt source list following the guide at http://www.rabbitmq.com/install-debian.html, but reinstalling produced the same error:

        $ sudo aptitude install rabbitmq-server
        ...
        Setting up rabbitmq-server (2.8.7-1) ...
         * Starting message broker rabbitmq-server
         * FAILED - check /var/log/rabbitmq/startup_{log, _err}
         ...fail!
        invoke-rc.d: initscript rabbitmq-server, action "start" failed.
        dpkg: error processing rabbitmq-server (--configure):
        subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
        rabbitmq-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install. Trying to recover:
        Setting up rabbitmq-server (2.8.7-1) ...
         * Starting message broker rabbitmq-server
         * FAILED - check /var/log/rabbitmq/startup_{log, _err}
         ...fail!
        invoke-rc.d: initscript rabbitmq-server, action "start" failed.
        dpkg: error processing rabbitmq-server (--configure):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        rabbitmq-server

    The error logs do not seem to show anything useful either:

        # startup_err shows this
        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

        # startup_log shows this
        {error_logger,{{2012,10,10},{22,31,54}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,epmd_close}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
        {error_logger,{{2012,10,10},{22,31,54}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.90>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,24},{reductions,511}],[]]}
        {error_logger,{{2012,10,10},{22,31,54}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[[rabbitmqprelaunch18417,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2012,10,10},{22,31,54}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2012,10,10},{22,31,54}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}

    I have googled for some time but found nothing useful. One suggested solution is to make sure the hostname is pingable, but my /etc/hosts already has this line on top:

        127.0.0.1 localhost myserver

    Any suggestions on how to get rabbitmq-server up?
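
    The epmd_close badmatch in startup_log suggests the Erlang port mapper daemon (epmd) could not be reached or could not register the node, which sits below RabbitMQ itself. A hedged diagnostic sketch - it assumes the stock Ubuntu init scripts, and treating a stale epmd as the culprit is an assumption, not a confirmed fix:

        # stop whatever half-started state the failed install left behind
        sudo service rabbitmq-server stop
        # ask the Erlang port mapper (default port 4369) which nodes it knows about
        epmd -names
        # assumption: a stale epmd instance from an earlier run is blocking registration
        sudo pkill epmd
        # the short hostname must resolve locally, e.g. via the 127.0.0.1 line in /etc/hosts
        hostname --short && ping -c1 "$(hostname --short)"
        sudo service rabbitmq-server start

    If epmd -names itself hangs or errors, the problem is in the Erlang distribution layer rather than in the rabbitmq-server package, which would also explain why reinstalling changes nothing.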

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Win 2003 64 bit, 8xCPUs, 16GB memory, separate SQL 2005 DB server. We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocks were occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

        Current Anonymous Users (for the site in question)
        Get requests/sec (ditto)
        Requests/sec for the ASP.NET application running the site

    Get requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time, Get requests/sec and Requests/sec for ASP.NET were, if anything, going down. I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (I added a Current Connections counter):

        Current Anonymous Users: average 30
        Get requests/sec: average 100
        Requests/sec for ASP.NET: 5
        Current Connections: average 300

    I have also observed an inverse relationship between Get requests/sec and Current Anonymous Users. Usually both are fairly steady, but there will be short periods when Get requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels. So, my questions are:

        1. Thinking of the original performance issue - if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served? What other counters should I be looking at if this happens again?
        2. What explains the inverse relationship between Get requests/sec and Current Anonymous Users?
        3. What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
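
    For the "what other counters" question, a hedged capture sketch using typeperf, which ships with Windows; the counter paths assume English counter names and use the _Total instance, so adjust them to the specific site (this is an illustration, not a prescription):

        typeperf "\Web Service(_Total)\Current Anonymous Users" ^
                 "\Web Service(_Total)\Get Requests/sec" ^
                 "\ASP.NET\Requests Queued" ^
                 "\ASP.NET\Requests Current" ^
                 -si 5 -o slowdown.csv

    If Requests Queued climbs while Requests/sec stays flat during the next incident, that would point at request queueing (e.g. thread-pool starvation or a blocked downstream call) rather than raw traffic volume - consistent with anonymous users, which roughly track open connections, piling up while completions stall.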

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule - which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this? There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted both as a site config file and as an application config file within the site. I don't know if this might be related to the authentication problem. If we can't find the cause, we're thinking of a few workarounds we could try:

        1. Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule.
        2. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
        3. (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
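
    When the hang recurs, a point-in-time snapshot of the stuck requests can be taken from the command line before recycling the pool; a hedged sketch using the built-in appcmd (the path assumes a default IIS7 install, and the 30-second threshold is arbitrary):

        cd %windir%\system32\inetsrv
        rem list requests that have been executing for more than 30 seconds
        appcmd list requests /elapsed:30000

    Taking a few of these snapshots a minute apart would show whether requests are parked in WindowsAuthenticationModule from the moment they arrive or only drift there over time, which may help separate an authentication deadlock from a queue backed up behind something else.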

    Read the article

  • Apache server completely freezes until it gets restarted

    - by nbv4
    My server does this every few days. What sucks is that it always seems to do this right after I go to bed, so when I wake up I'm greeted with the fact that my server has been down for the past 6 or 7 hours. When I first noticed this, I added a cronjob that tries to restart the server every 15 minutes, but I guess that didn't fix it. Once I noticed the server was down, I ran this command:

        /etc/init.d/apache2 restart
         * Restarting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        ... waiting ...........................................................apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        httpd (pid 17597) already running

    ...which is odd, because a restart should restart the server even if it's already running, correct? I eventually had to "stop" and then "start" to get it working again. I then looked through the logs and found something very weird. Around the time the server crashed, the log entries are wildly out of order. It looks a little like this:

        xx.xxx.xxx.x - - [21/Apr/2010:06:32:05 -0400] "GET / blah"
        xx.xxx.xxx.x - - [21/Apr/2010:06:51:25 -0400] "GET / blah"
        x.xx.xxx.xxx - - [21/Apr/2010:06:38:23 -0400] "GET / blah"
        xxx.xx.xx.xx - - [21/Apr/2010:06:31:56 -0400] "GET / blah"
        xxx.xx.xx.xx - - [21/Apr/2010:06:51:49 -0400] "GET / blah"
        xx.xx.xxx.xx - - [21/Apr/2010:06:33:20 -0400] "GET / blah"

    I don't think the problem is memory: the memory graph I captured (the image is not reproduced here) showed that right before the crash, memory usage was fine. I'm running apache with the worker mpm; here are the settings for that:

        <IfModule mpm_worker_module>
            StartServers 1
            MaxClients 100
            MinSpareThreads 5
            MaxSpareThreads 10
            ThreadsPerChild 10
            MaxRequestsPerChild 3000
        </IfModule>

    This apache server is running a bunch of stuff, but most of the traffic comes from a django project I'm hosting, which uses mod_wsgi. There is also a Simple Machines forum running off of mod_fcgid. Those settings are below:

        <IfModule mod_fcgid.c>
            MaxRequestsPerProcess 500
            MaxProcessCount 3
            AddHandler fcgid-script .php .fcgi
            AddHandler cgi-script .cgi .pl
            FCGIWrapper "/usr/bin/php-cgi" .php
        </IfModule>

    Does anyone know of anything else I can check? I've just about tweaked every single setting I can think of, yet these freezes still happen.
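
    The next time it wedges, the scoreboard is worth a look before restarting, since it shows what every worker thread is doing at that moment; a hedged sketch (it assumes mod_status is enabled with ExtendedStatus On and that /server-status is reachable from localhost):

        # machine-readable scoreboard summary
        curl -s "http://localhost/server-status?auto"
        # or via the bundled wrapper (needs a text browser such as lynx installed)
        apachectl fullstatus

    A scoreboard full of threads stuck in "W" (sending reply) would suggest hung mod_wsgi or mod_fcgid backends holding threads open until MaxClients is exhausted, which would also fit a restart that waits forever while the old children never exit.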

    Read the article

  • MySQL timeout only from office network to remote server, but other connections are fine

    - by Adam
    I've been developing these apps just fine on a local machine, as has my co-worker. We recently moved our work desks, so we're now on a different floor of the building, but we only have one router that we're connected to. Since then, connecting to this one server appears to time out more often than not. Occasionally I get through, and the loading is instantaneous. Anyhow, these are the connections we tested:

        1. my computer -> office network -> php pdo -> mysql server A - timeout
        2. my computer -> office network -> mysql cli -> mysql server A - timeout
        3. my computer -> office network -> mysql cli -> mysql server A - timeout
        4. another pc -> office network -> mysql cli -> mysql server A - timeout
        5. my computer -> mobile network -> mysql cli -> mysql server A - ok
        6. my computer -> office network -> ssh server A -> mysql server A - ok
        7. my computer -> office network -> ssh server B -> mysql server A - ok
        8. server B web app -> php pdo -> mysql server A - ok
        9. my computer -> office network -> php pdo -> mysql server B - ok
        10. my computer -> office network -> mysql cli -> mysql server B - ok

    This has really stumped me.
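
    Given that the same clients work over the mobile network and via ssh, the pattern points at the office-network path to server A's MySQL port specifically; a hedged way to narrow it down (host name, user and the choice of port are placeholders for your setup):

        # force a short timeout so failures show up quickly
        mysql --connect-timeout=5 -h serverA -u someuser -p -e "SELECT 1"
        # in a second terminal, watch the TCP handshake from the client side
        sudo tcpdump -ni any host serverA and port 3306

    If the SYN leaves and nothing comes back, suspect filtering between the new floor and server A; if the handshake completes but the MySQL greeting never arrives, a slow server-side reverse-DNS lookup on the new office subnet is a plausible culprit (often worked around with skip-name-resolve in my.cnf - an assumption worth verifying rather than a diagnosis).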

    Read the article

  • Localhost has just stopped working (using XAMPP)

    - by Joe Taylor
    I installed XAMPP to use for local development of a Drupal site. It had been working fine out of the box until now. The main XAMPP localhost welcome menu loads; however, my subdirectory (localhost/drupal) doesn't. It just spins in the browser for ages and nothing happens - just a blank screen. I've tried the hosts-file edit people suggest, but that hasn't worked, and I'm getting no errors, so I'm not sure what to do. Anyone have any ideas what might be wrong? PS: I'm running Windows 7. Edit: log files:

        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
        [Tue Nov 05 20:52:07.242454 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
        [Tue Nov 05 20:52:07.331459 2013] [core:warn] [pid 8432:tid 260] AH00098: pid file C:/xampp/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
        [Tue Nov 05 20:52:07.820487 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
        [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00455: Apache/2.4.4 (Win32) OpenSSL/0.9.8y PHP/5.4.16 configured -- resuming normal operations
        [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00456: Server built: Feb 23 2013 13:07:34
        [Tue Nov 05 20:52:07.898492 2013] [core:notice] [pid 8432:tid 260] AH00094: Command line: 'c:\xampp\apache\bin\httpd.exe -d C:/xampp/apache'
        [Tue Nov 05 20:52:07.905492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00418: Parent: Created child process 7588
        [Tue Nov 05 20:52:08.882548 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
        [Tue Nov 05 20:52:09.467582 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
        [Tue Nov 05 20:52:09.534585 2013] [mpm_winnt:notice] [pid 7588:tid 272] AH00354: Child: Starting 150 worker threads.
        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
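
    The fatal errors in the log point at PHP's memory limit rather than Apache: the template at node--job.tpl.php line 41 tries to allocate roughly 118 MB while memory_limit is 128M. A hedged check and workaround, assuming the stock XAMPP layout with php.ini at C:\xampp\php\php.ini:

        REM from a Windows command prompt: show the active limit
        C:\xampp\php\php.exe -i | findstr /i memory_limit

    Raising the limit in that php.ini and restarting Apache may get the page rendering again (the value is illustrative, not a recommendation):

        memory_limit = 256M

    That said, a theme template allocating over 100 MB usually hints at an unbounded query or loop in the template itself, so the php.ini change is more a diagnostic stopgap than a fix.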

    Read the article

  • Apache certificates for some URLs not working

    - by Vegaasen
    We are having a rather strange problem with an Apache installation. Here is a short summary: currently I'm setting up Apache with https and server certificates. This is fairly easy and works straight out of the box - as expected. This is the configuration for the setup:

        Listen 443
        SSLEngine on
        SSLCertificateFile "/progs/apache/ssl/example-site.no.pem"
        SSLCertificateKeyFile "/progs/apache/ssl/example-site.no.key"
        SSLCACertificateFile "/progs/apache/ssl/ca/example_root.pem"
        SSLCADNRequestFile "/progs/apache/ssl/ca/example_intermediate.pem"
        SSLVerifyClient none
        SSLVerifyDepth 3
        SSLOptions +StdEnvVars +ExportCertData
        RequestHeader set ssl-ClientCert-Subject-CN "%{SSL_CLIENT_S_DN}s"
        RewriteEngine On
        ProxyPreserveHost On
        ProxyRequests On
        SSLProxyEngine On
        ...
        <LocationMatch /secureStuff/$>
            SSLVerifyClient require
            Order deny,allow
            Allow from All
        </LocationMatch>
        ...
        <Proxy balancer://exBalancer>
            Header add Set-Cookie "EX_ROUTE=EB.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
            BalancerMember http://10.0.0.1:7200 route=ee1 retry=300 flushpackets=off keepalive=on
            BalancerMember http://10.0.0.2:7200 route=ee2 retry=300 flushpackets=off keepalive=on status=+H
            ProxySet stickysession=EX_ROUTE scolonpathdelim=Off timeout=10 nofailover=off failonstatus=505 maxattempts=1 lbmethod=bybusyness
            Order deny,allow
            Allow from all
        </Proxy>
        RewriteCond %{REQUEST_URI} !^/index.html [NC]
        RewriteRule ^/(.*)$ balancer://exBalancer/$1 [P,NC]
        ProxyPassReverse / balancer://exBalancer/
        Header edit Set-Cookie "(.*)" "$1;HttpsOnly"
        ...

    So - everything works fine, as expected, for all of the pages that are not covered by the LocationMatch directive. When requesting something that matches it, I'm asked for a certificate (hence the SSLVerifyClient require attribute), and my browser offers all the correct certificates based on the root/intermediate chain. After choosing a certificate and clicking "OK", this is what pops up in the Apache logs:

        [ssl:info] [pid 9530:tid 25] [client :43357] AH01998: Connection closed to child 86 with abortive shutdown (
        [Thu Oct 11 09:27:36.221876 2012] [ssl:debug] [pid 9530:tid 25] ssl_engine_io.c(1171): (70014)End of file found: [client 10.235.128.55:45846] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!]

    And this just spams the logs. What is happening here? I can see this configuration working on my local machine, but not on one of our servers. There are no configuration differences between the servers, only minor application-wise changes. I've tried the following:

        1) Removing CA-certificate checking (works)
        2) Adding the required CA certificate for the whole site (works)
        3) Adding "SSLVerifyClient optional" (does not work)
        4) ++

    Server/application information:

        Local:
        - OpenSSL v.1.0.1x
        - Apache 2.4.3
        - Ubuntu
        - mpm: event
        - every configuration should be turned on

        (Failing) server:
        - OpenSSL 0.9.8e
        - Apache 2.4.2
        - SunOS
        - mpm: worker
        - every configuration should be turned on

    Please let me know if more information is needed; I'll provide it instantly. Brief sum-up:

        - Running Apache 2.4
        - Server certificates work just fine
        - Client certificates for some /Locations do not work, failing with the errors above

    PS: Could it be related to the OpenSSL version and the "renegotiation" issues around TLS/SSLv3?
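
    To take the browser out of the equation, the failing renegotiation can be reproduced with openssl s_client; a hedged sketch (the host is from the config above, and client.pem/client.key are placeholders for a client certificate and key issued under the intermediate):

        # -state prints each handshake step; add -CAfile if the chain is private
        openssl s_client -connect example-site.no:443 -cert client.pem -key client.key -state
        # once connected, type a request for a protected path to force the
        # per-location renegotiation that LocationMatch/SSLVerifyClient triggers:
        #   GET /secureStuff/ HTTP/1.1
        #   Host: example-site.no

    If the handshake dies at renegotiation only on the OpenSSL 0.9.8e box, the hunch at the end of the question is plausible: per-directory SSLVerifyClient forces a renegotiation, and post-CVE-2009-3555 renegotiation behaviour differs considerably between the 0.9.8 and 1.0.1 branches, so testing against a newer OpenSSL build would be a reasonable next step.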

    Read the article

  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a coworker who is responsible for such requests, I was given a series of PDF files. The point-of-sale program that is used, for some reason, answers requests for this type of information in the form of PDF files. The issue: the PDF files look to be in a format that should be easy to copy and paste into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day. The end of this Date/Time information is followed by all of the Total Sales values that should be attached to a Date and Time of transaction. (Note: there are no duplicated Dates in the Date column; i.e., multiple transactions for a day have only one yyyy/mm/dd listed, for the first row but not the following rows.) While it was a huge pain, it was possible, in about four or five steps, to get the single column of data broken out into three columns that matched the PDF. The second page of the PDF file, when copied and pasted into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third being the Hours of the transactions, and the final third being the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales, due to the lack of duplicated Dates in the Date column, as mentioned above. My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites - so far with absolutely nothing remotely close to usable output (both methods have so far produced nothing but blank documents). Before I go back and pester my co-worker into figuring out a way to create a report in some format other than PDF, is there any method by which to take data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet so that it will look the same? I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit of it to let somebody actually see what it is that needs to be dealt with - just let me know. The PDFs are less than 100kb each, so sending them shouldn't be a burden to any interested party.
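
    Before pushing for a different report format, it may be worth trying Poppler's pdftotext with layout preservation, which often keeps tabular PDFs aligned well enough to import as fixed-width text; a hedged sketch (it assumes the poppler-utils package is available, e.g. on a Linux box or via Cygwin, and the file names are placeholders):

        # -layout preserves physical column positions instead of logical reading order
        pdftotext -layout sales.pdf sales.txt
        # eyeball whether the three columns survived
        head -40 sales.txt

    If the columns line up in sales.txt, a spreadsheet's fixed-width text import (or a quick awk pass) can split them, and the missing repeated dates can be filled down afterwards.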

    Read the article

  • Getting 403 on Apache with PHP on Fedora 17

    - by Js Lim
    I put the projects in ~/public_html/project and created a soft link at /var/www/html/project that points to ~/public_html/project. My /etc/httpd/conf/httpd.conf is shown below:

        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 60
        KeepAlive Off
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        <IfModule prefork.c>
            StartServers 8
            MinSpareServers 5
            MaxSpareServers 20
            ServerLimit 256
            MaxClients 256
            MaxRequestsPerChild 4000
        </IfModule>
        <IfModule worker.c>
            StartServers 4
            MaxClients 300
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadsPerChild 25
            MaxRequestsPerChild 0
        </IfModule>
        Listen 80
        Include conf.d/*.conf
        User apache
        Group apache
        ServerAdmin root@localhost
        UseCanonicalName Off
        DocumentRoot "/var/www/html"
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory "/var/www/html">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        # Allow SVN access from public
        <Directory "/var/www/svn">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
        <IfModule mod_userdir.c>
            UserDir disabled
            # UserDir public_html
        </IfModule>
        DirectoryIndex index.html index.html.var
        AccessFileName .htaccess
        <FilesMatch "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>
        TypesConfig /etc/mime.types
        DefaultType text/plain
        <IfModule mod_mime_magic.c>
            # MIMEMagicFile /usr/share/magic.mime
            MIMEMagicFile conf/magic
        </IfModule>
        HostnameLookups Off
        <IfModule mod_dav_fs.c>
            # Location of the WebDAV lock database.
            DAVLockDB /var/lib/dav/lockdb
        </IfModule>
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
        <Directory "/var/www/cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>

    In /var/log/httpd/error_log:

        [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www/html/project
        [error] [client 127.0.0.1] File does not exist: /var/www/html/favicon.ico

    In the browser:

        Forbidden
        You don't have permission to access /project on this server.

    The ls -l result:

        drwxrwxrwx 3 js js 4.0K Nov 1 14:43 public_html/

    and for the project:

        drwxr-xr-x. 6 js js 4.0K Nov 1 16:38 public_html/project/

    I cannot figure out the problem.
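
    On Fedora 17, "Symbolic link not allowed or link target not accessible" is as often SELinux or parent-directory permissions as it is an Options problem; a hedged checklist (the home path is inferred from the ls output, and the boolean/type names are the stock Fedora ones - verify on your system):

        # apache must be able to traverse every parent directory of the link target
        chmod o+x /home/js
        # check the SELinux labels on the real tree
        ls -Zd /home/js/public_html/project
        # label the tree so httpd may read it (assumption: SELinux is enforcing)
        sudo chcon -R -t httpd_sys_content_t /home/js/public_html
        # and/or allow httpd to reach home directories at all
        sudo setsebool -P httpd_enable_homedirs on

    A quick test with sudo setenforce 0 (temporarily permissive) distinguishes the SELinux case from a plain permissions one; if the 403 disappears, relabel as above rather than leaving enforcement off.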

    Read the article

  • Code Camp 2011 – Summary

    - by hajan
    Developers working with Microsoft technologies (and even non-Microsoft ones) had been waiting a whole twelve months for this year's Code Camp 2011 event. Last year's success was big enough to be heard and to influence everything around our developer community and beyond. Code Camp 2011 was nothing less than a resounding success, one which will remain in our memories for a long time to come. Darko Milevski (president of MKDOT.NET UG and SharePoint MVP) said something interesting in the event keynote: up to now we have been looking at the past, saying what we did... now we will focus on the future and on how to develop our community further in the coming days, weeks and months, and I hope for many years... Even though it was held only two days ago (26th of November 2011), I already feel nostalgia for everything that happened there and for the excellent time we spent all together.

    ORGANIZED BY ENTHUSIASTS AND EXPERTS

    Code Camp 2011 was organized by a number of community enthusiasts and experts who unselfishly contributed all their free time to make the best of this event. The event was organized by a well-known community group called MKDOT.NET User Group, a user group known not only in Macedonia but also in many countries abroad. The organization mainly consists of software developers, technical leads and team leads at several well-known companies in Macedonia, as well as Microsoft MVPs.

    SPEAKERS

    There were 24 speakers across five parallel tracks. At Code Camp 2011 we had two groups of speakers: professional experts in various technologies, and student speakers. The interesting new thing here was the student speakers, who drew a lot of attention, especially from other students interested to see what their colleagues would speak about and how they use Microsoft technologies in different coding scenarios and practices, across different topics. Among the professional speakers there were 7 Microsoft MVPs: two ASP.NET/IIS MVPs, two C# MVPs, and one MVP each in SharePoint, SQL Server and Exchange Server. I must say that besides the MVP speakers, who definitely did a great job as always... there were other excellent speakers as well, speaking on various technologies such as: Web Development, Windows Phone Development, XNA, Windows 8, Games Development, Entity Framework, Event-driven programming, SOLID, SQLCLR, T-SQL, etc.

    SESSIONS

    There were 25 sessions, mainly all related to Microsoft technologies, ranging from Windows 8, WP7 and ASP.NET to Games Development, XNA and Event-driven programming. Sessions ran in five parallel tracks, named the Red, Yellow, Green, Blue and Student tracks - five presentations in each track, each at level 300 or 400.

    MY SESSION (ASP.NET MVC Best Practices)

    I must say that of the many speaking engagements I have had, this was one of my best performances, and I definitely set new attendance records for my sessions, and probably overall. I spoke on the topic of ASP.NET MVC Best Practices, showing tips, tricks, guidelines and best practices on what to use and what to avoid when developing with one of the best web development frameworks around today, ASP.NET MVC. I had approximately 350+ attendees; the hall was so full that there was no room left even to stand. Besides .NET developers, there were a lot of developers from other technology backgrounds, who also received the presentation very well, and I really hope I gave them reason to think about ASP.NET as one of the best options for web development today (if you ask me, it's the best one ;-)). I included 10 tips on using ASP.NET MVC, each of them followed by a demo. Besides these 10 tips, I briefly introduced the concept of ASP.NET MVC for those who haven't worked with the framework, and gave some bonus tips at the end. I must say there was a lot of laughter at some of the funny sentences I delivered, like "If you code ASP.NET MVC, girls will love you more" - the same goes for girls, only replace girls with boys :). [LINK TO SESSION WILL GO HERE, ONCE SESSIONS ARE AVAILABLE ON MK CODECAMP WEBSITE]

    VOLUNTEERS

    Without dedicated organizers and volunteers, such events would not be able to gather hundreds of attendees in one place and still stay perfectly organized down to the smallest details. I would like to dedicate this space in my blog to them and to say one big THANK YOU for supporting us before the event and during the whole day of the event. With such young and dedicated volunteers, we could achieve nothing but great results. THANK YOU EVERYONE FOR YOUR CONTRIBUTION!

    NETWORKING

    One of the main reasons why we do such events is to gather all the professionals in one place. Networking is what everyone wants, because through networking we can meet incredible people in one place. It is an amazing feeling to share your knowledge with others and exchange thoughts on various topics - to meet and talk to interesting people. I had very special moments with many attendees, especially after my presentation. A special thank you to everyone who came to meet me in person, whether to ask a question, to say congrats for my session, or simply to meet me and smile :)... everything counts! Thank you!

    TWITTER

    During the event, Twitter was one of the most useful event-wide communication tools: everyone could tweet with the hashtag #mkcodecamp or #mkdotnet and say what he or she wanted about the current state of affairs and the happenings of the moment... In my next blog post I will list the craziest tweets posted at this event...

    FUTURE OF MKDOT.NET

    With such a strong community around MKDOT.NET, the future seems very bright. The initial plan is to have sub-groups for several technologies; however, all these sub-groups will belong to the MKDOT.NET UG, which will be, in a sense, the head of the sub-groups. We are doing this to provide a better division by technology and to organize ourselves better, since our community is very big - around 500 members in MKDOT.NET. We will have five sub-groups:

        - Web User Group (Lead: Hajan Selmani - me)
        - Mobile User Group (Lead: Filip Kerazovski)
        - Visual C# User Group (Lead: Vekoslav Stefanovski)
        - SharePoint User Group (Lead: Darko Milevski)
        - Dynamics User Group (Lead: Vladimir Senih)

    SUMMARY

        Online registered attendees: ~1,200
        Event attendees: ~800
        Number of members in the organization: 40+
        Organized by: MKDOT.NET User Group
        Number of tracks: 5
        Number of speakers: 24
        Number of sessions: 25
        Event official website: http://codecamp.mkdot.net
        Total number of sponsors: 20
        Platinum sponsors: Microsoft, INETA, Telerik
        Place held: FON University
        City and country: Skopje, Macedonia

    THANK YOU FOR BEING PART OF THE BEST EVENT IN MACEDONIA, CODE CAMP 2011. Regards, Hajan

    Read the article

  • Communities - The importance of exchange and discussion

    Communication with your environment is an essential part of everyone's life. And it doesn't matter whether you are actually living in a rural area in the middle of nowhere, within the pulsating heart of a big city, or, in my case, on a wonderful island in the Indian Ocean. The ability to exchange your thoughts, your experience and your worries with another person helps you to get different points of view and new ideas on how to resolve an issue you might be confronted with.

    Benefits of community work

    What happens to be common sense in your daily life also applies to your work environment. Working in IT, or ICT as it is called in Mauritius, requires a lot of reading and learning - not only during your lectures at the university, but with your colleagues on a project assignment, and hopefully with 'unknown' pals in the universe of online communities. At least I can say that I learned quite a lot from other developers' code, their responses in various forums, their numerous blog articles, and while attending local user group meetings. When I started to work as a professional software developer (or engineer, some may say) years ago, I immediately checked for communities around the programming language, the database technology, and other vital areas of software development in general. Luckily, it wasn't too difficult to find them. My employer had a subscription to the monthly magazines and newsletters of a national organisation which also ran the biggest forum in that area. Getting in touch with other developers and reading about their common problems, but also their solutions, was a huge benefit to my growth.

    Image courtesy of Michael Kappel (CC BY-NC 2.0)

    Active participation and regular contribution to this community gave me some nice advantages, too. Within three years I was listed as a conference speaker at the annual developers' conference and provided several sessions on different topics during consecutive years. Back in 2004, I took over the responsibility and management of the monthly meetings of a regional user group, and organised it for more than two years. Furthermore, I was invited to the newly founded community program of Microsoft Germany (Community Leader/Insider Program - CLIP). My website on Active FoxPro Pages was nominated in the second batch of online communities. Thanks to my community work and the advice I provided to others, I had the honour of being awarded Microsoft Most Valuable Professional (MVP) - Visual Developer for Visual FoxPro in the years 2006 and 2007. It was a great experience to meet other like-minded people, and I'm really grateful for that. Just in case, more details are listed in my Curriculum Vitae. But this all changed when I moved to Mauritius...

    Cyber island Mauritius?

    During the first months in Mauritius I was way too busy to think about community activities at all. First of all, there was the new company that had to be set up and the new staff to be trained, and of course the communication workflows and so on with the project managers back in Germany had to be sorted out, too. Second, I had to get a grip on my private matters, like getting the basics for my new household and exploring the neighbourhood, and last but not least I needed a break from the hectic and intensive work prior to my departure. As soon as the sea literally calmed down, I started to have conversations with my colleagues about communities and user groups. Sadly, it turned out that there were none, or at least no one was aware of any at that time. Oh oh, what did I do?

    Anyway, having this kind of background and very positive experience with offline and online activities, I decided for myself that some day I was going to found a community in Mauritius for all kinds of IT/ICT-related fields. The main focus might be on software development, but not on a certain technology or methodology. It was clear to me that it should be an open infrastructure, and that anyone would be welcome to join, to experience, to share and to contribute if they would like to. That was the idea at the time... Ok, fast-forward to recent events. At the end of October 2012 I was invited to an event called Open Days, organised by Microsoft Indian Ocean Islands together with other local partners and resellers. There I got in touch with local Technical Evangelist Arnaud Meslier, and we had a good conversation on communities during the breaks. Eventually I left a good impression on him, as we now have chats on Facebook or Skype from time to time. Well, seeing that my personal and professional surroundings have settled and are running smoothly, having that great exchange and contact with Microsoft IOI (again), and being really eager to revive my intentions from 2007, I recently founded a new community: the Mauritius Software Craftsmanship Community - #MSCC.

    It took me a while to settle on the name, but it was obvious that the community should not be attached to one single technology - like, say, a .NET user group, Oracle developers, or Joomla friends (these are fictitious names). There are several other reasons why I came up with 'Craftsmanship' as the core topic of this community. The expression 'engineering' didn't feel right for the fields covered. Software development, in all its facets, is a craft, and therefore demands a lot of practice but also guidance from more experienced developers. It also includes the process of designing, modelling and drafting ideas; it has to deal with various types of tests and test methodologies; and of course it should be focused on flexible and agile ways of working, in order to meet - and exceed - a customer's request for a solution. Next, I was looking for an easy way to handle the organisation of events and meeting appointments. Having used all kinds of social media platforms like Google+, LinkedIn, Facebook, Xing, etc., I was never really confident about their event-handling features. More or less by chance I stumbled upon Meetup.com, and in combination with the other channels (G+ Communities, FB Pages or Groups) I am looking forward to advertising and managing all future activities here: Mauritius Software Craftsmanship Community.

    This is a community for those who care about and are proud of what they do - for those developers, regardless of how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know there are not many 'craftsmen' yet, but it's a start... Let's see how it looks by the end of the year. There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and stay informed on the latest updates. And last but not least, there will be a Trello workspace to collect and share ideas and provide downloads of slides, etc.

    Sharing is caring! As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius. Following is an overview of the current networks:

        - Twitter - latest updates and quickies
        - Google+ - community channel
        - Facebook - community page
        - LinkedIn - community group
        - Trello - collaboration workspace to share and develop ideas

    Hopefully this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC among your colleagues, your friends and other interested 'geeks'.

    Your future looks bright

    Running and participating in a user group or any kind of community usually provides quite a number of advantages for anyone. On the one hand, it is very joyful for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects or of recent problems they had to tackle; on the other hand, there are lots of companies that have various support programs or sponsorships tailored especially for user groups. At the moment I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and, as said, companies provide all kinds of goodies, free books, and sometimes even licenses for communities. Meeting other software developers or IT folks also widens your view of the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and, of course, employees. Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here on the blog, or join the conversations in the above-mentioned social networks. Let's get this community up and running, my fellow Mauritians!

    Read the article

  • Goodbye my beloved Nexus One, hello Windows Phone 7

    - by George Clingerman
    Last night my wife's Nexus One finally bit the dust. You may not know, but I've been nursing her Nexus One along for quite a while after her screen shattered. I was able to replace the screen on my own (go me!), but little quirks have been popping up and the phone was quickly deteriorating. Lately it's been the power button: Wifey would often have to press it several times to get her phone to turn on, and last night it just wouldn't wake up again. I took it apart and tried my best to see if I could somehow make it live once again, but no luck this time. It was finally ready to retire. We looked at first for a replacement phone for her, but she wasn't really seeing anything she liked. So I decided to make the ultimate sacrifice and offer up my much-loved Nexus One, and I would then get a new Windows Phone 7 device. I love T-Mobile for my service, so my choices were immediately limited to basically just a single phone: the HTC HD7. I read reviews, and they were all over the board, from people loving the phone to people hating it, but I decided, hey, why not, let's take this plunge. And I did. I've only had the phone for about two days now, so below is my list of first-reaction pros/cons. These are basically things I've missed or things I've noticed that I really like about my new Windows Phone.

    Cons:

        * No Google Talk - I used this a LOT on my Nexus. I've found an application called "Flory", but it's just an OK substitute, not the same as the full-featured GTalk I had on my Nexus.
        * Seesmic is limited - I loved the way Seesmic worked on my Nexus. It was my mobile twitter client of choice; everything about it worked really well. On Windows Phone 7 it's just OK. I don't get notification of new tweets, and it takes several clicks to even see a new tweet. It definitely needs some more development before it has the same features it did on my Nexus.
        * Buttons don't give great feedback - I'd read this in the reviews of the HTC HD7 and I'm finding it true myself. Pressing the buttons on the side of the phone and the power button on the top is finicky, and I have to be looking at my phone to make sure I actually got them to press.
        * Web browsing is slow - I'm not sure what's up with this. I'm connected to my wireless network at my house, but browsing is noticeably slower on my WP7 device than on my Nexus. I even switched back to verify, and it's definitely true. Retrieving tweets, hitting up the XNA forums and general web activities are all much slower on my WP7. I can't think of any reason this would be true, but it almost seems like it's not using my wireless for everything.

    Pros:

        * It's pretty - the phone is really gorgeous. I loved the form of my Nexus One, but the HTC HD7 is just as pretty, maybe even prettier! It's got a nice large, bright screen. It feels good in my hand. And it even has a little kickstand to set the phone up for movie watching. Definitely a gorgeous phone.
        * LIVE integration - I lost a lot of nice integration with Google services, but I gained a lot of integration with LIVE services that I also use. Now I can see when I get new GMail messages AND Hotmail messages. And having the Xbox LIVE integration is admittedly cool as well.
        * Tile notifications rock - the Windows Phone 7 commercials are TRYING to get this message out, but they're doing a really poor job of it. Tile notifications really do save you from your phone. I have a whole little mini informational dashboard at a glance: I unlock my phone and at a glance I can see new IMs, new mail messages, software updates, etc., all just letting me know in the tiles I have arranged. That's pretty cool.
        * The interface works really well - I feel super hip and cool swiping and sliding things around on my Windows Phone 7. Everything works that way, and it's great and fast and really good looking. I'm all about me feeling cool.
        * I'm gaming more - I had gotten a few games on my Nexus One, but there really weren't a lot of good developers flocking to the service. Just browsing through the Windows Phone 7 marketplace, I'm already seeing a ton of games I want to try and buy. And I sat down and beat Pixel Man 0 just yesterday on my phone. I'm already gaming more than I did on my Nexus One.
        * Netflix integration is fantastic - it works just like it does on my Xbox 360, and I love having this feature on my phone.
        * It's basically a Zune - I've been taking my Zune to work and listening to music off of it while I code. I no longer need to take it with me; now I just sync songs onto my phone and it's my new Zune. I freaking love that. One less device to carry around.

    All in all, my cons really have little to do with the phone (just the buttons and the web browsing) and more to do with the applications needing to catch up a bit to what I'm used to. And the pros ARE phone-specific, so I'm seeing that as a good sign that I'm going to be very happy with my Windows Phone 7. So Wifey is happy having her Nexus One again, I'm happy with my new Windows Phone 7. Life is good. Now I just need to make a game to pay for it....

    Read the article

  • Why Is Vertical Monitor Resolution So Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary, or is there something more at work? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it's convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it's usual enough that it's viewable.) Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists, but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing. First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there's 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some codecs other than h.264, but these are slowly being phased out, with h.264 having essentially 'won' the format war and all computers being outfitted with hardware codecs for it.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn't everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn't magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they'd rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415kpix) to 1280×720 (922kpix), and then again from 1280×720 to 1920×1080 (2MP). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.

    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen and control how bright each subpixel is. Because historically, for reasons I don't really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry-standard line drivers became drivers with 360 line outputs (1 per subpixel). If you were to tear down your 1920×1080 screen, I would put money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that's 16:9. Guess how obvious that resolution choice was back when 16:9 was 'invented'.

    Then there's the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be a wide-screen view, and this is where the great anamorphic revolution of the '60s came from, when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again, this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media-consumption device evolved, people didn't necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most 'immersion factor' if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there's more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not the number of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up, and that's what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh…

    I think that about covers all the major aspects here. There's more, of course: bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today's considerations, this is about all you need to know.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Disaster, or Migration?

    - by Rob Farley
    This post is in two parts – technical and personal. And I should point out that it's prompted in part by this month's T-SQL Tuesday, hosted by Allen Kinsel.

    First, the technical:

    I've had a few conversations with people recently about migration – moving a SQL Server database from one box to another (sometimes, but not primarily, involving an upgrade). One question that tends to come up is that of downtime. Obviously there will be some period of time between the old server being available and the new one. The way that most people seem to think of migration is this:

        1. Build a new server.
        2. Stop people from using the old server.
        3. Take a backup of the old server.
        4. Restore it on the new server.
        5. Reconfigure the client applications (or alternatively, configure the new server to use the same address as the old).
        6. Bring the new server online.

    There are other things involved, such as testing, of course. But this is essentially the process that people tell me they're planning to follow. The bit that I want to look at today (as you've probably guessed from my title) is the "backup and restore" section.

    If a SQL database is using the Simple Recovery Model, then the only restore option is the last database backup. This backup could be full or differential. The transaction log never gets backed up in the Simple Recovery Model. Instead, it truncates regularly to stay small. One that's using the Full Recovery Model (or Bulk-Logged) won't truncate its log – the log must be backed up regularly. This provides the benefit of having a lot more options available for restores. It's a requirement for most systems of High Availability, because if you're making sure that a spare box is up and running, ready to take over, then you have to be interested in the logs that are happening on the current box, rather than truncating them all the time. A High Availability system such as Mirroring, Replication or Log Shipping will initialise the spare machine by restoring a full database backup (and maybe a differential backup if available), and then any subsequent log backups. Once the secondary copy is close, transactions can be applied to keep the two in sync.

    The main aspect of any High Availability system is to have a redundant system that is ready to take over. So the similarity for migration should be obvious. If you need to move a database from one box to another, then introducing a High Availability mechanism can help. By turning on the Full Recovery Model and then taking a backup (so that the now-interesting logs have some context), logs start being kept, and are therefore available for getting the new box ready (even if it's an upgraded version). When the migration is ready to occur, a failover can be done, letting the new server take over the responsibility of the old, just as if a disaster had happened. Except that this is a planned failover, not a disaster at all. There's a fine line between a disaster and a migration. Failovers can be useful in patching, upgrading, maintenance, and more. Hopefully, even an unexpected disaster can be seen as just another failover, and there can be an opportunity there – perhaps to get some work done on the principal server to increase robustness. And if I've just set up a High Availability system for even the simplest of databases, it's not necessarily a bad thing. :)

    So now the personal:

    It's been an interesting time recently... June has been somewhat odd. A court case with which I was involved got resolved (through mediation).
    I can't go into details, but my lawyers tell me that I'm allowed to say how I feel about it. The answer is 'lousy'. I don't regret pursuing it as long as I did – but in the end I had to make a decision regarding the commerciality of letting it continue, and I'm going to look forward to the days when the kind of money I spent on my lawyers is small change. Mind you, if I had a similar situation with an employer, I'd do the same again, but that doesn't really stop me feeling frustrated about it.

    The following day I had to fly to country Victoria to see my grandmother, who wasn't expected to last the weekend. She's still around a week later as I write this, but her 92-year-old body has basically given up on her. She's been a Christian all her life, and is looking forward to eternity. We'll all miss her though, and it's hard to see my family grieving.

    Then on Tuesday, I was driving back to the airport with my family to come home, when something really bizarre happened. We were travelling down the freeway, having just pulled out to go past a truck (farm-truck sized, not a semi-trailer), when a car-sized mass of metal fell off it. It was something like an industrial air-conditioner, but from where I was sitting, it was just a mass of spinning metal, like something out of a movie (one friend described it as "holidays by Michael Bay"). Somehow, and I really don't know how, the part of it nearest us bounced high enough to clear the car, and there wasn't even a scratch. We pulled over to check, and I was just thanking God that we'd changed lanes when we had, and that we remained unharmed. I had all kinds of thoughts about what could've happened if we'd had something that size land on the windscreen...

    All this has drilled home that while I feel that I haven't provided as well for the family as I could've done (like by pursuing an expensive legal case), I shouldn't even consider that I have proper control over things. I get to live life, and make decisions based on what I feel is right at the time. But I'm not going to get everything right, and there will be things that feel like disasters - some which could've been in my control and some which are very much beyond my control. The case feels like something I could've pursued differently, a disaster that could've been avoided in some way. Gran dying is lousy, of course. An accident on the freeway would have been awful. I need to recognise that the worst disasters are ones that I can't affect, and that I need to look at things in context – perhaps seeing everything that happens as a migration instead. Life is never the same from one day to the next. Every event has a before and an after – sometimes it's clearly positive, sometimes it's not. I remember good events in my life (such as my wedding), and bad (such as the loss of my father when I was ten, or the back injury I had eight years ago). I'm not suggesting that I know how to view everything from the "God works all things for good" perspective, but I am trying to look at last week as a migration of sorts. Those things are behind me now, and the future is in God's hands. Hopefully I've learned things, and will be able to live accordingly. I've come through this time now, and even though I'll miss Gran, I'll see her again one day, and the future is bright.
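
    Returning to the technical half of the post: the initialise-then-failover sequence it describes can be sketched from the command line; a hedged illustration using sqlcmd (server names, the database name and the share path are placeholders, and the WITH NORECOVERY / WITH RECOVERY pattern is the standard log-shipping idiom rather than anything specific to this post):

        # on the old box: one full backup, then log backups on a schedule
        sqlcmd -S oldbox -Q "BACKUP DATABASE Sales TO DISK='\\backupsrv\share\sales_full.bak'"
        sqlcmd -S oldbox -Q "BACKUP LOG Sales TO DISK='\\backupsrv\share\sales_01.trn'"
        # on the new box: restore everything, leaving the database ready for more logs
        sqlcmd -S newbox -Q "RESTORE DATABASE Sales FROM DISK='\\backupsrv\share\sales_full.bak' WITH NORECOVERY"
        sqlcmd -S newbox -Q "RESTORE LOG Sales FROM DISK='\\backupsrv\share\sales_01.trn' WITH NORECOVERY"
        # cutover: stop writes, take and restore the final tail-log backup, then:
        sqlcmd -S newbox -Q "RESTORE DATABASE Sales WITH RECOVERY"

    The final RESTORE ... WITH RECOVERY restores nothing new; it rolls the database forward and brings it online, which is exactly the planned-failover moment the post describes.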

    Read the article

  • JSF index out of bounds exception when submitting a form

    - by selvin
    When I click the submit button in a JSF form, the following exception occurs. It reports an IndexOutOfBoundsException, but I did not use any ArrayList in the associated code. Is this a bug? What should I do to get rid of this error? Mojarra: 2.0.2 FCS with PrimeFaces 2.2, JSF: 2.0, NetBeans IDE 6.8, Glassfish Domain V3. Form Code: <p:panel id="jobres" style="min-width: 200px" header="Reservation" widgetVar="jres" closable="true" toggleable="true" > <h:form id="arj" prependId="false" style="width:550px;max-height:400px;overflow:auto;"> <p:tooltip global="true"/> <h:panelGrid columns="2" > <p:panel style="min-width: 220px"> <h:outputLabel value="1.Job type:"/> </p:panel> <h:panelGroup> <h:messages id="aerr"/> <h:selectOneMenu title="Choose a Jobtype" value="#{arjob.jobtype}"> <f:selectItem itemLabel="Sequential" itemValue="sequential"/> <f:selectItem itemLabel="Parallel" itemValue="parallel"/> </h:selectOneMenu> </h:panelGroup> <p:panel> <h:outputLabel value="2.Executable: *"/> </p:panel> <h:panelGroup> <p:fileUpload id="aexeupload" fileUploadListener="#{arjob.chooseListener}" auto="true" update="adlist :erdialog" description="Resource Files"> </p:fileUpload> <br/> <h:panelGroup id="aexelistwrapper"> <p:dataList var="fileList" type="ordered" id="adlist" value="#{arjob.fexelist}"> <p:column> #{fileList}&nbsp; <p:commandLink ajax="true" update="aexelistwrapper" actionListener="#{arjob.removeExe(fileList)}"> <p:graphicImage value="images/closebar.png"/> </p:commandLink> </p:column> </p:dataList> </h:panelGroup> </h:panelGroup> <p:panel> <h:outputLabel value="3.Argument(s):"/> </p:panel> <h:panelGroup> <u style="color:orange"> <i> <p:inplace emptyLabel="Add Arguments" onEditUpdate="aarglist"> <h:inputText title="Enter the arguments" id="aiparg" value="#{arjob.args}"> <f:ajax event="valueChange"/> </h:inputText> <p:commandButton update="aarglistwrapper erdialog" value="add" actionListener="#{arjob.addArg}"/> </p:inplace> </i> </u> </h:panelGroup> <h:panelGroup> </h:panelGroup> <h:panelGroup> <h:panelGrid id="aarglistwrapper"> <p:dataList id="aarglist" type="ordered" var="args" value="#{arjob.arglist}"> <p:column id="col2"> #{args}&nbsp; <p:commandLink ajax="true" update="arj:arglistwrapper" actionListener="#{arjob.removeArgs(args)}"> <p:graphicImage title="remove" value="images/closebar.png"/> </p:commandLink> </p:column> </p:dataList> </h:panelGrid> </h:panelGroup> <p:panel> <h:outputLabel value="4.InputFile(s): *"/> </p:panel> <h:panelGroup> <p:fileUpload id="ainpupload" fileUploadListener="#{arjob.inputChooseListener}" auto="true" update="aipfilelistwrapper :erdialog" description="Resource Files"> </p:fileUpload> <br/> <h:panelGroup id="aipfilelistwrapper"> <p:dataList var="ipfile" type="ordered" id="aipflist" value="#{arjob.finlist}"> <p:column> #{ipfile}&nbsp; <p:commandLink ajax="true" update="aipfilelistwrapper" actionListener="#{arjob.removeInfile(ipfile)}"> <p:graphicImage value="images/closebar.png"/> </p:commandLink> </p:column> </p:dataList> </h:panelGroup> </h:panelGroup> <h:panelGroup> <p:panel > 5)Output File(s): </p:panel> </h:panelGroup> <h:panelGroup> <u style="color:orange"> <i> <p:inplace emptyLabel="Add file name" id="aipexe"> <h:inputText title="Enter the output filenames" id="aexe" value="#{arjob.ofilename}"> <f:ajax event="valueChange"/> </h:inputText> <p:commandButton update="adoutlist :erdialog" value="add" actionListener="#{arjob.addOutfile}"/> </p:inplace> </i> </u> </h:panelGroup> <h:panelGroup> </h:panelGroup> <h:panelGroup> <h:panelGrid id="afilelistwrapper"> 
<p:dataList id="adoutlist" type="ordered" var="ofile" value="#{arjob.foutlist}"> <p:column id="acol"> #{ofile}&nbsp; <p:commandLink ajax="true" update="afilelistwrapper" actionListener="#{arjob.removeOutfile(ofile)}"> <p:graphicImage value="images/closebar.png"/> </p:commandLink> </p:column> </p:dataList> </h:panelGrid> </h:panelGroup> <p:panel> <h:outputLabel value="6) Operating System"/> </p:panel> <h:panelGroup> <h:selectOneMenu title="Select an OperatingSystem" value="#{arjob.os}"> <f:selectItem itemLabel="CentOS Release 5.2" itemValue="Cent OS 5.2"/> <f:selectItem itemLabel="RHEL Server Release 5" itemValue="RHEL server 5"/> <f:selectItem itemLabel="RHEL Server Release 5.2" itemValue="RHEL server 5.2"/> </h:selectOneMenu> </h:panelGroup> <h:panelGroup> <p:panel> <h:outputLabel value="7) Physical Memory:"/> </p:panel> </h:panelGroup> <h:panelGroup> <p:spinner min="0" style="width: 100px" stepFactor="10" value="#{arjob.mem}"> </p:spinner>(MB) </h:panelGroup> <h:panelGroup> <p:panel> <h:outputLabel value="8) Disk Space:"/> </p:panel> </h:panelGroup> <h:panelGroup> <p:spinner min="0" style="width: 100px" stepFactor="10" value="#{arjob.diskspace}"> </p:spinner>(MB) </h:panelGroup> <h:panelGroup> <p:panel> <h:outputLabel value="9) CPU Mhz:"/> </p:panel> </h:panelGroup> <h:panelGroup> <p:spinner min="0" style="width: 100px" stepFactor="10" value="#{arjob.cpumhz}"> </p:spinner>(Mhz) </h:panelGroup> <p:panel> <h:outputLabel value="10) Start Time:"/> </p:panel> <h:panelGroup> <p:inputMask title="(YYYY-MM-DD HH:MM:SS)" mask="9999-99-99 99:99:99" value="#{arjob.startt}"> </p:inputMask> </h:panelGroup> <p:panel> <h:outputLabel value="11) End Time:"/> </p:panel> <h:panelGroup> <p:inputMask title="(YYYY-MM-DD HH:MM:SS)" mask="9999-99-99 99:99:99" value="#{arjob.endt}"> <p:ajax event="valueChange"/> </p:inputMask> </h:panelGroup> <p:panel> <h:outputLabel value="12) LRMS type"/> </p:panel> <h:panelGroup> <h:selectOneMenu value="#{arjob.lrms}"> <f:selectItem itemLabel="PBS" itemValue="PBS"/> <f:selectItem itemLabel="SGE" itemValue="SGE"/> </h:selectOneMenu> </h:panelGroup> <h:panelGroup> <p:panel> <h:outputLabel value="13)Number of Nodes: *"/> </p:panel> </h:panelGroup> <h:panelGroup> <h:panelGroup> <p:spinner style="width: 100px" min="1" max="100" value="#{arjob.numnodes}"> </p:spinner> </h:panelGroup> </h:panelGroup> <h:panelGroup> </h:panelGroup> <h:panelGroup> <p:commandButton ajax="false" value="Submit" action="#{arjob.jobSubmitAction}"/> </h:panelGroup> </h:panelGrid> </h:form> <p:draggable for="jobres" handle=".ui-panel-titlebar"/> </p:panel> Exception: SEVERE: javax.faces.FacesException: Unexpected error restoring state for component with id j_idt7. Cause: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0. 
at com.sun.faces.application.view.StateManagementStrategyImpl$2.visit(StateManagementStrategyImpl.java:239) at com.sun.faces.component.visit.FullVisitContext.invokeVisitCallback(FullVisitContext.java:147) at javax.faces.component.UIComponent.visitTree(UIComponent.java:1446) at javax.faces.component.UIComponent.visitTree(UIComponent.java:1457) at javax.faces.component.UIComponent.visitTree(UIComponent.java:1457) at javax.faces.component.UIComponent.visitTree(UIComponent.java:1457) at com.sun.faces.application.view.StateManagementStrategyImpl.restoreView(StateManagementStrategyImpl.java:223) at com.sun.faces.application.StateManagerImpl.restoreView(StateManagerImpl.java:177) at com.sun.faces.application.view.ViewHandlingStrategy.restoreView(ViewHandlingStrategy.java:131) at com.sun.faces.application.view.FaceletViewHandlingStrategy.restoreView(FaceletViewHandlingStrategy.java:430) at com.sun.faces.application.view.MultiViewHandler.restoreView(MultiViewHandler.java:143) at javax.faces.application.ViewHandlerWrapper.restoreView(ViewHandlerWrapper.java:288) at com.sun.faces.lifecycle.RestoreViewPhase.execute(RestoreViewPhase.java:199) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:110) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:312) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:343) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215) at org.primefaces.webapp.filter.FileUploadFilter.doFilter(FileUploadFilter.java:79) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at 
com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at java.util.ArrayList.RangeCheck(ArrayList.java:547) at java.util.ArrayList.get(ArrayList.java:322) at javax.faces.component.AttachedObjectListHolder.restoreState(AttachedObjectListHolder.java:161) at javax.faces.component.UIComponentBase.restoreState(UIComponentBase.java:1427) at com.sun.faces.application.view.StateManagementStrategyImpl$2.visit(StateManagementStrategyImpl.java:231) ... 45 more

    Read the article

  • JSON error Caused by: java.lang.NullPointerException

    - by user3821853
    I'm trying to make a register page on Android using JSON. Every time I press the register button on the AVD, I get the error "Unfortunately, database has stopped". I have an error in my logcat that I cannot understand. This is my code. Please, someone help me. This is my Register.java: import android.app.Activity; import android.app.ProgressDialog; import android.os.AsyncTask; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import java.util.ArrayList; import java.util.List; public class Register extends Activity implements OnClickListener{ private EditText user, pass; private Button mRegister; // Progress Dialog private ProgressDialog pDialog; // JSON parser class JSONParser jsonParser = new JSONParser(); //php register script //localhost : //testing on your device //put your local ip instead, on windows, run CMD > ipconfig //or in mac's terminal type ifconfig and look for the ip under en0 or en1 // private static final String REGISTER_URL = "http://xxx.xxx.x.x:1234/webservice/register.php"; //testing on Emulator: private static final String REGISTER_URL = "http://10.0.2.2:1234/webservice/register.php"; //testing from a real server: //private static final String REGISTER_URL = "http://www.mybringback.com/webservice/register.php"; //ids private static final String TAG_SUCCESS = "success"; private static final String TAG_MESSAGE = "message"; @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.register); user = (EditText)findViewById(R.id.username); pass = (EditText)findViewById(R.id.password); mRegister = (Button)findViewById(R.id.register); mRegister.setOnClickListener(this); } @Override public void onClick(View v) { // TODO Auto-generated method stub new CreateUser().execute(); } class CreateUser extends AsyncTask<String, String, String> { @Override protected void onPreExecute() { super.onPreExecute(); pDialog = new ProgressDialog(Register.this); pDialog.setMessage("Creating User..."); pDialog.setIndeterminate(false); pDialog.setCancelable(true); pDialog.show(); } @Override protected String doInBackground(String... 
args) { // TODO Auto-generated method stub // Check for success tag int success; String username = user.getText().toString(); String password = pass.getText().toString(); try { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("username", username)); params.add(new BasicNameValuePair("password", password)); Log.d("request!", "starting"); //Posting user data to script JSONObject json = jsonParser.makeHttpRequest( REGISTER_URL, "POST", params); // full json response Log.d("Registering attempt", json.toString()); // json success element success = json.getInt(TAG_SUCCESS); if (success == 1) { Log.d("User Created!", json.toString()); finish(); return json.getString(TAG_MESSAGE); }else{ Log.d("Registering Failure!", json.getString(TAG_MESSAGE)); return json.getString(TAG_MESSAGE); } } catch (JSONException e) { e.printStackTrace(); } return null; } protected void onPostExecute(String file_url) { // dismiss the dialog once product deleted pDialog.dismiss(); if (file_url != null){ Toast.makeText(Register.this, file_url, Toast.LENGTH_LONG).show(); } } } } this is JSONparser.java import android.util.Log; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.NameValuePair; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.methods.HttpPost; import org.apache.http.client.utils.URLEncodedUtils; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import java.util.List; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(final String url) { // Making HTTP request try { // Construct the client and the HTTP request. DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); // Execute the POST request and store the response locally. HttpResponse httpResponse = httpClient.execute(httpPost); // Extract data from the response. HttpEntity httpEntity = httpResponse.getEntity(); // Open an inputStream with the data content. is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { // Create a BufferedReader to parse through the inputStream. BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); // Declare a string builder to help with the parsing. StringBuilder sb = new StringBuilder(); // Declare a string to store the JSON object data in string form. String line = null; // Build the string until null. while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } // Close the input stream. is.close(); // Convert the string builder data to an actual string. json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // Try to parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // Return the JSON Object. 
return jObj; } // function to get json from url // by making HTTP POST or GET method public JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) { // Making HTTP request try { // check for request method if(method == "POST"){ // request method is POST // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params)); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); }else if(method == "GET"){ // request method is GET DefaultHttpClient httpClient = new DefaultHttpClient(); String paramString = URLEncodedUtils.format(params, "utf-8"); url += "?" + paramString; HttpGet httpGet = new HttpGet(url); HttpResponse httpResponse = httpClient.execute(httpGet); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try to parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } and this is my error: 08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/Buffer Error? Error converting result java.lang.NullPointerException: lock == null 08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/JSON Parser? Error parsing data org.json.JSONException: End of input at character 0 of 08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database W/dalvikvm? threadid=15: thread exiting with uncaught exception (group=0xb0f37648) 08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database E/AndroidRuntime? 
FATAL EXCEPTION: AsyncTask #4 java.lang.RuntimeException: An error occured while executing doInBackground() at android.os.AsyncTask$3.done(AsyncTask.java:299) at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352) at java.util.concurrent.FutureTask.setException(FutureTask.java:219) at java.util.concurrent.FutureTask.run(FutureTask.java:239) at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) at java.lang.Thread.run(Thread.java:841) Caused by: java.lang.NullPointerException at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:108) at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:74) at android.os.AsyncTask$2.call(AsyncTask.java:287) at java.util.concurrent.FutureTask.run(FutureTask.java:234)             at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)             at java.lang.Thread.run(Thread.java:841) 08-18 23:40:02.501 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented 08-18 23:40:02.591 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented 08-18 23:40:02.981 2000-2000/com.example.blackcustomzier.database E/WindowManager? Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... R......D 0,0-1026,288} that was originally added here android.view.WindowLeaked: Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... 
R......D 0,0-1026,288} that was originally added here at android.view.ViewRootImpl.<init>(ViewRootImpl.java:345) at android.view.WindowManagerGlobal.addView(WindowManagerGlobal.java:239) at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:69) at android.app.Dialog.show(Dialog.java:281) at com.example.blackcustomzier.database.Register$CreateUser.onPreExecute(Register.java:85) at android.os.AsyncTask.executeOnExecutor(AsyncTask.java:586) at android.os.AsyncTask.execute(AsyncTask.java:534) at com.example.blackcustomzier.database.Register.onClick(Register.java:70) at android.view.View.performClick(View.java:4240) at android.view.View.onKeyUp(View.java:7928) at android.widget.TextView.onKeyUp(TextView.java:5606) at android.view.KeyEvent.dispatch(KeyEvent.java:2647) at android.view.View.dispatchKeyEvent(View.java:7343) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchKeyEvent(PhoneWindow.java:1933) at com.android.internal.policy.impl.PhoneWindow.superDispatchKeyEvent(PhoneWindow.java:1408) at android.app.Activity.dispatchKeyEvent(Activity.java:2384) at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchKeyEvent(PhoneWindow.java:1860) at android.view.ViewRootImpl$ViewPostImeInputStage.processKeyEvent(ViewRootImpl.java:3791) at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:3774) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3483) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406) at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:3540) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3516) at android.view.ViewRootImpl$ImeInputStage.onFinishedInputEvent(ViewRootImpl.java:3666) at android.view.inputmethod.InputMethodManager$PendingEvent.run(InputMethodManager.java:1982) at android.view.inputmethod.InputMethodManager.invokeFinishedInputEventCallback(InputMethodManager.java:1698) at android.view.inputmethod.InputMethodManager.finishedInputEvent(InputMethodManager.java:1689) at android.view.inputmethod.InputMethodManager$ImeInputEventSender.onInputEventFinished(InputMethodManager.java:1959) at android.view.InputEventSender.dispatchInputEventFinished(InputEventSender.java:141) at android.os.MessageQueue.nativePollOnce(Native Method) at android.os.MessageQueue.next(MessageQueue.java:132) at android.os.Looper.loop(Looper.java:124) at android.app.ActivityThread.main(ActivityThread.java:5103) at java.lang.reflect.Method.invokeNative(Native Method) at 
java.lang.reflect.Method.invoke(Method.java:525) at com.android.internal.os.ZygoteInit$MethodAndArgsCal Please help me to solve this. Thanks.
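The "Caused by" section points at Register.java:108, which in the posted code is the first use of the parser's return value (json.toString()); makeHttpRequest() returns null whenever the HTTP call fails (the "lock == null" NullPointerException higher in the log shows the input stream was never populated). A minimal sketch of the missing guard follows – the class and parameter names here are mine, not from the question:

    import org.json.JSONException;
    import org.json.JSONObject;

    public final class SafeJson {
        // Returns the "success" flag, or -1 when the HTTP layer handed back null -
        // exactly the situation the stack trace above shows at Register.java:108.
        public static int successFlag(JSONObject json, String successTag) throws JSONException {
            if (json == null) {
                return -1; // request failed; nothing to parse, so don't dereference it
            }
            return json.getInt(successTag);
        }
    }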

    Read the article

  • jenkins-maven-android throwing the error "android-sdk-linux/platforms is not a directory" when running

    - by Sam
    I started setting up jenkins-maven-android and I'm facing an issue when running the Jenkins job. My Machine Details: $ uname -a Linux development2 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux Steps to install the Android SDK in Ubuntu: https://help.ubuntu.com/community/AndroidSDK Since I'm working in a headless env (SSH to the client machine), I used the following command to install the platform tools: android update sdk --no-ui I downloaded Apache Maven from http://maven.apache.org/download.html and installed it. mvn -version output: root@development2:/opt/android-sdk-linux/tools# mvn -version Apache Maven 3.0.4 (r1232337; 2012-01-17 08:44:56+0000) Maven home: /opt/apache-maven-3.0.4 Java version: 1.6.0_24, vendor: Sun Microsystems Inc. Java home: /usr/lib/jvm/java-6-openjdk/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.0.0-12-virtual", arch: "amd64", family: "unix" root@development2:/opt/android-sdk-linux/tools# I ran the following two commands as mentioned in the link below: sudo apt-get update sudo apt-get install ia32-libs Problems with Eclipse and Android SDK: http://developer.android.com/sdk/installing/index.html As the error suggests, I gave the path to the Android SDK in the Jenkins build config, but I'm still getting the error: clean install -Dandroid.sdk.path=/opt/android-sdk-linux Can someone help me resolve this? Thanks. The error I'm getting: Waiting for Jenkins to finish collecting data mavenExecutionResult exceptions not empty message : Failed to execute goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources (default-generate-sources) on project base-template: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. cause : Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. Stack trace : org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources (default-generate-sources) on project base-template: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. 
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:225) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59) at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156) at org.jvnet.hudson.maven3.launcher.Maven3Launcher.main(Maven3Launcher.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239) at org.jvnet.hudson.maven3.agent.Maven3Main.launch(Maven3Main.java:158) at hudson.maven.Maven3Builder.call(Maven3Builder.java:98) at hudson.maven.Maven3Builder.call(Maven3Builder.java:64) at hudson.remoting.UserRequest.perform(UserRequest.java:118) at hudson.remoting.UserRequest.perform(UserRequest.java:48) at hudson.remoting.Request$2.run(Request.java:326) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:679) Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:110) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209) ... 27 more Caused by: com.jayway.maven.plugins.android.InvalidSdkException: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. 
at com.jayway.maven.plugins.android.AndroidSdk.assertPathIsDirectory(AndroidSdk.java:125) at com.jayway.maven.plugins.android.AndroidSdk.getPlatformDirectories(AndroidSdk.java:285) at com.jayway.maven.plugins.android.AndroidSdk.findAvailablePlatforms(AndroidSdk.java:260) at com.jayway.maven.plugins.android.AndroidSdk.<init>(AndroidSdk.java:80) at com.jayway.maven.plugins.android.AbstractAndroidMojo.getAndroidSdk(AbstractAndroidMojo.java:844) at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.generateR(GenerateSourcesMojo.java:329) at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.execute(GenerateSourcesMojo.java:102) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101) ... 28 more channel stopped Finished: FAILURE ANDROID_HOME echo: root@development2:~# echo $ANDROID_HOME /opt/android-sdk-linux

    Read the article

  • Using CMS for App Configuration - Part 1, Deploying Umbraco

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/06/04/using-cms-for-app-configurationndashpart-1-deploy-umbraco.aspx Since my last post on using CMS for semi-static API content, How about a new platform for your next API… a CMS?, I’ve been using the idea for centralized app configuration, and this post is the first in a series that will walk through how to do that, step-by-step. The approach gives you a platform-independent, easily configurable way to specify your application configuration for different environments, with a built-in approval workflow, change auditing and the ability to easily roll back to previous settings. It’s like Azure Web and Worker Roles where you can specify settings that change at runtime, but it's not specific to Azure - you can use it for any app that needs changeable config, provided it can access the Internet. The series breaks down into four posts: Deploying Umbraco – the CMS that will store your configurable settings and the current values; Publishing your config – create a document type that encapsulates your settings and a template to expose them as JSON; Consuming your config – in .NET, a simple client that uses dynamic objects to access settings; Config lifecycle management – how to publish, audit, and roll back settings. Let’s get started. Deploying Umbraco There’s an Umbraco package on Azure Websites, so deploying your own instance is easy – but there are a couple of things to watch out for, so this step-by-step will put you in a good place. Create From Gallery The easiest way to get started is with an Azure subscription: navigate to add a new Website and then Create From Gallery. Under CMS, you’ll see an Umbraco package (currently at version 7.1.3): Configure Your App For high availability and scale, you’ll want your CMS on separate kit from anything else you have in Azure, so in the configuration of Umbraco I’d create a new SQL Azure database – which Umbraco will use to store all its content: You can use the free 20 MB database option if you don’t have demanding NFRs, or if you’re just experimenting. You’ll need to specify a password for a SQL Server account which the Umbraco service will use, and changing from the default username umbracouser is probably wise. Specify Database Settings You can create a new database on an existing server if you have one, or create new. If you create a new server *do not* use the same username for the database server login as you used for the Umbraco account. If you do, the deployment will fail later. Think of this as the SQL Admin account that you can use for managing the db; the previous account was the service account Umbraco uses to connect. Make Tea If you have a fast kettle. It takes about two minutes for Azure to create and provision the website and the database. Install Umbraco So far we’ve deployed an empty instance of Umbraco using the Azure package, and now we need to browse to the site and complete installation. My Website was called my-app-config, so to complete installation I browse to http://my-app-config.azurewebsites.net:   Enter the credentials you want to use to login – this account will have full admin rights to the Umbraco instance. Note that between deploying your new Umbraco instance and completing installation in this step, anyone can browse to your website and complete the installation themselves with their own credentials, if they know the URL. Remote possibility, but it’s there. From this page *do not* click the big green Install button. 
If you do, Umbraco will configure itself with a local SQL Server CE database (.sdf file on the Web server), and ignore the SQL Azure database you’ve carefully provisioned and may be paying for. Instead, click on the Customize link and: Configure Your Database You need to enter your SQL Azure database details here, so you’ll have to get the server name from the Azure Management Console. You don’t need to explicitly grant access to your Umbraco website for the database though. Click Continue and you’ll be offered a “starter” website to install: If you don’t know Umbraco at all (but you are familiar with ASP.NET MVC) then a starter website is worthwhile to see how it all hangs together. But after a while you’ll have a bunch of artifacts in your CMS that you don’t want and you’ll have to work out which you can safely delete. So I’d click “No thanks, I do not want to install a starter website” and give yourself a clean Umbraco install. When it completes, the installation will log you in to the welcome screen for managing Umbraco – which you can access from http://my-app-config.azurewebsites.net/umbraco: That’s It. Easy. Umbraco is installed, using a dedicated SQL Azure instance that you can separately scale, sync and back up, and ready for your content. In the next post, we’ll define what our app config looks like, and publish some settings for the dev environment.

    Read the article

  • FAT Volume and CE

    - by Kate Moss' Open Space
    Whenever we format a disk volume, it is a good idea to give it a label so it will be easier to categorize. To label a volume, we can use the LABEL command or the UI, depending on your preference. Windows CE does provide a FAT driver that supports various formats (FAT12, FAT16, FAT32, ExFAT and TFAT – transaction-safe FAT) and many features to let you scan and even defrag the volume – but not labeling. Any time you format a volume in CE and then mount it on a PC, the label is always empty! Of course, you can always label the volume on the PC, even if it was formatted in CE. So it looks like CE does not care about the volume label at all, neither reporting the label to the OS nor changing the label in the FAT. So how can we set the volume label in CE? To answer this question, we need to know how FAT stores the volume label. Here are some online resources that are handy for parsing FAT: http://en.wikipedia.org/wiki/File_Allocation_Table http://www.pjrc.com/tech/8051/ide/fat32.html http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx You can refer to PUBLIC\COMMON\OAK\DRIVERS\FSD\FATUTIL\MAIN\bootsec.h and dosbpb.h or the above links for the fields we discuss here. The first sector of a FAT volume (it is not necessarily the first sector of the disk) is the FAT boot sector and BPB (BIOS Parameter Block). At offset 43 (bsVolumeLabel on FAT12/16) or offset 71 (bgbsVolumeLabel on FAT32), the boot sector stores the volume label – but note the spec also indicates that "FAT file system drivers should make sure that they update this field when the volume label file in the root directory has its name changed or created.". So we can't simply update the bgbsVolumeLabel; we also need to create a volume label file in the root directory. The volume label file is not a real file, but just a file entry in the root directory with zero file length and a very special file attribute, ATTR_VOLUME_ID (defined in public\common\oak\drivers\fsd\fatutil\MAIN\fatutilp.h). Locating and accessing the boot sector is quite straightforward: as long as we know the starting sector of the FAT volume, that's it. But where is the root directory? The layout of a typical FAT volume is like this: boot sector (Volume ID in the figure), followed by Reserved Sectors (1 on FAT12/16 and 32 on FAT32), then the FAT chain table(s) (can be 1 or 2), after that the root directory (FAT12/16 only; not shown in the figure), then the beginning of the files and directories. In FAT12/16, the root directory is placed right after the FAT, so it is not hard to calculate its offset in the volume. But in FAT32, this rule is no longer true: the first cluster of the root directory is determined by BGBPB_RootDirStrtClus (at offset 44 in the boot sector). This field is usually 0x00000002 (that is how CE initializes the root directory after formatting a volume – note we should never assume it is always true), which means the first data cluster; but unlike FAT12/16, where the root directory is contiguous, the FAT32 root directory is just like a regular file and can be fragmented. So we need to access the root directory (on FAT32) hopping from one cluster to the next by traversing the FAT table. Let's trace the code now. Although the source of the FAT driver is not available in the CE Shared Source program, the formatter, Fatutil.dll, is available in public\common\oak\drivers\fsd\fatutil\MAIN\formatdisk.cpp. Be aware the public code only provides the formatter for FAT12/16/32; for ExFAT it is still not available. FormatVolumeInternal is the main worker function. With the knowledge here, you should be able to trace the code easily. 
But I would like to discuss the following code pieces:     dwReservedSectors = (fo.dwFatVersion == 32) ? 32 : 1;     dwRootEntries = (fo.dwFatVersion == 32) ? 0 : fo.dwRootEntries; Note that dwReservedSectors is 32 in FAT32 and 1 in FAT12/16. Root Entries is another difference mentioned in the previous paragraph: 0 for FAT32 (dynamically allocated) and a fixed size on FAT12/16 (usually 512, defined in DEFAULT_ROOT_ENTRIES in public\common\sdk\inc\fatutil.h). And then here:   memset(pBootSec->bsVolumeLabel, 0x20, sizeof(pBootSec->bsVolumeLabel)); This sets the volume label to an empty (space-padded) string. Now let's carry on to the next section – writing the root directory.     if (fo.dwFatVersion == 32) {         if (!(fo.dwFlags & FATUTIL_FORMAT_TFAT)) {             dwRootSectors = dwSectorsPerCluster;         }         else {             DIRENTRY    dirEntry;             DWORD       offset;             int               iVolumeNo;             memset(pbBlock, 0, pdi->di_bytes_per_sect);             memset(&dirEntry, 0, sizeof(DIRENTRY));                         dirEntry.de_attr = ATTR_VOLUME_ID;             // the first one is volume label             memcpy(dirEntry.de_name, "TFAT       ", sizeof (dirEntry.de_name));             memcpy(pbBlock, &dirEntry, sizeof(dirEntry));              ...             // Skip the next step of zeroing out clusters             dwCurrentSec += dwSectorsPerCluster;             dwRootSectors = 0;         }     }     // Each new root directory sector needs to be zeroed.     memset(pbBlock, 0, cbSizeBlk);     iRootSec=0;     while ( iRootSec < dwRootSectors) { Basically, the code zeroes out each entry in the root directory depending on dwRootSectors. In FAT12/16, dwRootSectors is calculated as the sectors we need for the root entries (512 in most cases), and in FAT32 it just zeroes out one cluster. Please note that if it is a TFAT volume, it initializes the root directory with special volume label entries for a special purpose. Despite its unusual initialization process, the TFAT path does provide an example of how to create a volume label entry. With some minor modification, we can assign the volume label in the FAT formatter – remembering also to sync the volume label with bsVolumeLabel or bgbsVolumeLabel in the boot sector.
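To make the boot-sector side of this concrete, here is a minimal Java sketch that pulls the 11-byte label field straight out of a volume image, using the offsets discussed above (43 for bsVolumeLabel on FAT12/16, 71 for bgbsVolumeLabel on FAT32). The image path is a placeholder, and a complete implementation would also create or update the ATTR_VOLUME_ID entry in the root directory, which this sketch ignores.

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;

    public class FatLabel {
        // Reads the space-padded, 11-byte volume label from the boot sector.
        public static String readLabel(String imagePath, boolean fat32) throws IOException {
            try (RandomAccessFile vol = new RandomAccessFile(imagePath, "r")) {
                vol.seek(fat32 ? 71 : 43);   // label offset within the boot sector
                byte[] label = new byte[11]; // 11 bytes, space-padded, no terminator
                vol.readFully(label);
                return new String(label, StandardCharsets.US_ASCII).trim();
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println(readLabel("fat32.img", true)); // path is an assumption
        }
    }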

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: A table in Oracle SQL Developer Data Modeler. What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS – search, filter, then Report. With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: XLS or XLSX – either format is just fine. I want to decide how the column data is exported to Excel, though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and set up a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work. Now let the Excel junkies do their stuff. Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: Be kind, please rew…use comments. Save the file, email it back to your modeler. Update the model from Excel – that’s right, it’s a right mouse click from your model in the tree. If everything goes right, you’ll see a nice confirmation message: It’s alive! 
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now. Voila! Why are we doing this again? The goal is to reduce the number of round-trips between the modeler and the business process owner. The business owner is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • The Krewe App Post-Mortem

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/05/23/the-krewe-app-post-mortem.aspx Now that TechEd has come and gone, I thought I would use this opportunity to do a little post-mortem on The Krewe app. It is one thing to test the app at home. It is a completely different animal to see how it responds in the environment TechEd creates. At a future time, I will list all the things that I would like to change with the app. At this point, I will find some good way to get community feedback. I want to break all this down screen by screen. We'll start with the screen I got right. The first of these is the events calendar. This is the one screen that, to you guys, just worked. However, there was an issue here. When I wrote v1 last year, I was lazy and placed everything in CST. This caused problems with the achievements, which I will explain later. Furthermore, the event locations were not check-in locations. This created another problem with the achievements. Next, we get to the Twitter page. For what this page does, it works great. For those that don't know, I have an Azure Worker Role that polls Twitter pretty close to the rate limit. I cache these results in my database, and serve them upon request. This gives me great control over the content. I just have to remember to flush past tweets after a period, to curb database growth. The next screen is the check-in screen. This screen has been the bane of my existence since I first created the thing. Last year, I used a background task to check people out of locations after they traveled. This year, I removed the background task in favor of a Foursquare model. You are checked out after 3 hours or when you check in to some other location. This seemed to work well, until those pesky achievements came into the mix. Again, more on this later. Next, I want to address the Connect and Connections screens together. I wanted to use some of the capabilities of the phone, and NFC seemed a natural choice. From this, I came up with the gamification aspects of the app. Since we are, fundamentally, a networking organization, I wanted to encourage people to actually network. Users could make and share a profile, similar to a virtual business card. I just had to figure out how to get people to use the feature. Why not just give someone a business card? Thus, the achievements were born. This was such a good idea. It would have been a great idea, if I had come up with it about two months earlier... When I came up with these ideas, I had about 2 weeks to implement them. Version 1 of the app was, basically, a pure consumption app. We provided data and centralized it. With version 2, the app became a much more interactive experience. The API was not ready for this change in such a short period of time. Most of this became apparent when I started implementing the achievements. The achievements based on count and specific person went fairly easily. The problem came with tying them to locations and events. This took some true SQL kung fu. This also showed me the rookie mistake of putting CST, not UTC, in the database. Once I got all of that cleaned up, I had to find a way to get the achievement system to talk to the phone. I knew I needed to be able to dynamically add achievements. I wouldn't know the precise location of some things until I got to Houston. I wanted the server to approve the achievements. This, unfortunately, required a decent data connection. 
Some achievements required GPS levels of location accuracy in areas where only network triangulation was available. All of this became a huge nightmare. My flagship feature was based on some silly assumptions. Still, I managed to get 31 people to earn the first achievement (Make 1 Connection), and quite a few of those managed to get to the higher levels. Soon, I will post a list of the features and changes that need to happen to the API. This includes things like proper objects for communication, geo-fencing, and caching. However, that is for another day.
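The Twitter page's design – a worker polling just under the API rate limit, caching results in a database, and flushing old rows – looks roughly like the sketch below. The original is a .NET Azure Worker Role, so this Java version, and every interface and name in it, is purely illustrative.

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class TweetPoller {
        interface TwitterClient { List<String> search(String query); } // assumed API
        interface TweetCache {                                         // assumed API
            void save(List<String> tweets);
            void deleteOlderThanDays(int days);
        }

        static void start(TwitterClient twitter, TweetCache cache) {
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            // Example budget: 180 requests per 15-minute window allows one poll
            // every ~5 seconds; polling every 6 stays close to, but under, the limit.
            timer.scheduleAtFixedRate(() -> {
                cache.save(twitter.search("#TheKrewe")); // serve these on request
                cache.deleteOlderThanDays(7);            // cap database growth
            }, 0, 6, TimeUnit.SECONDS);
        }
    }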

    Read the article

  • The Illusion of Competence

    - by tony_lombardo
    Working as a contractor opened my eyes to the developer food chain. Even though I had similar experiences earlier in my career, the challenges seemed much more vivid this time through. I thought I’d share a couple of experiences with you, and the lessons that can be taken from them. Lesson 1: Beware of the “funnel” guy. The funnel guy is the one who wants you to funnel all thoughts, ideas and code changes through him. He may say it’s because he wants to avoid conflicts in source control, but the real reason is likely that he wants to hide your contributions. Here’s an example. When I finally got access to the code on one of my projects, I was told by the developer that I had to funnel all of my changes through him. There were 4 of us coding on the project, but only 2 of us working on the UI. The other 2 were working on a separate application, but part of the overall project. So I figured: I’ll check it into SVN, he reviews and accepts, then merges in. Not even close. I didn’t even have check-in rights to SVN; I had to email my changes to the developer so he could check those changes in. Lesson 2: If you point out flaws in code to someone supposedly ‘higher’ than you in the developer chain, they’re going to get defensive. My first task on this project was to review the code, familiarize myself with it. So of course, that’s what I did. And in familiarizing myself with it, I saw so many bad practices and code smells that I immediately started coming up with solutions to fix it. Of course, when I reviewed these changes with the developer (the guy who originally wrote the code), he smiled and nodded and said, we can’t make those changes now, it’s too destabilizing. I recommended we create a new branch and start working on refactoring, but branching was a new concept for this guy and he was worried we would somehow break SVN. How about some concrete examples? I started out by recommending we remove the NUnit dependency and tests from the application project, and create a separate unit testing project. This was met with a little bit of resistance because - “How do I access the private methods?” As it turned out there weren’t really any private methods that weren’t exposed by public methods, so I quickly calmed this fear. Win 1 Loss 0 Next, I recommended that all of the File IO access be wrapped in Using clauses, or at least properly wrapped in try/catch/finally. This recommendation was accepted... but never implemented. Win 2 Loss 1 Next recommendation was to refactor the command pattern implementation. The command pattern was implemented, but it wasn’t really necessary for the application. Moreover, the fact that we had 100 different command classes, each with its own specific command parameters class, made maintenance a huge hassle. The same code repeated over and over and over. This recommendation was declined; the code was too fragile and this change would destabilize it. I couldn’t disagree, though it was the commands themselves in many cases that were fragile. Win 2 Loss 2 Next recommendation was to aid performance (and responsiveness) of the application by using asynchronous service calls. This one was accepted. Win 2 Loss 3 If you’re paying any attention, you’re wondering why the async service call was scored as a loss... Let me explain. The service call was made using the async pattern. Followed by a Thread.Sleep. <facepalm> Now it’s easy to be harsh on this kind of code, especially if you’re an experienced developer. But I understood how most of this happened.  
One junior guy, working as hard as he can to build his first real world application, with little or no guidance from anyone else. He had his pattern book and theory of programming to help him, but no real world experience. He didn’t know how difficult it would be to trace the crashes to the coding issues above, but he will one day. The part that amazed me was the management position that “this guy should be a team lead, because he’s worked so hard”. I’m all for rewarding hard work, but when you reward someone by promoting them past the point of their competence, you’re setting yourself and them up for failure. And that’s lesson 3. Just because you’ve got a hard worker doesn’t mean he should be leading a development project. If you’re a junior guy busting your ass, keep at it. I encourage you to try new things, but most importantly to learn from your mistakes. And correct your mistakes. And if someone else looks at your code and shows you a laundry list of things that should be done differently, don’t take it personally – they’re really trying to help you. And if you’re a senior guy, working with a junior guy, it’s your duty to point out the flaws in the code. Even if it does make you the bad guy. And while I’ve used “guy” above, I mean both men and women. And in some cases mutant dinosaurs. 
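The File IO recommendation above amounts to deterministic cleanup – in C# that's a using block. Here is the same idea sketched in Java's try-with-resources, purely for illustration (the project in the story was .NET, so none of this is its actual code):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class SafeFileRead {
        // The reader is closed automatically on success, failure, or early return -
        // the same guarantee a C# using block gives, with no finally boilerplate.
        static String firstLine(Path file) throws IOException {
            try (BufferedReader reader = Files.newBufferedReader(file)) {
                return reader.readLine();
            }
        }
    }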

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    - by Elton Stoneman
    This is the second in the IPASBR series; see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service. Part 2 is nice and easy. In Part 1 we exposed our service over the Azure Service Bus Relay using the netTcpRelayBinding and verified we could set up our network to listen for relayed messages. Assuming we want to consume that service in .NET from an environment which is fairly unrestricted for us, but quite restricted for attackers, we can use netTcpRelay and shared secret authentication. Pattern applicability This is a good fit for scenarios where: the consumer can run .NET in full trust; the environment does not restrict use of external DLLs; the runtime environment is secure enough to keep shared secrets; the service does not need to know who is consuming it; the service does not need to know who the end-user is. So for example, the consumer is an ASP.NET website sitting in a cloud VM or Azure worker role, where we can keep the shared secret in web.config and we don't need to flow any identity through to the on-premise service. The service doesn't care who the consumer or end-user is - say it's a reference data service that provides a list of vehicle manufacturers. Provided you can authenticate with ACS and have access to the Service Bus endpoint, you can use the service, and it doesn't care who you are. In this post, we’ll consume the service from Part 1 in ASP.NET using netTcpRelay. The code for Part 2 (+ Part 1) is on GitHub here: IPASBR Part 2 Authenticating and authorizing with ACS In this scenario the consumer is a server in a controlled environment, so we can use a shared secret to authenticate with ACS, assuming that there is governance around the environment and the codebase which will prevent the identity being compromised. From the provider's side, we will create a dedicated service identity for this consumer, so we can lock down their permissions. The provider controls the identity, so the consumer's rights can be revoked. We'll add a new service identity for the namespace in ACS, just as we did for the serviceProvider identity in Part 1. I've named the identity fullTrustConsumer. We then need to add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus (see Part 1 for a walkthrough creating Service Identities): Issuer: Access Control Service; Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier; Input claim value: fullTrustConsumer; Output claim type: net.windows.servicebus.action; Output claim value: Send. This sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace. Adding a Service Reference The Part 2 sample client code is ready to go, but if you want to replicate the steps, you’re going to add a WSDL reference, add a reference to Microsoft.ServiceBus and sort out the ServiceModel config. In Part 1 we exposed metadata for our service, so we can browse to the WSDL locally at: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc?wsdl If you add a Service Reference to that in a new project you'll get a confused config section with a customBinding, and a set of unrecognized policy assertions in the namespace http://schemas.microsoft.com/netservices/2009/05/servicebus/connect. If you NuGet the ASB package (“windowsazure.servicebus”) first and add the service reference - you'll get the same messy config. 
    Either way, the WSDL should have downloaded and you should have the proxy code generated. You can delete the customBinding entries and copy your config from the service's web.config (this is already done in the sample project in Sixeyed.Ipasbr.NetTcpClient), specifying details for the client:

        <client>
          <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                    behaviorConfiguration="SharedSecret"
                    binding="netTcpRelayBinding"
                    contract="FormatService.IFormatService" />
        </client>
        <behaviors>
          <endpointBehaviors>
            <behavior name="SharedSecret">
              <transportClientEndpointBehavior credentialType="SharedSecret">
                <clientCredentials>
                  <sharedSecret issuerName="fullTrustConsumer"
                                issuerSecret="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>
                </clientCredentials>
              </transportClientEndpointBehavior>
            </behavior>
          </endpointBehaviors>
        </behaviors>

    The proxy is straight WCF territory, and the same client can run against Azure Service Bus through any relay binding, or directly against the local network service using any WCF binding - the contract is exactly the same. The code is simple, standard WCF stuff:

        using (var client = new FormatService.FormatServiceClient())
        {
            outputString = client.ReverseString(inputString);
        }

    Running the sample
    First, update Solution Items\AzureConnectionDetails.xml with your Service Bus namespace and your service identity credentials for the netTcpClient and the provider:

        <!-- ACS credentials for the full trust consumer (Part 2): -->
        <netTcpClient identityName="fullTrustConsumer"
                      symmetricKey="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>

    Then rebuild the solution and verify the unit tests pass. If they're green, your service is listening through Azure. Check out the client by navigating to http://localhost:53835/Sixeyed.Ipasbr.NetTcpClient. Enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure.
    Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus itself. None of the authentication details make it through to your service, so your service is not aware of who the consumer is (MSDN calls this "anonymous authentication").
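    If you prefer code to config, the same shared secret credentials can be attached to the proxy programmatically. Here is a minimal sketch, not part of the sample project, assuming a Microsoft.ServiceBus assembly recent enough to expose the TokenProvider property (earlier SDK versions use the CredentialType/Credentials properties instead, mirroring the XML above), and assuming the endpoint config does not already attach the SharedSecret behavior:

        using System;
        using Microsoft.ServiceBus;

        class ProgrammaticCredentials
        {
            static void Main()
            {
                // Shared secret for the fullTrustConsumer identity, as in the sample config.
                var behavior = new TransportClientEndpointBehavior
                {
                    TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                        "fullTrustConsumer",
                        "E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo=")
                };

                using (var client = new FormatService.FormatServiceClient())
                {
                    // Attach the credentials to the endpoint the generated proxy will use,
                    // before the channel is opened by the first call.
                    client.Endpoint.Behaviors.Add(behavior);
                    Console.WriteLine(client.ReverseString("hello"));
                }
            }
        }

    The address, binding and contract still come from the ServiceModel config; only the credentials move into code, which can be handy if the secret lives in a secure store rather than web.config.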
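    If the tests stay red, it can help to take WCF out of the picture and check the ACS side on its own. Under the covers, the shared secret behavior swaps the issuer name and key for a token over the WRAP protocol, which you can reproduce by hand. A rough sketch, not from the sample project, assuming the sample's sixeyed-ipasbr namespace and the 2012-era ACS conventions (the -sb suffix on the ACS host is the Service Bus companion-namespace convention):

        using System;
        using System.Collections.Specialized;
        using System.Net;
        using System.Text;

        class AcsSanityCheck
        {
            static void Main()
            {
                // ACS endpoint paired with the Service Bus namespace (note the -sb suffix).
                var acsUri = "https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/";

                var form = new NameValueCollection
                {
                    { "wrap_name", "fullTrustConsumer" },
                    { "wrap_password", "E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo=" },
                    { "wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/" }
                };

                using (var client = new WebClient())
                {
                    // A successful response contains wrap_access_token=...;
                    // a 401 means the identity name or key is wrong in ACS.
                    var response = client.UploadValues(acsUri, form);
                    Console.WriteLine(Encoding.UTF8.GetString(response));
                }
            }
        }

    If this returns a token but the client still fails, the problem is on the Service Bus side (the Send claim rule or the endpoint address) rather than the credentials.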

    Read the article

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer's relationship with a vendor is a journey which starts well before the customer makes a purchase and lasts long after it. The journey may start with the customer researching a product, lead to an eventual purchase, and continue with support or service needs for the product. Throughout this journey, customers tend to use multiple channels to interact with a company. They also expect a consistent experience, no matter which interaction channel they choose. Customers do not like to repeat information they have already provided, and they expect companies to remember their preferences and offer them relevant products and services. If a company fails to meet this expectation, customers will not only abandon the purchase and go to a competitor but may also influence others' purchase decisions. Gone are the days when word of mouth was the only medium and a customer could influence "six" others. This is the age of social media, and a customer's good or bad experience, especially a bad one, gets highly amplified and may influence hundreds of others. Challenges facing B2C companies today include:
      - Delivering a consistent experience: This is challenging because of fragmented data, disjointed systems and siloed multichannel interactions. Customers tend to get different service quality depending on whether they use the web, the phone or the store. They get different responses from different service agents, or inconsistent answers depending on whether they call the sales or the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers.
      - Increasing revenue: To stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact the bottom line. If companies cannot identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant revenue may be lost.
      - Ensuring compliance: Companies must comply with ever-changing regulations, whether about Know Your Customer (KYC), export/import rules, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments at any given time.
    With the advent of social networks and mobile technology, companies need to focus not only on process efficiency but also on customer engagement. Improving engagement means delivering the experience the customer expects and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and to purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.
    To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized into departments like Marketing, Sales and Service. You may hold prospect data in SFA, order information in ERP, and customer issues in CRM. However, the experience delivered to the customer must not be constrained by your system legacy. BPM helps in designing the right experience for the right customer, and integrates all the underlying channels, systems and applications to make sure the right information is delivered to the right knowledge worker, or to the customer, every single time.
    Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service) and channels (email, phone, web, social) is the key, and that's what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides capabilities for analysis and decision management. By using data from historical transactions, social media and other systems, users can determine customer preferences, customer value, and churn propensity. This information is then used, in context, when making a decision at a process step. A real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts or next-best-action steps for a particular customer.
    Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM's complex event processing capabilities help companies take proactive action before issues escalate. A BPM system can be designed to listen for certain event patterns, deduce customer situations from them (credit card stolen, baggage lost, change of address), and triage them before the situation goes out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions.
    Last but not least, one of BPM's key values is driving continuous improvement. Learning from customers' past experiences, interactions and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and the customer experience. You may take these insights as input to design better, more efficient and customer-friendly sales, contact center or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as part of your strategy to design, orchestrate and improve your customer-facing processes.

    Read the article

  • Hiring New IT Employees versus Promoting Internally for IT Positions

    Recently I was asked my opinion on hiring new IT employees versus promoting internally for IT positions. After thinking a little more about this staffing question, I believe the answer truly depends on the situation; however, in most cases I would side with promoting internally. The key factors in this decision should be a company's or department's current values, culture, attitude, and existing priorities. For example, if a company values retaining all of its hard-earned business knowledge, it will tend to promote existing employees over hiring new ones. The trade-off is that the company will have to pay to train an existing employee on a new technology, and the learning curve for some technologies can be very steep. Conversely, if a company values new technologies and technical proficiency over business knowledge, it will tend to hire new employees, because they may already have experience with a technology the company is planning to use. In this scenario, the company takes on the additional overhead of letting a new employee learn how the business operates before they are fully effective.
    To illustrate these points, consider a contractor who builds in-ground pools. He can hire diggers who are very strong but use small shovels, or diggers who are physically weak but use large shovels. Which should he use to dig the hole for a new pool? The candidates map neatly onto promoting internally versus hiring from outside. The strong digger with the small shovel is the existing employee: very strong in understanding how the business operates and why, but potentially weaker than an outsider on specific technologies, needing time to build technical prowess for the new position, much like the strong digger upgrading to a larger shovel to move more dirt at once. The weak digger with the large shovel is the new hire, who may already have the tool but must build strength to use it properly and efficiently, much like a new employee learning how the business operates before they can be fully functional and integrated in the company or department.
    Another key factor pertains to existing employees: their passion for their work, their ability to accept new responsibility when given, and their willingness to take on responsibilities when they see a need in the business. As much as possible should be considered in this decision, down to the mood of the team, the quality of the existing staff, the learning curve for both the technology and the business, and the potential side effects on the existing staff. In addition, there are many more considerations based on the current team's, department's or company's culture and mood. Several factors need to be considered when promoting an individual or hiring new blood for a team; both options can provide great benefits, and both can create controversy within a group.
    Personally, I think staffing, especially in the IT world, is like building a large-scale system: all of the components and modules must fit together and perform as one cohesive system, just as a team must come together, using their individually acquired skills, to work as one team. If a module is out of place or nonexistent, the rest of the team will suffer until all of its issues are addressed and resolved.
    Benefits of Promoting Internally
      - Internal promotions give employees a reason to constantly upgrade their technology, business, and communication skills if they want to further their careers
      - Employees can control their own destiny based on personal desires
      - An existing employee already knows how the business operates
      - Companies can save money by promoting internally, because the initial overhead of teaching new hires how the company operates is very expensive
      - Newly promoted employees can assist in training their replacements while transitioning to their new role within the company
      - Existing employees already have a proven track record of fitting in with the business culture; this is always an unknown with new hires
    Benefits of a New Hire
      - New employees can energize and excite existing employees
      - New employees can bring new ideas and advancements in technology
      - New employees can offer a different perspective on existing issues based on their past experience
    As you can see, the decision to promote an existing employee versus hiring a new person should be based on several factors that ultimately place the business in the best possible situation for the immediate and long-term future. How would you handle this situation? Would you hire a new employee or promote from within?

    Read the article
