Search Results

Search found 28207 results on 1129 pages for 'tfs process template'.

  • VBUG Spring Conference, 28th and 29th March in Reading

    - by Eric Nelson
    I presented at VBUG last year and can confirm that they put on a really good event. This year I stood aside for my “replacement” Steve Plank to work his magic. Worth checking out…

    VBUG SPRING CONFERENCE, 28/29 March 2011, Wokefield Park, Mortimer, Reading RG7 3AH

    Day One (Mon 28 March):
    - Developing SharePoint 2010 with Visual Studio 2010 - Dave McMahon
    - Cache Out with Windows Server AppFabric - Phil Pursglove
    - Extending your Corporate Network in to the Windows Azure Data Centre with Windows Azure Connect - Steve Plank
    - Silverlight Development on Windows Phone 7 - Andy Wigley

    Day Two (Tues 29 March):
    - Self Service BI for your users, but what does that mean for you? - Andrew Fryer
    - Design Patterns - Compare and Contrast - Gary Short
    - Projecting your corporate identity to the cloud - Steve Plank
    - May the Silverlight 4 be with you - Richard Costall
    - The Step up to ALM - an Introduction to Visual Studio 2010 TFS for the Visual SourceSafe User - Richard Fennell

    For more information go to http://cms.vbug.net (it isn't free, but it is high quality).

    Read the article

  • Solo .NET Programmer moving to a team

    - by 219558af-62fa-411d-b24c-d08dab
    I've been a solo .NET programmer for a small startup for the last 8 years. I've put together some pretty decent software, and I always strived to better myself and conform to best practices, including source control (SVN/TFS). I worked very closely with a team of engineers of other disciplines, but when it came down to the software I was the only one programming. I love the craft of programming and love learning new things to sharpen my tools. In 2 weeks I will be starting a new job in a team of 20 .NET developers. My position will be mid-level, and I will be working under some programmers with incredibly impressive backgrounds. Again, the team aspect of development will be new to me, so I'm looking for some general "new guy" tips that will help me be as effective and easy to get along with as possible from the get-go. Anything goes, including high level tips, and small day-to-day things about communication. Thanks for any and all input!

    Read the article

  • How do you avoid working on the wrong branch?

    - by henginy
    Being careful is usually enough to prevent problems, but sometimes I need to double-check the branch I'm working on (e.g. "hmm... I'm in the dev branch, right?") by checking the source control path of a random file. In looking for an easier way, I thought of naming the solution files accordingly (e.g. MySolution_Dev.sln), but with different file names in each branch, I can't merge the solution files. It's not that big of a deal, but are there any methods or "small tricks" you use to quickly ensure you're in the correct branch? I'm using Visual Studio 2010 with TFS 2008.
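    One lightweight check worth noting (a sketch, not an established recipe): tf.exe, which ships with Team Explorer, can print the server path that the current folder is mapped to, and the server path reveals the branch at a glance. The folder layout shown in the comment is hypothetical.

        rem Run from the solution directory: shows which server path (and
        rem therefore which branch) this local folder is mapped to
        tf workfold .
        rem Hypothetical output: $/MyProject/Dev is mapped to C:\src\MyProject

    Wired into a Visual Studio external tool or a pre-build step, this makes the current branch visible without opening Source Control Explorer.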

    Read the article

  • New Bing Maps

    - by MikeParks
    Normally I don't stray too far from Programming and TFS on my blog posts, but I'm just really impressed with how much Silverlight has improved Bing Maps. I used to be big on MapQuest, then hopped over to Google, but now the new Bing Maps have everything I need. The two coolest features are right on the main page. All you have to do is go to http://www.bing.com/maps/explore/, enter your city and hit enter in the search box, then look in the lower left corner under the EXPLORE section. Check out the "What's nearby" and "Restaurants" links. The best part is, if you're interested in doing any Silverlight programming, they have a Bing Maps Silverlight Control Interactive SDK. I was thinking about coding something… but they've pretty much got it down :) Pretty impressive stuff.

    Read the article

  • php-common causing conflict

    - by Cornwell
    I'm trying to install php-pdo but it always fails because of php-common:

        bash-3.2# yum install php-pdo
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * epel: fedora-epel.mirror.lstn.net
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated
        --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo
        --> Finished Dependency Resolution
        php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base)
        Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.

    Installing php-common I get:

        bash-3.2# yum install php-common
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * epel: fedora-epel.mirror.lstn.net
        Setting up Install Process
        Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update.
        Nothing to do

    I searched here and on Google but couldn't find anything that worked.

    EDIT: Added new repositories:

        bash-3.2# yum install php-common
        Loaded plugins: fastestmirror
        Repository base is listed more than once in the configuration
        Repository addons is listed more than once in the configuration
        Repository extras is listed more than once in the configuration
        Repository centosplus is listed more than once in the configuration
        Repository contrib is listed more than once in the configuration
        Loading mirror speeds from cached hostfile
         * addons: mirror.raystedman.net
         * base: mirror.5ninesolutions.com
         * centosplus: mirror.anl.gov
         * contrib: yum.singlehop.com
         * epel: fedora-epel.mirror.lstn.net
         * extras: centos.unmeteredvps.net
         * update: mirror.team-cymru.org
        addons                   | 1.9 kB    00:00
        base                     | 1.1 kB    00:00
        centosplus               | 1.9 kB    00:00
        centosplus/primary_db    |  53 kB    00:01
        contrib                  | 1.9 kB    00:00
        contrib/primary_db       | 1.1 kB    00:00
        epel                     | 3.6 kB    00:00
        extras                   | 2.1 kB    00:00
        nginx                    | 2.5 kB    00:00
        update                   | 1.9 kB    00:00
        update/primary_db        |  84 kB    00:04
        updates                  | 1.9 kB    00:00
        Setting up Install Process
        Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update.
        Nothing to do

    And then...

        bash-3.2# yum install php-pdo
        Loaded plugins: fastestmirror
        Repository base is listed more than once in the configuration
        Repository addons is listed more than once in the configuration
        Repository extras is listed more than once in the configuration
        Repository centosplus is listed more than once in the configuration
        Repository contrib is listed more than once in the configuration
        Loading mirror speeds from cached hostfile
         * addons: mirror.raystedman.net
         * base: centos.mirror.lstn.net
         * centosplus: mirror.anl.gov
         * contrib: yum.singlehop.com
         * epel: fedora-epel.mirror.lstn.net
         * extras: centos.unmeteredvps.net
         * update: mirrors.loosefoot.com
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated
        --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo
        --> Finished Dependency Resolution
        php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base)
        Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
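    A likely culprit here (an assumption, not confirmed by the output above) is that the installed php-common came from a different repository, release, or architecture than the php-pdo being resolved, so yum reports it as "already installed" yet it does not satisfy the exact 5.1.6-40.el5_9 dependency. A few standard commands help confirm that:

        # Show exactly which php-common build is installed (version, release, arch)
        rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' php-common
        # List every php-common version each enabled repository offers
        yum list php-common --showduplicates
        # Flush stale metadata in case a mirror is out of sync
        yum clean all

    If the installed release differs from 5.1.6-40.el5_9, aligning both packages on the same release, or cleaning up the duplicate repository definitions the warnings point at, is the usual fix.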

    Read the article

  • The betas are coming … the betas are coming

    - by Enrique Lima
    Yep! Another round of stuff to test out: the Visual Studio 2010 SP1-related betas. They are out and available. The warning that normally comes with installing beta software applies here too. Scott Hanselman does a fantastic job of describing what is new: gains, fixes and such. The download links:
    Visual Studio 2010 SP1 Beta: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=11ea69cb-cf12-4842-a3d7-b32a1e5642e2&displaylang=en
    .NET 4.0 SP1 Beta: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6e3b7759-3df2-4755-8208-44955eee4d4c&displaylang=en
    TFS 2010 SP1 Beta: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d4f5a430-919b-46ee-bab6-ba804402df21&displaylang=en

    Read the article

  • Not assigning Bugs to a specific user

    - by user2977817
    My question: Is there a benefit to NOT assigning a Bug to a particular developer, leaving it to the team as a whole? Our department has decided to be more Agile by not assigning Bugs/Defects to individuals. Using Team Foundation Server 2012, we'll place all Bugs in a development team's "Area" but leave the "Assigned To" field blank. The idea is that the team will create a Task work item which will be assigned to an individual, and the Task will link to the Bug. The team as a whole will therefore take responsibility for the Bug, not an individual, aligning to Scrum - apparently. I see the downside: the reporting tools built into TFS become less useful when you cannot sort by assigned vs. unassigned, let alone by the user to whom a Bug is assigned. Is there a benefit I'm not seeing, besides encouraging teamwork by putting the responsibility on the team as a whole instead of an individual?

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night.

    But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support overhead to basically "schedule" the ETLs such that they don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in sets of three machines, with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs we were able to process on one machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know of any tools (free or paid, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)

    Ultimately VMWare's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great.

    Thanks guys,
    Jeff
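    One partial mitigation worth noting alongside the question (a sketch under assumptions, not a tested recipe for this workload): when DTExec, the SQL Server engine, and SSAS share a box, capping the engine's memory leaves guaranteed headroom for the other two and often tames the out-of-memory failures described above. The 16 GB figure below is illustrative only.

        -- Cap the database engine so DTExec and SSAS keep headroom (value is illustrative)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 16384;
        RECONFIGURE;

    SSAS has an analogous knob (the Memory\TotalMemoryLimit server property), so the three processes can be boxed into non-overlapping budgets instead of fighting Windows for the same RAM.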

    Read the article

  • TELERIK DELIVERS NEW TEAM PRODUCTIVITY TOOLS TO MARKET

    Company's holistic TeamPulse solution addresses agile project management needs. Waltham, MA, April 13, 2010: Telerik, a leading provider of development tools and solutions for the Microsoft .NET platform, today announced the launch of TeamPulse, an integrated suite of agile scheduling and planning tools for Microsoft's team collaboration platform, Team Foundation Server (TFS). The first version of TeamPulse is available to users immediately in Beta, and is scheduled for commercial launch in July 2010. Developed...

    Read the article

  • Team Foundation Service moves to the cloud: Microsoft's application lifecycle management platform no longer targets only .NET or Windows

    Microsoft launches Team Foundation Service, the cloud version of its application lifecycle management tool. Nearly a year after unveiling the beta of Team Foundation Service (TFS), Microsoft has announced that the hosted version of Team Foundation Server, running on Windows Azure, has reached its final release. As a reminder, Team Foundation Server is a collaborative work and application lifecycle management (ALM) solution covering source control, builds, work item tracking, planning, and performance analysis. The hosted version of the tool includes agile project management tooling supporting Scrum and Capability Ma...

    Read the article

  • Multiple style sheets best practice

    - by user1145927
    I am currently working on a project which has one large style sheet for about 20 pages. The style sheet contains some styles which are specific to certain pages. I'd like to break the style sheet up so there is one style sheet for each page, plus one master style sheet that handles everything generic. The reason I want to do this is so people can work on multiple pages without having to worry about who has that large style sheet checked out (I'm using TFS). Is this good practice?
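    For reference, the split might look like this (a sketch; the file names are made up). Each page links the shared master sheet first, then its own sheet, so page-specific rules can override the generic ones via the cascade.

        <!-- head of products.html (hypothetical page) -->
        <link rel="stylesheet" href="css/master.css">   <!-- generic rules shared by all pages -->
        <link rel="stylesheet" href="css/products.css"> <!-- rules only this page needs -->

    Order matters: the page sheet comes last so that equal-specificity selectors in it win.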

    Read the article

  • How to Identify and Backup the Latest SQL Server Database in a Series

    I have to support a third party application that periodically creates a new database on the fly. This obviously causes issues with our backup mechanisms. The databases have a particular naming pattern, so I can identify the set of databases; however, I need to make sure I'm always backing up the newest one. Read this tip to ensure you are backing up the latest database in a series.
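    The selection itself can be a one-liner (a sketch; the 'SalesData_%' naming pattern is hypothetical): sys.databases records each database's create_date, so the newest member of the series is simply the latest-created name that matches.

        -- Pick the most recently created database whose name fits the series pattern
        SELECT TOP (1) name
        FROM sys.databases
        WHERE name LIKE N'SalesData[_]%'
        ORDER BY create_date DESC;

    Feeding that name into a BACKUP DATABASE statement via dynamic SQL keeps the nightly job pointed at the current database without manual edits.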

    Read the article

  • Microsoft adds Git to CodePlex: the open source project hosting platform now supports the version control application

    Microsoft adds Git to CodePlex. The open source project hosting platform now supports the version control application alongside Mercurial and TFS. CodePlex, Microsoft's hosting platform for open source projects, now supports Git. Git is a free, decentralized version control application created by Linus Torvalds, the father of the Linux kernel, and distributed under the GNU GPL v2 license. CodePlex already offered the decentralized version control software Mercurial for distributed version control, and Team Foundation Server (which supports Subversion clients) for centralized version control. Despite...

    Read the article

  • MySQL hangs if connection comes from outside the LAN

    - by Subito
    I have a MySQL server operating just fine if I access it from its local LAN (192.168.100.0/24). If I try to access it from another LAN (192.168.113.0/24 in this case), it hangs for a really long time before delivering the result. SHOW PROCESSLIST; shows the connection's process in Sleep, with an empty State. If I strace -p this process I get the following output (23512 is the PID of the corresponding mysqld process):

        Process 23512 attached - interrupt to quit
        restart_syscall(<... resuming interrupted call ...>) = 1
        fcntl(10, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl(10, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        accept(10, {sa_family=AF_INET, sin_port=htons(51696), sin_addr=inet_addr("192.168.113.4")}, [16]) = 33
        fcntl(10, F_SETFL, O_RDWR) = 0
        rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, ) = 0
        getpeername(33, {sa_family=AF_INET, sin_port=htons(51696), sin_addr=inet_addr("192.168.113.4")}, [16]) = 0
        getsockname(33, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0
        open("/etc/hosts.allow", O_RDONLY) = 64
        fstat(64, {st_mode=S_IFREG|0644, st_size=580, ...}) = 0
        mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000
        read(64, "# /etc/hosts.allow: list of host"..., 4096) = 580
        read(64, "", 4096) = 0
        close(64) = 0
        munmap(0x7f9ce9839000, 4096) = 0
        open("/etc/hosts.deny", O_RDONLY) = 64
        fstat(64, {st_mode=S_IFREG|0644, st_size=880, ...}) = 0
        mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000
        read(64, "# /etc/hosts.deny: list of hosts"..., 4096) = 880
        read(64, "", 4096) = 0
        close(64) = 0
        munmap(0x7f9ce9839000, 4096) = 0
        getsockname(33, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0
        fcntl(33, F_SETFL, O_RDONLY) = 0
        fcntl(33, F_GETFL) = 0x2 (flags O_RDWR)
        setsockopt(33, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
        setsockopt(33, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
        fcntl(33, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        setsockopt(33, SOL_IP, IP_TOS, [8], 4) = 0
        setsockopt(33, SOL_TCP, TCP_NODELAY, [1], 4) = 0
        futex(0x7f9cea5c9564, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f9cea5c9560, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
        futex(0x7f9cea5c6fe0, FUTEX_WAKE_PRIVATE, 1) = 1
        poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=10, revents=POLLIN}])
        fcntl(10, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl(10, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        accept(10, {sa_family=AF_INET, sin_port=htons(51697), sin_addr=inet_addr("192.168.113.4")}, [16]) = 31
        fcntl(10, F_SETFL, O_RDWR) = 0
        rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, ) = 0
        getpeername(31, {sa_family=AF_INET, sin_port=htons(51697), sin_addr=inet_addr("192.168.113.4")}, [16]) = 0
        getsockname(31, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0
        open("/etc/hosts.allow", O_RDONLY) = 33
        fstat(33, {st_mode=S_IFREG|0644, st_size=580, ...}) = 0
        mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000
        read(33, "# /etc/hosts.allow: list of host"..., 4096) = 580
        read(33, "", 4096) = 0
        close(33) = 0
        munmap(0x7f9ce9839000, 4096) = 0
        open("/etc/hosts.deny", O_RDONLY) = 33
        fstat(33, {st_mode=S_IFREG|0644, st_size=880, ...}) = 0
        mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000
        read(33, "# /etc/hosts.deny: list of hosts"..., 4096) = 880
        read(33, "", 4096) = 0
        close(33) = 0
        munmap(0x7f9ce9839000, 4096) = 0
        getsockname(31, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0
        fcntl(31, F_SETFL, O_RDONLY) = 0
        fcntl(31, F_GETFL) = 0x2 (flags O_RDWR)
        setsockopt(31, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
        setsockopt(31, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
        fcntl(31, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        setsockopt(31, SOL_IP, IP_TOS, [8], 4) = 0
        setsockopt(31, SOL_TCP, TCP_NODELAY, [1], 4) = 0
        futex(0x7f9cea5c9564, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f9cea5c9560, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
        futex(0x7f9cea5c6fe0, FUTEX_WAKE_PRIVATE, 1) = 1
        poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1^C <unfinished ...>
        Process 23512 detached

    This output repeats itself until the answer gets sent. It can take up to 15 minutes until the request gets served. On the local LAN it's a matter of milliseconds. Why is this, and how can I debug it further?

    [Edit] tcpdump shows a ton of this:

        14:49:44.103107 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [S.], seq 1434117703, ack 1793610733, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
        14:49:44.135187 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 106:145, ack 182, win 4345, length 39
        14:49:44.135293 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [P.], seq 182:293, ack 145, win 115, length 111
        14:49:44.167025 IP 192.168.X.6.64624 > cassandra-test.mysql: Flags [.], ack 444, win 4280, length 0
        14:49:44.168933 IP 192.168.X.6.64626 > cassandra-test.mysql: Flags [.], ack 1, win 4390, length 0
        14:49:44.169088 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [P.], seq 1:89, ack 1, win 115, length 88
        14:49:44.169672 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 145:171, ack 293, win 4317, length 26
        14:49:44.169726 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [P.], seq 293:419, ack 171, win 115, length 126
        14:49:44.275111 IP 192.168.X.6.64626 > cassandra-test.mysql: Flags [P.], seq 1:74, ack 89, win 4368, length 73
        14:49:44.275131 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [.], ack 74, win 115, length 0
        14:49:44.275149 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 171:180, ack 419, win 4286, length 9
        14:49:44.275189 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [P.], seq 89:100, ack 74, win 115, length 11
        14:49:44.275264 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 180:185, ack 419, win 4286, length 5
        14:49:44.275281 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [.], ack 185, win 115, length 0
        14:49:44.275295 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [F.], seq 419, ack 185, win 115, length 0
        14:49:44.275650 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [F.], seq 185, ack 419, win 4286, length 0
        14:49:44.275660 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [.], ack 186, win 115, length 0
        14:49:44.275910 IP 192.168.X.6.64627 > cassandra-test.mysql: Flags [S], seq 2336421549, win 8192, options [mss 1351,nop,wscale 2,nop,nop,sackOK], length 0
        14:49:44.275921 IP cassandra-test.mysql > 192.168.X.6.64627: Flags [S.], seq 3289359778, ack 2336421550, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
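    One pattern the strace hints at (an assumption, not a confirmed diagnosis): on each new connection mysqld runs the hosts.allow/hosts.deny checks and, typically right after them, a reverse DNS lookup of the client address. If the DNS server reachable from the MySQL host cannot resolve 192.168.113.x addresses, every connection stalls until the lookup times out, which would match multi-minute delays from one subnet only. MySQL can skip the lookup entirely:

        # /etc/my.cnf (sketch): disable reverse DNS on incoming connections.
        # Note: GRANTs must then reference client IP addresses, not host names.
        [mysqld]
        skip-name-resolve

    Restart mysqld afterwards; if connections from the remote LAN become fast, the DNS path was the bottleneck.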

    Read the article

  • "Can't Connect to Server" from 2nd virtual host on VPS

    - by chaoskreator
    I'm using Debian 7 Wheezy and Apache 2.2.22, and I'm setting up Virtual Hosts for a number of websites on my VPS. I've successfully configured the VirtualHost directives for one of the sites, but the second one continually gives "Problem Loading Page" in Firefox. I've run configtest and it has verified all my syntax is correct, and I've checked all the permissions. Everything on the 2nd domain is pretty much copy/pasted from the first, so I'm not sure what the issue is, as there are no entries in /var/log/apache2/error.log other than where I have reloaded the configurations:

        /# cat /var/log/apache2/error.log
        [Thu May 29 01:19:00 2014] [notice] Graceful restart requested, doing restart
        [Thu May 29 01:19:00 2014] [info] Init: Seeding PRNG with 656 bytes of entropy
        [Thu May 29 01:19:00 2014] [info] Init: Generating temporary RSA private keys (512/1024 bits)
        [Thu May 29 01:19:00 2014] [info] Init: Generating temporary DH parameters (512/1024 bits)
        [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(253): shmcb_init allocated 512000 bytes of shared memory
        [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(272): for 511920 bytes (512000 including header), recommending 32 subcaches, 133 indexes each
        [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(306): shmcb_init_memory choices follow
        [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(308): subcache_num = 32
        [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(310): subcache_size = 15992
        [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(312): subcache_data_offset = 3208
        [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(314): subcache_data_size = 12784
        [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(316): index_num = 133
        [Thu May 29 01:19:00 2014] [info] Shared memory session cache initialised
        [Thu May 29 01:19:00 2014] [info] Init: Initializing (virtual) servers for SSL
        [Thu May 29 01:19:00 2014] [info] mod_ssl/2.2.22 compiled against Server: Apache/2.2.22, Library: OpenSSL/1.0.1e
        [Thu May 29 01:19:00 2014] [notice] Apache/2.2.22 (Debian) PHP/5.4.4-14+deb7u9 mod_ssl/2.2.22 OpenSSL/1.0.1e mod_perl/2.0.7 Perl/v5.14.2 configured -- resuming normal operations
        [Thu May 29 01:19:00 2014] [info] Server built: Mar 4 2013 22:05:16
        [Thu May 29 01:19:00 2014] [debug] prefork.c(1023): AcceptMutex: sysvsem (default: sysvsem)

    I've made sure to enable each vhost with a2ensite {sitename.conf}, with no errors there, either. Below are the contents of the configuration files...

    /etc/apache2/apache2.conf:

        # Global configuration
        LockFile ${APACHE_LOCK_DIR}/accept.lock
        PidFile ${APACHE_PID_FILE}
        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5

        ## Server-Pool Size Regulation (MPM specific)
        # prefork MPM
        # StartServers: number of server processes to start
        # MinSpareServers: minimum number of server processes which are kept spare
        # MaxSpareServers: maximum number of server processes which are kept spare
        # MaxClients: maximum number of server processes allowed to start
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_prefork_module>
            StartServers 5
            MinSpareServers 5
            MaxSpareServers 10
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a
        # graceful restart. ThreadLimit can only be changed by stopping and starting Apache.
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxClients: maximum number of simultaneous client connections
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>

        # event MPM
        # StartServers: initial number of server processes to start
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxClients: maximum number of simultaneous client connections
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_event_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>

        # These need to be set in /etc/apache2/envvars
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}

        # AccessFileName: The name of the file to look for in each directory
        # for additional configuration directives. See also the AllowOverride directive.
        AccessFileName .htaccess

        # The following lines prevent .htaccess and .htpasswd files from being
        # viewed by Web clients.
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        DefaultType None
        HostnameLookups Off
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel debug

        # Include module configuration:
        Include mods-enabled/*.load
        Include mods-enabled/*.conf

        # Include list of ports to listen on and which to use for name based vhosts
        Include ports.conf

        # The following directives define some format nicknames for use with a
        # CustomLog directive (see below). If you are behind a reverse proxy,
        # you might want to change %h into %{X-Forwarded-For}i
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        <Directory "/var/www">
            Order allow,deny
            Allow from all
            Require all granted
        </Directory>

        # Include generic snippets of statements
        Include conf.d/

        # Include the virtual host configurations:
        Include sites-enabled/*.conf
        NameVirtualHost *:80

    /etc/apache2/sites-available/site1.net.conf:

        <VirtualHost *:80>
            ServerName site1.net
            ServerAlias site1.net *.site1.net
            DocumentRoot "/var/www/site1"
            ErrorLog "/var/www/site1/logs/error.log"
            CustomLog "/var/www/site1/logs/access.log" vhost_combined
            <Directory "/var/www/site1">
                Options None
                AllowOverride All
                Order allow,deny
                Allow from all
                Satisfy Any
            </Directory>
        </VirtualHost>

    /etc/apache2/sites-available/site2.com.conf:

        <VirtualHost *:80>
            ServerName site2.com
            ServerAlias site2.com *.site2.com
            DocumentRoot "/var/www/site2"
            ErrorLog "/var/www/site2/logs/error.log"
            CustomLog "/var/www/site2/logs/access.log" vhost_combined
            <Directory "/var/www/site2">
                Options None
                AllowOverride All
                Order allow,deny
                Allow from all
                Satisfy Any
            </Directory>
        </VirtualHost>

    I've also tried setting NameVirtualHost like:

        Listen 80
        NameVirtualHost 23.88.121.82:80
        NameVirtualHost 127.0.0.1:80

    and the VirtualHost directives:

        <VirtualHost 23.88.121.82:80>
        ...
        </VirtualHost>

    for both sites, but that causes the first site to fail as well. I'm wondering if I need to set up individual IPs for each site, possibly? I have 2 more IPv4 and 3 IPv6 addresses available, if that would make a difference. Also, in the grand scheme of things, I will need to enable SSL for the first site. I've been reading that I'll need to basically just mimic the directives for listening on port 80, only on port 443, and make sure mod_ssl is enabled?

    EDIT: I just ran apache2 -t to test the config files that way, and got the error: apache2: bad user name ${APACHE_RUN_USER}. However, apachectl configtest returns Syntax OK. There are no other mentions of errors with the mutex anywhere else, however. I was pretty sure if there was an error with the user Apache was supposed to run under, the server wouldn't start at all...

    EDIT 2: Restarting Apache fixed the bad user name error.
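    Two quick checks often untangle vhost puzzles like this (a sketch, not a diagnosis of this particular VPS): apache2 -t run directly does not load /etc/apache2/envvars, which by itself explains the bad user name ${APACHE_RUN_USER} error; and apachectl -S prints the name-based virtual host map exactly as Apache parsed it, so you can see whether the second ServerName was actually picked up.

        # envvars defines APACHE_RUN_USER etc.; source it before calling the binary directly
        source /etc/apache2/envvars && apache2 -t
        # Dump the virtual host configuration as Apache parsed it
        apachectl -S

    If site2.com is missing from the -S output, the usual suspects are a missing symlink in sites-enabled or a filename that doesn't match the Include sites-enabled/*.conf pattern shown above.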

    Read the article

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable for configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error.

    By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however, this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when:
    - Clients/servers that are involved in the authentication process cannot use Kerberos.
    - The client does not provide the information necessary to use Kerberos.

    An in-depth discussion of Kerberos authentication is beyond the scope of this post; however, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller; impersonate the client when passing the request to the database server; and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used.

    Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a user’s Windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs as NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication.

    The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However, Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error). To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established.

    In the illustration above, the tiers represent the following:
    - Client tier (computer 1): The client computer from which an application makes a request.
    - Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now).
    - Back end tier (computer 3): The Database/Analysis Services server/cluster where the requested data is stored.

    In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation, and configure the authentication types for Reporting Services.

    Service Principal Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered:

        -- SQL Server service
        SETSPN -S mssqlsvc/servername:1433 Domain\SQL

    For named instances, or if the default instance is running under a different port, the specific port number should be used.

        -- Reporting Services service
        SETSPN -S http/servername Domain\SSRS
        SETSPN -S http/servername.domain.com Domain\SSRS

    The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered:

        SETSPN -S http/www.reports.com Domain\SSRS

        -- Analysis Services service
        SETSPN -S msolapsvc.3/servername Domain\SSAS

    Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer:
    - Client: 1. The requesting application must support the Kerberos authentication protocol. 2. The user account making the request must be configured on the domain controller; confirm that the following option is not selected: Account is sensitive and cannot be delegated.
    - Servers: 1. The service accounts must be trusted for delegation on the domain controller. 2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs.

    In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation. We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports).

    Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server:

        <Authentication>
            <AuthenticationTypes>
                <RSWindowsNegotiate/>
            </AuthenticationTypes>
            <EnableAuthPersistence>true</EnableAuthPersistence>
        </Authentication>

    This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test to make sure Kerberos authentication is working properly by running a report from Report Manager that is configured to use Windows Integrated authentication (either connecting to an Analysis Services or SQL Server back end).

    Resources:
    - Manage Kerberos Authentication Issues in a Reporting Services Environment: http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx
    - Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176
    - How to: Configure Windows Authentication in Reporting Services: http://msdn.microsoft.com/en-us/library/cc281253.aspx
    - RSReportServer Configuration File: http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication
    - Planning for Browser Support: http://msdn.microsoft.com/en-us/library/ms156511.aspx
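    After registering the SPNs, it is worth verifying them before testing reports (a quick check, not part of the original walkthrough): setspn -L lists everything registered against an account, which catches typos and duplicates early.

        rem List the SPNs registered for the Reporting Services service account
        setspn -L Domain\SSRS
        rem Repeat for the SQL Server and Analysis Services accounts
        setspn -L Domain\SQL
        setspn -L Domain\SSAS

    Duplicate SPNs (the same SPN registered on two accounts) silently break Kerberos, so any unexpected entries here are worth chasing down before touching the delegation settings.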

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements. This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:
    - Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
    - Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate whether the appropriate patches, privileges, setups, and profile options have been configured. This feature reduces the setup and configuration time needed to be up and operational.

    Lifecycle Automation Enhancements. Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management:
    - Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate whether the appropriate patches, privileges, setups, and profile options have been configured, reducing the setup and configuration time needed to be up and operational.

    Customization Manager:
    - Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
    - Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings and E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his or her own mappings. Users can use public mappings, but cannot edit or change settings.
    - Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page.
    - Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages.
    - Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction.
    - OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
    - Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
    - Enhanced Standard Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
    - HTML Package Readme: The package Readme is in HTML format and includes the file listing.
    - Advanced Package Search Capabilities: The ability to use more criteria within the advanced package search (that is, Public, Last Updated By, File Source Mapping, and E-Business Suite Mapping).
    - Enhanced Package Build Notifications: More detailed information on the results of a package build process, and better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager:
    - Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections.
    - Support for Superseded Patches: Automatic check for superseded patches, allowing users to easily add superseded patches to the Patch Run. This makes Patch Runs more comprehensive and correct, removes many manual and laborious tasks, and frees up Apps DBAs for higher value-added tasks.
    - Automatic Primary Node Identification: Users can now specify the "primary node" (that is, the node that hosts the Shared APPL_TOP) during the Patch Run interview process; available for Release 12 only.

    Setup Manager:
    - Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this feature can provide better and more accurate fine-tuning of extracts.
    - Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
    - Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times.
    - Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format, giving native support for an industry-standard format.
    - Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers.

    User Experience Management Enhancements. Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. This latest release of the management suite includes numerous improvements in real user monitoring support:
    - KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (two decimal places by default), and KPI numerator and denominator information, which can now be viewed and made available for export.
    - Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents.
    - Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files but within the Reporter database, though it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
    - SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes.
    - Performance Improvements: The dashboard facility has been enhanced to support the parallel loading of items; in the case of dashboards containing large numbers of items, this can result in a significant performance improvement. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour). There is also a performance improvement when querying the all-sessions group.

    Technical Prerequisites, Download and Installation Instructions. The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download, and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:
    - Oracle Solaris on SPARC (64-bit) (9, 10)
    - HP-UX Itanium (11.23, 11.31)
    - HP-UX PA-RISC (64-bit) (11.23, 11.31)
    - IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Refreshing Your PC Won’t Help: Why Bloatware is Still a Problem on Windows 8

    - by Chris Hoffman
    Bloatware is still a big problem on new Windows 8 and 8.1 PCs. Some websites will tell you that you can easily get rid of manufacturer-installed bloatware with Windows 8's Reset feature, but they're generally wrong. This junk software often turns the process of powering on your new PC from what could be a delightful experience into a tedious slog, forcing you to spend hours cleaning up your new PC before you can enjoy it.

    Why Refreshing Your PC (Probably) Won't Help

    Manufacturers install software along with Windows on their new PCs. In addition to hardware drivers that allow the PC's hardware to work properly, they install more questionable things like trial antivirus software and other nagware. Much of this software runs at boot, cluttering the system tray and slowing down boot times, often dramatically. Software companies pay computer manufacturers to include this stuff. It's installed to make the PC manufacturer money at the cost of making the Windows computer worse for actual users.

    Windows 8 includes "Refresh Your PC" and "Reset Your PC" features that allow Windows users to quickly get their computers back to a fresh state. It's essentially a quick, streamlined way of reinstalling Windows. If you install Windows 8 or 8.1 yourself, the Refresh operation will give your PC a clean Windows system without any additional third-party software. However, Microsoft allows computer manufacturers to customize their Refresh images. In other words, most computer manufacturers will build their drivers, bloatware, and other system customizations into the Refresh image. When you Refresh your computer, you'll just get back to the factory-provided system, complete with bloatware.

    It's possible that some computer manufacturers aren't building bloatware into their refresh images in this way. It's also possible that, when Windows 8 came out, some computer manufacturers didn't realize they could do this and that refreshing a new PC would strip the bloatware. However, on most Windows 8 and 8.1 PCs, you'll probably see bloatware come back when you refresh your PC.

    It's easy to understand how PC manufacturers do this. You can create your own Refresh images on Windows 8 and 8.1 with just a simple command (see the sketch below), replacing Microsoft's image with a customized one. Manufacturers can install their own refresh images in the same way. Microsoft doesn't lock down the Refresh feature.
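    The "simple command" is presumably recimg, which ships with Windows 8 and 8.1 (a sketch; the directory path is made up):

        rem Capture the current installation as the Refresh image and register it
        recimg /createimage C:\RefreshImages\Clean
        rem Confirm which image the Refresh feature will use
        recimg /showcurrent

    Run from an elevated prompt on a freshly cleaned-up system, this is also how an end user can make Refresh restore a clean state instead of the factory image.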
One of the nice things about Microsoft’s Surface PCs is that they’re free of the typical bloatware. Microsoft won’t take money from Norton to include nagging software that worsens the experience. If you pick up a Surface device that provides Windows 8.1 and 8 as Microsoft intended it — or install a fresh Windows 8.1 or 8 system — you won’t see any bloatware. Microsoft is also continuing their Signature program. New PCs purchased from Microsoft’s official stores are considered “Signature PCs” and don’t have the typical bloatware. For example, the same laptop could be full of bloatware in a traditional computer store and clean, without the nasty bloatware when purchased from a Microsoft Store. Microsoft will also continue to charge you $99 if you want them to remove your computer’s bloatware for you — that’s the more questionable part of the Signature program. Windows 8 App Bloatware is an Improvement There’s a new type of bloatware on new Windows 8 systems, which is thankfully less harmful. This is bloatware in the form of included “Windows 8-style”, “Store-style”, or “Modern” apps in the new, tiled interface. For example, Amazon may pay a computer manufacturer to include the Amazon Kindle app from the Windows Store. (The manufacturer may also just receive a cut of book sales for including it. We’re not sure how the revenue sharing works — but it’s clear PC manufacturers are getting money from Amazon.) The manufacturer will then install the Amazon Kindle app from the Windows Store by default. This included software is technically some amount of clutter, but it doesn’t cause the problems older types of bloatware does. It won’t automatically load and delay your computer’s startup process, clutter your system tray, or take up memory while you’re using your computer. For this reason, a shift to including new-style apps as bloatware is a definite improvement over older styles of bloatware. Unfortunately, this type of bloatware has not replaced traditional desktop bloatware, and new Windows PCs will generally have both. Windows RT is Immune to Typical Bloatware, But… Microsoft’s Windows RT can’t run Microsoft desktop software, so it’s immune to traditional bloatware. Just as you can’t install your own desktop programs on it, the Windows RT device’s manufacturer can’t install their own desktop bloatware. While Windows RT could be an antidote to bloatware, this advantage comes at the cost of being able to install any type of desktop software at all. Windows RT has also seemingly failed — while a variety of manufacturers came out with their own Windows RT devices when Windows 8 was first released, they’ve all since been withdrawn from the market. Manufacturers who created Windows RT devices have criticized it in the media and stated they have no plans to produce any future Windows RT devices. The only Windows RT devices still on the market are Microsoft’s Surface (originally named Surface RT) and Surface 2. Nokia is also coming out with their own Windows RT tablet, but they’re in the process of being purchased by Microsoft. In other words, Windows RT just isn’t a factor when it comes to bloatware — you wouldn’t get a Windows RT device unless you purchased a Surface, but those wouldn’t come with bloatware anyway. Removing Bloatware or Reinstalling Windows 8.1 While bloatware is still a problem on new Windows systems and the Refresh option probably won’t help you, you can still eliminate bloatware in the traditional way. 
Bloatware can be uninstalled from the Windows Control Panel or with a dedicated removal tool like PC Decrapifier, which tries to automatically uninstall the junk for you. You can also do what Windows geeks have always tended to do with new computers — reinstall Windows 8 or 8.1 from scratch with installation media from Microsoft. You'll get a clean Windows system, and you can install only the hardware drivers and other software you need.

Unfortunately, bloatware is still a big problem for Windows PCs. Windows 8 tries to do some things to address bloatware, but it ultimately comes up short. Most Windows PCs sold in most stores to most people will still have the typical bloatware slowing down the boot process, wasting memory, and adding clutter.

Image Credit: LG on Flickr, Intel Free Press on Flickr, Wilson Hui on Flickr, Intel Free Press on Flickr, Vernon Chan on Flickr

    Read the article

  • Launch market place with id of an application that doesn't exist in the android market place

    - by Gaurav
    Hi, I am creating an application that checks whether a package is installed and then launches the marketplace with its id. When I try to launch the marketplace with the id of an application, say com.mybrowser.android, by firing an android.intent.action.VIEW intent with the url market://details?id=com.mybrowser.android, the marketplace application does launch but crashes afterwards. Note: the application com.mybrowser.android doesn't exist in the marketplace. MyApplication is my application.

$ adb logcat
I/ActivityManager( 1030): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=myapp.testapp/.MyApplication }
I/ActivityManager( 1030): Start proc myapp.testapp for activity myapp.testapp/.MyApplication: pid=3858 uid=10047 gids={1015, 3003}
I/MyApplication( 3858): [ Activity CREATED ]
I/MyApplication( 3858): [ Activity STARTED ]
I/MyApplication( 3858): onResume
D/dalvikvm( 1109): GC freed 6571 objects / 423480 bytes in 73ms
I/MyApplication( 3858): Pressed OK button
I/MyApplication( 3858): Broadcasting Intent: android.intent.action.VIEW, data: market://details?id=com.mybrowser.android
I/ActivityManager( 1030): Starting activity: Intent { act=android.intent.action.VIEW dat=market://details?id=com.mybrowser.android flg=0x10000000 cmp=com.android.vending/.AssetInfoActivity }
I/MyApplication( 3858): onPause
I/ActivityManager( 1030): Start proc com.android.vending for activity com.android.vending/.AssetInfoActivity: pid=3865 uid=10023 gids={3003}
I/ActivityThread( 3865): Publishing provider com.android.vending.SuggestionsProvider: com.android.vending.SuggestionsProvider
D/dalvikvm( 1030): GREF has increased to 701
I/vending ( 3865): com.android.vending.api.RadioHttpClient$1.handleMessage(): Handle DATA_STATE_CHANGED event: NetworkInfo: type: WIFI[], state: CONNECTED/CONNECTED, reason: (unspecified), extra: (none), roaming: false, failover: false, isAvailable: true
I/ActivityManager( 1030): Displayed activity com.android.vending/.AssetInfoActivity: 609 ms (total 7678 ms)
D/dalvikvm( 1030): GC freed 10458 objects / 676440 bytes in 128ms
I/MyApplication( 3858): [ Activity STOPPED ]
D/dalvikvm( 3865): GC freed 3538 objects / 254008 bytes in 84ms
W/dalvikvm( 3865): threadid=19: thread exiting with uncaught exception (group=0x4001b180)
E/AndroidRuntime( 3865): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception
E/AndroidRuntime( 3865): java.lang.RuntimeException: An error occured while executing doInBackground()
E/AndroidRuntime( 3865): at android.os.AsyncTask$3.done(AsyncTask.java:200)
E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273)
E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask.setException(FutureTask.java:124)
E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307)
E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask.run(FutureTask.java:137)
E/AndroidRuntime( 3865): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068)
E/AndroidRuntime( 3865): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561)
E/AndroidRuntime( 3865): at java.lang.Thread.run(Thread.java:1096)
E/AndroidRuntime( 3865): Caused by: java.lang.NullPointerException
E/AndroidRuntime( 3865): at com.android.vending.AssetItemAdapter$ReloadLocalAssetInformationTask.doInBackground(AssetItemAdapter.java:845)
E/AndroidRuntime( 3865): at com.android.vending.AssetItemAdapter$ReloadLocalAssetInformationTask.doInBackground(AssetItemAdapter.java:831)
E/AndroidRuntime( 3865): at android.os.AsyncTask$2.call(AsyncTask.java:185)
E/AndroidRuntime( 3865): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
E/AndroidRuntime( 3865): ... 4 more
I/Process ( 1030): Sending signal. PID: 3865 SIG: 3
I/dalvikvm( 3865): threadid=7: reacting to signal 3
I/dalvikvm( 3865): Wrote stack trace to '/data/anr/traces.txt'
I/DumpStateReceiver( 1030): Added state dump to 1 crashes
D/AndroidRuntime( 3865): Shutting down VM
W/dalvikvm( 3865): threadid=3: thread exiting with uncaught exception (group=0x4001b180)
E/AndroidRuntime( 3865): Uncaught handler: thread main exiting due to uncaught exception
E/AndroidRuntime( 3865): java.lang.NullPointerException
E/AndroidRuntime( 3865): at com.android.vending.controller.AssetInfoActivityController.getIdDeferToLocal(AssetInfoActivityController.java:637)
E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity.displayAssetInfo(AssetInfoActivity.java:556)
E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity.access$800(AssetInfoActivity.java:74)
E/AndroidRuntime( 3865): at com.android.vending.AssetInfoActivity$LoadAssetInfoAction$1.run(AssetInfoActivity.java:917)
E/AndroidRuntime( 3865): at android.os.Handler.handleCallback(Handler.java:587)
E/AndroidRuntime( 3865): at android.os.Handler.dispatchMessage(Handler.java:92)
E/AndroidRuntime( 3865): at android.os.Looper.loop(Looper.java:123)
E/AndroidRuntime( 3865): at android.app.ActivityThread.main(ActivityThread.java:4363)
E/AndroidRuntime( 3865): at java.lang.reflect.Method.invokeNative(Native Method)
E/AndroidRuntime( 3865): at java.lang.reflect.Method.invoke(Method.java:521)
E/AndroidRuntime( 3865): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
E/AndroidRuntime( 3865): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
E/AndroidRuntime( 3865): at dalvik.system.NativeStart.main(Native Method)
I/Process ( 1030): Sending signal. PID: 3865 SIG: 3
W/ActivityManager( 1030): Process com.android.vending has crashed too many times: killing!
D/ActivityManager( 1030): Force finishing activity com.android.vending/.AssetInfoActivity
I/dalvikvm( 3865): threadid=7: reacting to signal 3
D/ActivityManager( 1030): Force removing process ProcessRecord{44e48548 3865:com.android.vending/10023} (com.android.vending/10023)

However, when I try to launch the marketplace for a package that does exist in the marketplace, say com.opera.mini.android, everything works. Log for this case:

D/dalvikvm( 966): GC freed 2781 objects / 195056 bytes in 99ms
I/MyApplication( 1165): Pressed OK button
I/MyApplication( 1165): Broadcasting Intent: android.intent.action.VIEW, data: market://details?id=com.opera.mini.android
I/ActivityManager( 78): Starting activity: Intent { act=android.intent.action.VIEW dat=market://details?id=com.opera.mini.android flg=0x10000000 cmp=com.android.vending/.AssetInfoActivity }
I/AndroidRuntime( 1165): AndroidRuntime onExit calling exit(0)
I/WindowManager( 78): WIN DEATH: Window{44c72308 myapp.testapp/myapp.testapp.MyApplication paused=true}
I/ActivityManager( 78): Process myapp.testapp (pid 1165) has died.
I/WindowManager( 78): WIN DEATH: Window{44c72958 myapp.testapp/myapp.testapp.MyApplication paused=false}
D/dalvikvm( 78): GC freed 31778 objects / 1796368 bytes in 142ms
I/ActivityManager( 78): Displayed activity com.android.vending/.AssetInfoActivity: 214 ms (total 22866 ms)
W/KeyCharacterMap( 978): No keyboard for id 65540
W/KeyCharacterMap( 978): Using default keymap: /system/usr/keychars/qwerty.kcm.bin
V/RenderScript_jni( 966): surfaceCreated
V/RenderScript_jni( 966): surfaceChanged
V/RenderScript( 966): setSurface 480 762 0x573430
D/ViewFlipper( 966): updateRunning() mVisible=true, mStarted=true, mUserPresent=true, mRunning=true
D/dalvikvm( 978): GC freed 10065 objects / 624440 bytes in 95ms

Any ideas? Thanks in advance!
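For illustration, a minimal sketch of the check-then-launch flow described above; the package name is the hypothetical one from the question, and the ActivityNotFoundException guard is an addition, not part of the original code:

import android.app.Activity;
import android.content.ActivityNotFoundException;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.net.Uri;

public class MarketLauncher {

    // Returns true if the given package is installed on the device.
    public static boolean isInstalled(Activity activity, String packageName) {
        try {
            activity.getPackageManager().getPackageInfo(packageName, 0);
            return true;
        } catch (PackageManager.NameNotFoundException e) {
            return false;
        }
    }

    // Opens the Market details page for the package, e.g. "com.mybrowser.android".
    public static void openMarketPage(Activity activity, String packageName) {
        Uri uri = Uri.parse("market://details?id=" + packageName);
        try {
            activity.startActivity(new Intent(Intent.ACTION_VIEW, uri));
        } catch (ActivityNotFoundException e) {
            // No Market client is installed on this device.
        }
    }
}

Note that this only guards the calling side; the NullPointerException in the log above occurs inside the Market app itself after it fails to find a listing for the id, which the caller cannot catch.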

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies need to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). Ok, so all I need to do is to reference both NUnit versions: the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias, which then prefixes all types inside it. Then I could decorate my tests with the TestFixture and Test attributes from both NUnit versions and everything worked fine, except that this was ugly. After playing around a little bit to make it simpler, I found that I did not need to reference both NUnit.Framework assemblies at all. The test runners do not require the TestFixture and Test attributes in their specific version. That is really neat: since the test runners are instructed by attributes in a declarative way, there is really no need to tie the runners to a specific version. At its core NUnit has this little method hidden away to find matching TestFixtures and Tests:

public bool CanBuildFrom(Type type)
{
    if (!(!type.IsAbstract || type.IsSealed))
    {
        return false;
    }
    return (((Reflect.HasAttribute(type, "NUnit.Framework.TestFixtureAttribute", true) ||
              Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute", true)) ||
              Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute", true)) ||
              Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute", true));
}

That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes, and the runner executes my intent without binding me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions), but this is also handled nicely by checking not the concrete type but the name of the caught exception type as a string (a small sketch of this name-based matching follows below).
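To make that name-based matching concrete, here is a minimal sketch (my own illustration, not NUnit's actual implementation) of detecting an attribute by its full type name, so that no specific NUnit.Framework version ever needs to be referenced:

using System;
using System.Linq;
using System.Reflection;

static class VersionAgnosticReflect
{
    // True if the member carries an attribute whose full type name matches,
    // no matter which assembly version the attribute type was loaded from.
    public static bool HasAttribute(MemberInfo member, string fullName)
    {
        return member.GetCustomAttributes(true)
                     .Any(a => a.GetType().FullName == fullName);
    }
}

A runner built on this depends only on attribute names, which is exactly the small declarative contract that keeps it version-agnostic.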
What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly, with all its consequences. Type equality is lost between versions, so none of your casts will work. That means you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correctly versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work, but as NUnit shows, it can be easy. Simplicity is therefore not just a nice thing to have but requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model beyond simple will not be maintainable. There are different approaches to versioning. Below are my own personal observations of how versioning works within the .NET Framework and NUnit.

Versioning Models

1. Bug Fixing and New Isolated Features

When you only need to fix bugs, there is no need to break anything. This is especially true when you have a big API surface. Microsoft did this with the .NET Framework 3.0, which left the CLR as is but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundation. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well, mostly; each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new green assemblies of .NET 3.0/3.5 did not have such obligations, since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new, unrelated features. If you have a big API surface you should strive hard to do the same, or you will break your customers' code with every release.

2. New Breaking Features

There are times when really new things need to be added to an existing product. The .NET Framework 4.0 changed the CLR in many ways, which caused subtly different behavior although the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. a changed method signature void Func() -> bool Func()), but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0/3.0/3.5 applications will not automatically use the .NET 4.0 runtime when it is installed; they will keep using the "old" one. What is interesting is that a side-by-side execution model of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation. You will have 2 GCs, 2 JIT compilers, and 2 finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms). Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint, because everything is loaded and running two times.

3. New Non-Breaking Features

It really depends where you break things. NUnit has evolved, and many different Assert, Expect, … methods have been added. These changes are all localized in the NUnit.Framework assembly, which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable, it is possible to write test executors which can run tests written for NUnit 1.0, because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way, but this is only feasible if the interaction model is relatively simple.

Versioning software is hard, and it looks like it will remain hard, since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.

    Read the article

  • Faye private pub web sockets Errno::ECONNREFUSED: Connection refused - connect(2)

    - by Rubytastic
    Faye private_pub has issues connecting. It works from the Rails console and from inside the application, but fails when called from a background process like delayed_job or sidekiq. I have been unable to resolve this issue for some time now; does anyone know why this happens?

Errno::ECONNREFUSED: Connection refused - connect(2)
/Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/resolv-replace.rb:23:in `initialize'
/Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/resolv-replace.rb:23:in `initialize'
/Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:878:in `open'
/Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
/Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
/Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:877:in `connect'
/Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
/Users/jordan/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/net/http.rb:851:in `start'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/private_pub-1.0.3/lib/private_pub.rb:42:in `publish_message'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/private_pub-1.0.3/lib/private_pub.rb:29:in `publish_to'
/srv/books/app/workers/session_reload.rb:16:in `perform'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:48:in `block (3 levels) in process'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:119:in `call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:119:in `block in invoke'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/newrelic_rpm-3.6.8.168/lib/new_relic/agent/instrumentation/sidekiq.rb:25:in `block in call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/newrelic_rpm-3.6.8.168/lib/new_relic/agent/instrumentation/controller_instrumentation.rb:324:in `perform_action_with_newrelic_trace'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/newrelic_rpm-3.6.8.168/lib/new_relic/agent/instrumentation/sidekiq.rb:21:in `call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:in `block in invoke'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-failures-0.2.2/lib/sidekiq/failures/middleware.rb:10:in `call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:in `block in invoke'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/server/active_record.rb:6:in `call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:in `block in invoke'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/server/retry_jobs.rb:62:in `call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:in `block in invoke'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/server/logging.rb:11:in `block in call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/logging.rb:22:in `with_context'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/server/logging.rb:7:in `call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:121:in `block in invoke'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:124:in `call'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/middleware/chain.rb:124:in `invoke'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:47:in `block (2 levels) in process'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:102:in `stats'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:46:in `block in process'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:83:in `do_defer'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/sidekiq-2.16.0/lib/sidekiq/processor.rb:37:in `process'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `public_send'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/calls.rb:25:in `dispatch'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/calls.rb:122:in `dispatch'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/actor.rb:322:in `block in handle_message'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/actor.rb:416:in `block in task'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/tasks.rb:55:in `block in initialize'
/Users/jordan/.rvm/gems/ruby-2.0.0-p247@books/gems/celluloid-0.15.2/lib/celluloid/tasks/task_fiber.rb:13:in `block in create'
Processor: dev-air.local:db67c04914cdef80c501043115298f6d-70211452597260
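For context, private_pub publishes over plain HTTP to the Faye server named in its YAML config, so the worker process must have that config loaded and the server must be reachable from it. A minimal sketch, with the config path and channel as assumptions:

require "private_pub"

# Inside Rails an initializer normally does this; when reproducing the call
# outside a full Rails boot, load the config (Faye URL + secret) explicitly.
PrivatePub.load_config(
  File.expand_path("../../config/private_pub.yml", __FILE__),
  ENV["RAILS_ENV"] || "development"
)

# Errno::ECONNREFUSED at this call means nothing is listening at the
# configured server URL from this process's point of view.
PrivatePub.publish_to("/messages/new", text: "hello from the worker")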

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important concept in SQL Server. I have recently started to explore it and I am learning some good concepts. Here are two very important blog posts which one should go over before continuing with this post: Installing Data Quality Services (DQS) on SQL Server 2012, and Connecting Error to Data Quality Services (DQS) on SQL Server 2012. This article is an introduction to Data Quality Services for beginners; we will be using an Excel file as the source data.

In the first article we learned to install DQS. In this article we will see how we can build a Knowledge Base and use it to help us identify the quality of the data, as well as help correct bad data. Here are the two very important steps we will be learning in this tutorial:

Building a New Knowledge Base
Creating a New Data Quality Project

Let us start building the Knowledge Base. Click on New Knowledge Base. In our project we will be using Excel as the knowledge base source. Here is the Excel file we will be using: there are two columns, one is Colors and another is Shade. They are independent columns, not related to each other. The point I am trying to show is that Column A contains unique data, while Column B contains duplicate records.

Clicking on New Knowledge Base will bring up the following screen. Enter the name of the new knowledge base. Clicking NEXT will bring up the following screen, where it will let you select the Excel file and also select the source columns. I have selected both Colors and Shade as source columns.

Creating a domain is very important. Here you can create a unique domain, or a domain which is built compositely from Colors and Shade. As this is the first example, I will create unique domains – for Colors I will create the domain Colors, and for Shade I will create the domain Shade. Here is the screen which demonstrates how it will look after creating the domains.

Clicking NEXT will bring you to the following screen, where you can do the data discovery. Clicking on START will start the processing of the source data provided. The pre-processed data will show various information related to the source data. In our case it shows that the Colors column has unique data, whereas Shade has non-unique data with only two unique rows. In the next screen you can add more rows, as well as see the frequency of the data as the unique values are listed. Clicking NEXT will publish the knowledge base which was just created.

Now the knowledge base is created. We will take some random data and attempt a DQS cleansing run over it. I am using another Excel sheet here for simplicity. In reality you can just as easily use a SQL Server table for the same purpose; a sketch follows below.
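As an aside, if you would rather keep the source data in SQL Server instead of Excel, an equivalent table might look like this; the table name and values are hypothetical, mirroring the Excel example:

-- Hypothetical SQL Server equivalent of the Excel source
CREATE TABLE dbo.ColorsSource
(
    Colors NVARCHAR(50),
    Shade  NVARCHAR(50)
);

INSERT INTO dbo.ColorsSource (Colors, Shade)
VALUES (N'Red', N'Dark'),
       (N'Blue', N'Light'),
       (N'Green', N'Light'),
       (N'Black', N'Drak'); -- deliberate typo for DQS to flag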
Click on New Data Quality Project to start a DQS project. In the next screen it will ask which knowledge base to use; we will be using our Colors knowledge base, which we just created. In the Colors knowledge base we had two columns – 1) Colors and 2) Shade – and in our case we will use both of the mappings. A user can select one or multiple column mappings here.

Now comes the most important phase of the whole project. Click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed, and it completed the task with the necessary information. It showed that in the Colors column it did not correct any value by itself, but for the Shade values it has a suggestion.

We can train DQS to correct values, but let us keep that subject for future blog posts. Now click Next and keep the domain Colors selected on the left side. It will show that there are two incorrect values which need to be corrected. This is the place where, once a value is corrected, it will be auto-corrected in the future. I manually corrected the values here and clicked the Approve radio buttons. As soon as I clicked the Approve buttons, the rows disappeared from this tab and moved to the Corrected tab. If I had clicked Reject, it would have moved the rows to the Invalid tab instead. In this screen you can see how the two corrected rows are shown. You can click on the Corrected tab and see the previously validated 6 rows which passed the DQS process.

Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer, Dark, with a confidence level of 77%. That is quite a high confidence level, and manual observation also confirms that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab.

On the next screen DQS shows the summary of all the activities. It also demonstrates how the correction of the quality of the data was performed. The user can export the data to a SQL Server table, CSV file, or Excel. The user also has the option to export either the data with all the associated cleansing info, or the data only. I will select Data Only for demonstration purposes. Clicking Export will generate the file. Let us open the generated file: it looks complete and corrected.

Well, we have successfully completed the DQS process. The process is indeed very easy. I suggest you try this out yourself and you will find it very easy to learn. In the future we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need. It will be indeed interesting to see where it is implemented.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer's Conference (PDC) in 2010 we announced an addition to the Computational Roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them.

Virtualization

Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can "host" or run several "virtual" computers. These virtual computers can run anywhere - including at a vendor's location. Some companies refer to this as Cloud Computing, since the hardware is operated and maintained elsewhere.

IaaS

The more detailed definition of this type of computing is called Infrastructure as a Service (IaaS), since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, which runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud.

State-ful and Stateless Programming

This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide. This is because much of the software developed in this environment maintains "state" - and that requires a little explanation.

"State-ful programming" means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation - you expect that the other person picks up the phone, listens to you, and talks back, all in a single unit of time.

In "stateless" computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you're doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the "send" button (the network has the state) and you go to a meeting. The server receives the message, stores it in a mail program's database (the mail server has the state now), and continues working on other mail. Finally, the other party logs on to their mail client, reads the mail (the other user has the state), responds to it, and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn't maintain the state; each component does. This is the exact concept behind coding for Windows Azure.
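As a rough illustration of that message-at-a-time style, here is a minimal sketch of a stateless worker loop using the Windows Azure storage client of that era; the queue name is hypothetical and error handling is omitted:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class StatelessWorker
{
    public static void Run(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient()
                                  .GetQueueReference("work-items"); // hypothetical name
        queue.CreateIfNotExist();

        while (true)
        {
            // Each message is a self-contained unit of work. No in-memory state
            // survives between iterations, so any number of copies of this loop
            // can run in parallel on different nodes.
            CloudQueueMessage msg = queue.GetMessage();
            if (msg == null) { Thread.Sleep(1000); continue; }

            Console.WriteLine("Processing: " + msg.AsString);
            queue.DeleteMessage(msg); // the queue held the state, not this process
        }
    }
}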
The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs, as Windows Azure does, you have built-in redundancy and recovery. It's just built into the design.

The Difference Between Infrastructure Designs and Platform Designs

When you simply take a physical server running software and virtualize it, either privately or publicly, you haven't done anything to allow the code to scale or recover. That all has to be handled by adding more code and more Virtual Machines, which have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It's also not as easy to deploy these VMs, and more importantly, you're often charged on a longer basis to remove them. Your agility in IaaS is more limited.

Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of stateless programming. You don't worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that isn't what you want to manage, monitor, maintain or license. You don't want to deploy an operating system - you want to deploy an application. You want your code to run, and you don't care how it does that. Another benefit to PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there's no provisioning, it just happens. And you can stop using them more quickly - and the base code for your application does not have to change to make this happen.

Windows Azure Roles and Their Use

If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn't involve a user interface you can add a Worker Role. They are just containers that act a certain way. I'll provide more detail on those later. Note: that's a general description, so it's not entirely accurate, but it's accurate enough for this discussion.

So now we're back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say, Linux, and deploy it to Windows Azure using this Role. But you can't. That's not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create a Private or Public Infrastructure as a Service.

What the VM Role is actually designed to do is allow you a great deal of control over the system where your code will run. Let's take an example. You've heard about Windows Azure and Platform programming. You're convinced it's the right way to code. But you have a lot of things you've written in another way at your company, and re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can (or already have) coded the software to be "stateless"; you just need more control over the place where the code runs. That's the place where a VM Role makes sense.
Recap

Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft's offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment where that code runs.

    Read the article

  • Computer crashes on resume from standby almost every time

    - by Los Frijoles
    I am running Ubuntu 12.04 on a Core i5 2500K and an ASRock Z68 Pro3-M motherboard (no graphics card; the hard drive is a WD Green 1TB and the CD drive is a cheap Lite-On drive). Since installing 12.04, my computer has been freezing after resume, but not every time. When I start to resume, it begins normally with a blinking cursor on the screen, and sometimes it continues on to the GNOME 3 unlock screen. Most of the time, however, it blinks for a little bit and then the monitor flips modes and shuts off due to no signal. Pressing keys on the keyboard gets no response (the Num Lock light doesn't respond, Ctrl-Alt-F1 fails to drop into a terminal, Ctrl-Alt-Backspace doesn't work), so I assume the computer has crashed. The worst part is, the logs look entirely normal. Here is my system log during one of these crashes and my subsequent hard power-off and restart:

Jun 6 21:54:43 kcuzner-desktop udevd[10448]: inotify_add_watch(6, /dev/dm-2, 10) failed: No such file or directory
Jun 6 21:54:43 kcuzner-desktop udevd[10448]: inotify_add_watch(6, /dev/dm-2, 10) failed: No such file or directory
Jun 6 21:54:43 kcuzner-desktop udevd[10448]: inotify_add_watch(6, /dev/dm-1, 10) failed: No such file or directory
Jun 6 21:54:43 kcuzner-desktop udevd[12419]: inotify_add_watch(6, /dev/dm-0, 10) failed: No such file or directory
Jun 6 21:54:43 kcuzner-desktop udevd[10448]: inotify_add_watch(6, /dev/dm-0, 10) failed: No such file or directory
Jun 6 22:09:01 kcuzner-desktop CRON[9061]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
Jun 6 22:17:01 kcuzner-desktop CRON[22142]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 6 22:39:01 kcuzner-desktop CRON[26909]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
Jun 6 22:54:21 kcuzner-desktop kernel: [57905.560822] show_signal_msg: 36 callbacks suppressed
Jun 6 22:54:21 kcuzner-desktop kernel: [57905.560828] chromium-browse[9139]: segfault at 0 ip 00007f3a78efade0 sp 00007fff7e2d2c18 error 4 in chromium-browser[7f3a76604000+412b000]
Jun 6 23:05:43 kcuzner-desktop kernel: [58586.415158] chromium-browse[21025]: segfault at 0 ip 00007f3a78efade0 sp 00007fff7e2d2c18 error 4 in chromium-browser[7f3a76604000+412b000]
Jun 6 23:09:01 kcuzner-desktop CRON[13542]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
Jun 6 23:12:43 kcuzner-desktop kernel: [59006.317590] usb 2-1.7: USB disconnect, device number 8
Jun 6 23:12:43 kcuzner-desktop kernel: [59006.319672] sd 7:0:0:0: [sdg] Synchronizing SCSI cache
Jun 6 23:12:43 kcuzner-desktop kernel: [59006.319737] sd 7:0:0:0: [sdg] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Jun 6 23:17:01 kcuzner-desktop CRON[26580]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 6 23:19:04 kcuzner-desktop acpid: client connected from 29925[0:0]
Jun 6 23:19:04 kcuzner-desktop acpid: 1 client rule loaded
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Successfully made thread 30131 of process 30131 (n/a) owned by '104' high priority at nice level -11.
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Supervising 1 threads of 1 processes of 1 users.
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Successfully made thread 30162 of process 30131 (n/a) owned by '104' RT at priority 5.
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Supervising 2 threads of 1 processes of 1 users.
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Successfully made thread 30163 of process 30131 (n/a) owned by '104' RT at priority 5.
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Supervising 3 threads of 1 processes of 1 users.
Jun 6 23:19:07 kcuzner-desktop bluetoothd[1140]: Endpoint registered: sender=:1.239 path=/MediaEndpoint/HFPAG
Jun 6 23:19:07 kcuzner-desktop bluetoothd[1140]: Endpoint registered: sender=:1.239 path=/MediaEndpoint/A2DPSource
Jun 6 23:19:07 kcuzner-desktop bluetoothd[1140]: Endpoint registered: sender=:1.239 path=/MediaEndpoint/A2DPSink
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Successfully made thread 30166 of process 30166 (n/a) owned by '104' high priority at nice level -11.
Jun 6 23:19:07 kcuzner-desktop rtkit-daemon[1835]: Supervising 4 threads of 2 processes of 1 users.
Jun 6 23:19:07 kcuzner-desktop pulseaudio[30166]: [pulseaudio] pid.c: Daemon already running.
Jun 6 23:19:10 kcuzner-desktop acpid: client 2942[0:0] has disconnected
Jun 6 23:19:10 kcuzner-desktop acpid: client 29925[0:0] has disconnected
Jun 6 23:19:10 kcuzner-desktop acpid: client connected from 1286[0:0]
Jun 6 23:19:10 kcuzner-desktop acpid: 1 client rule loaded
Jun 6 23:19:31 kcuzner-desktop bluetoothd[1140]: Endpoint unregistered: sender=:1.239 path=/MediaEndpoint/HFPAG
Jun 6 23:19:31 kcuzner-desktop bluetoothd[1140]: Endpoint unregistered: sender=:1.239 path=/MediaEndpoint/A2DPSource
Jun 6 23:19:31 kcuzner-desktop bluetoothd[1140]: Endpoint unregistered: sender=:1.239 path=/MediaEndpoint/A2DPSink
Jun 6 23:28:12 kcuzner-desktop kernel: imklog 5.8.6, log source = /proc/kmsg started.
Jun 6 23:28:12 kcuzner-desktop rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="1053" x-info="http://www.rsyslog.com"] start
Jun 6 23:28:12 kcuzner-desktop rsyslogd: rsyslogd's groupid changed to 103
Jun 6 23:28:12 kcuzner-desktop rsyslogd: rsyslogd's userid changed to 101
Jun 6 23:28:12 kcuzner-desktop rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]
Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Ericsson MBM
Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Sierra
Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Generic
Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Huawei
Jun 6 23:28:12 kcuzner-desktop modem-manager[1070]: <info> Loaded plugin Linktop
Jun 6 23:28:12 kcuzner-desktop bluetoothd[1072]: Failed to init gatt_example plugin
Jun 6 23:28:12 kcuzner-desktop bluetoothd[1072]: Listening for HCI events on hci0
Jun 6 23:28:12 kcuzner-desktop NetworkManager[1080]: <info> NetworkManager (version 0.9.4.0) is starting...
Jun 6 23:28:12 kcuzner-desktop NetworkManager[1080]: <info> Read config file /etc/NetworkManager/NetworkManager.conf
Jun 6 23:28:12 kcuzner-desktop NetworkManager[1080]: <info> VPN: loaded org.freedesktop.NetworkManager.pptp
Jun 6 23:28:12 kcuzner-desktop NetworkManager[1080]: <info> DNS: loaded plugin dnsmasq
Jun 6 23:28:12 kcuzner-desktop kernel: [ 0.000000] Initializing cgroup subsys cpuset
Jun 6 23:28:12 kcuzner-desktop kernel: [ 0.000000] Initializing cgroup subsys cpu

Sorry it's so huge; the restart happens at 23:28:12, I believe, and all I see is that Chromium segfaulted a few times. I wouldn't think a segfault in an individual program would crash the whole computer, but could that be the issue?
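A few diagnostics worth running after the next failed resume, as a hedged suggestion only; the paths assume a stock 12.04 install with pm-utils:

# Hook output from the last suspend/resume cycle
tail -n 50 /var/log/pm-suspend.log

# Kernel messages around resume, preserved on disk across the hard reset
grep -i -A5 resume /var/log/kern.log

# Devices currently allowed to wake the machine
cat /proc/acpi/wakeup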

    Read the article

< Previous Page | 306 307 308 309 310 311 312 313 314 315 316 317  | Next Page >