Search Results

Search found 1210 results on 49 pages for 'positive'.


  • Unicast traffic between hosts on a switch leaving the switch by its uplink. Why?

    - by Rich Lafferty
    I have a weird thing happening on our network at my office which I can't quite get my head around. In particular I can't tell if it's a problem with a switch, or a problem with configuration. We have a Cisco SG300-52 switch (sw01) in the top of a rack in our server room, connected to another SG300-28 that acts as our core switch (core01). Both run layer 2 only; our firewalls do routing between VLANs. They have a dozen or so VLANs between them. Gi1 on sw01 is a trunk port connected to gi1 on core01. (Disclosure: There are other switches in our environment but I'm pretty sure I've isolated the problem down to these two. Happy to provide more info if necessary.)

    The behaviour I'm seeing is limited to one VLAN, vlan 12 -- or, at least, it's not happening on the other ones I checked (it's hard to guarantee the absence of packets) -- and it is: sw01 is forwarding, to core01, traffic which is between two hosts which are both plugged into sw01. (I noticed this because the IDS in our firewall gave a false positive on traffic which should not reach the firewall.) We noticed this mostly between our two dhcp/dns servers, net01 (10.12.0.10) and net02 (10.12.0.11). net01 is physical hardware and net02 is on a VMware ESX server. net01 is connected to gi44 on sw01 and net02's ESX server to gi11.

        [net01]----gi44-[sw01]-gi1----gi1-[core01]
        [net02]----gi11/

    Let's see some interfaces! Remember, vlan 12 is the problem vlan. Of the others I explicitly verified that vlan 27 was not affected. Here are the two hosts' ports (esx01 contains net02):

        sw01#sh run int gi11
        interface gigabitethernet11
         description esx01
         lldp med disable
         switchport trunk allowed vlan add 5-7,11-13,100
         switchport trunk native vlan 27
        !
        sw01#sh run int gi44
        interface gigabitethernet44
         description net01-1
         lldp med disable
         switchport mode access
         switchport access vlan 12
        !

    Here's the trunk on sw01:

        sw01#sh run int gi1
        interface gigabitethernet1
         description "trunk to core01"
         lldp med disable
         switchport trunk allowed vlan add 4-7,11-13,27,100
        !

    And the other end of the trunk on core01:

        interface gigabitethernet1
         description sw01
         macro description switch
         switchport trunk allowed vlan add 2-7,11-16,27,100
        !

    I have a monitor port on core01, thus:

        core01#sh run int gi12
        interface gigabitethernet12
         description "monitor port"
         port monitor GigabitEthernet 1
        !

    And the monitor port on core01 sees unicast traffic going between net01 and net02, both of which are on sw01! I've verified this with a monitor port on sw01 that sees the net01-net02 unicast traffic leaving via gi1 too. sw01 knows that both of those hosts are on ports that are not its trunk port:

        :) ratchet$ arp -a | grep net
        net02.2ndsiteinc.com (10.12.0.11) at 00:0C:29:1A:66:15 [ether] on eth0
        net01.2ndsiteinc.com (10.12.0.10) at 00:11:43:D8:9F:94 [ether] on eth0

        sw01#sh mac addr addr 00:0C:29:1A:66:15
        Aging time is 300 sec
        Vlan     Mac Address           Port       Type
        -------- --------------------- ---------- ----------
        12       00:0c:29:1a:66:15     gi11       dynamic

        sw01#sh mac addr addr 00:11:43:D8:9F:94
        Aging time is 300 sec
        Vlan     Mac Address           Port       Type
        -------- --------------------- ---------- ----------
        12       00:11:43:d8:9f:94     gi44       dynamic

    I also brought up an unused port on sw01 on vlan 12, but the unicast traffic was (as best as I could tell) not coming out that port. So it doesn't look like sw01 is pushing it out all its ports, just the right ports and also gi1! I've verified that sw01 is not filling up its address table:

        sw01#sh mac addr count
        This may take some time.
        Capacity : 8192
        Free : 7983
        Used : 208

    The full configs for both core01 and sw01 are available: core01, sw01. Finally, versions:

        sw01#sh ver
        SW version    1.1.2.0 ( date 12-Nov-2011 time 23:34:26 )
        Boot version  1.0.0.4 ( date 08-Apr-2010 time 16:37:57 )
        HW version    V01

        core01#sh ver
        SW version    1.1.2.0 ( date 12-Nov-2011 time 23:34:26 )
        Boot version  1.1.0.6 ( date 11-May-2011 time 18:31:00 )
        HW version    V01

    So my understanding is this: sw01 should take unicast traffic for net01 and send it only out net02's port, and vice versa; none of it should go out sw01's uplink. But core01, receiving traffic on gi1 for a host it knows is on gi1, is right in sending it out all of its ports. (That is: sw01 is misbehaving, but core01 is doing what it should given the circumstances.) My question is: Why is sw01 sending that unicast traffic out its uplink, gi1? (And pre-emptively: yes, I know SG300s leave much to be desired, and yes, we should have spanning-tree enabled, but that's where I'm at right now.)
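
    For anyone reproducing this, an ordinary packet capture on the machine attached to core01's monitor port is a quick way to confirm what that port is seeing. This is a sketch only: the interface name eth1 is an assumption, not a detail from the question.

        # Capture on the NIC plugged into core01's monitor port (gi12) and show
        # only unicast traffic between the two hosts that both live on sw01.
        # -e prints the MAC headers, -n disables name resolution.
        # If frames between net01 and net02 show up here, they crossed the trunk.
        tcpdump -i eth1 -e -n host 10.12.0.10 and host 10.12.0.11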


  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to be git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following:

        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished
        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
            from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
            from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished
        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished
        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)
        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
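
    One hedged way to narrow down an ECONNREFUSED from the gitlab-shell self-check is to test each layer of the web stack from the instance itself. This is a sketch under the assumption of the stock GitLab 6.x layout from the install guide (nginx in front of a Unicorn app server); the 8080 port comes from GitLab's default unicorn.rb and is an assumption, not something stated above:

        # Is anything listening on the ports nginx and Unicorn should own?
        sudo netstat -tlnp | grep -E ':(80|8080)'

        # Hit nginx, then the app server directly, from the box itself
        curl -v http://git.mydomain.com/
        curl -v http://127.0.0.1:8080/

        # Confirm both services are actually running
        sudo service nginx status
        sudo service gitlab status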


  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me.

    At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.

    My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.
    The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.

    DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

    I won't provide the step-by-step instructions in this post as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.

    Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. For the same reason, the instructions in Part 2 won't work exactly as written. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:

    You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client (see the sketch below).
    You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install, and provide a command-line instruction to execute which enables the required Windows features.

    I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error:

        Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.)
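
    As an aside on the config.xml edit mentioned in the first difference above, here is roughly what it involves. This is a hedged sketch based on the approach in the MSDN article referenced above, not text from this post; the surrounding elements are abbreviated:

        <!-- Hypothetical excerpt of the Setup config.xml on the copied media;
             only the added Setting element matters for a client-OS install. -->
        <Configuration>
          <Setting Id="AllowWindowsClientInstall" Value="True"/>
        </Configuration>
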
    You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs automatically as part of the prerequisites. When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:

    On the Choose the installation you want page of the installation wizard, I chose Server Farm.
    On the Server Type page, I chose Complete.
    At the end of the installation, I did not run the configuration wizard.

    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have installed it. Continue reading to see how I accomplished my objective.

    I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again and approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

    Install SQL Server 2008 R2 to get a database engine instance installed.
    Run the SharePoint configuration wizard to set up the SharePoint databases.
    In Central Administration, create a Web application using classic mode authentication as per a TechNet article on PowerPivot Authentication and Authorization.
    Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

    I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon.

    Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server.
    However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community that will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.


  • FluentNHibernate Unit Of Work / Repository Design Pattern Questions

    - by Echiban
    Hi all, I think I am at an impasse here. I have an application I built from scratch using FluentNHibernate (ORM) / SQLite (file db). I have decided to implement the Unit of Work and Repository design patterns. I am at a point where I need to think about the end game, which will start as a WPF Windows app (using MVVM) and eventually implement web services / ASP.NET as UI. Now I have already created domain objects (entities) for the ORM, and I don't know how I should use them outside of the ORM. Questions about it include:

    Should I use ORM entity objects directly as models in MVVM? If yes, do I put business logic (such as certain values must be positive and be greater than another property) in those entity objects? It is certainly the simpler approach, and the one I am leaning toward right now. However, will there be gotchas that would trash this plan?
    If the answer above is no, do I then create a new set of classes to implement business logic and use those as models in MVVM? How would I deal with the transition between model objects and entity objects? I guess a type converter implementation would work well here.

    Now I followed this well written article to implement the Unit of Work pattern. However, due to the fact that I am using FluentNHibernate instead of NHibernate, I had to bastardize the implementation of UnitOfWorkFactory. Here's my implementation:

        using System;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        namespace ELau.BlindsManagement.Business
        {
            public class UnitOfWorkFactory : IUnitOfWorkFactory
            {
                private static readonly string DbFilename;
                private static Configuration _configuration;
                private static ISession _currentSession;
                private ISessionFactory _sessionFactory;

                static UnitOfWorkFactory()
                {
                    // arbitrary default filename
                    DbFilename = "defaultBlindsDb.db3";
                }

                internal UnitOfWorkFactory()
                {
                }

                #region IUnitOfWorkFactory Members

                public ISession CurrentSession
                {
                    get
                    {
                        if (_currentSession == null)
                        {
                            throw new InvalidOperationException(ExceptionStringTable.Generic_NotInUnitOfWork);
                        }
                        return _currentSession;
                    }
                    set { _currentSession = value; }
                }

                public ISessionFactory SessionFactory
                {
                    get
                    {
                        if (_sessionFactory == null)
                        {
                            _sessionFactory = BuildSessionFactory();
                        }
                        return _sessionFactory;
                    }
                }

                public Configuration Configuration
                {
                    get
                    {
                        if (_configuration == null)
                        {
                            Fluently.Configure().ExposeConfiguration(c => _configuration = c);
                        }
                        return _configuration;
                    }
                }

                public IUnitOfWork Create()
                {
                    ISession session = CreateSession();
                    session.FlushMode = FlushMode.Commit;
                    _currentSession = session;
                    return new UnitOfWorkImplementor(this, session);
                }

                public void DisposeUnitOfWork(UnitOfWorkImplementor adapter)
                {
                    CurrentSession = null;
                    UnitOfWork.DisposeUnitOfWork(adapter);
                }

                #endregion

                public ISession CreateSession()
                {
                    return SessionFactory.OpenSession();
                }

                public IStatelessSession CreateStatelessSession()
                {
                    return SessionFactory.OpenStatelessSession();
                }

                private static ISessionFactory BuildSessionFactory()
                {
                    ISessionFactory result = Fluently.Configure()
                        .Database(SQLiteConfiguration.Standard.UsingFile(DbFilename))
                        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UnitOfWorkFactory>())
                        .ExposeConfiguration(BuildSchema)
                        .BuildSessionFactory();
                    return result;
                }

                private static void BuildSchema(Configuration config)
                {
                    // this NHibernate tool takes a configuration (with mapping info in)
                    // and exports a database schema from it
                    _configuration = config;
                    new SchemaExport(_configuration).Create(false, true);
                }
            }
        }

    I know that this implementation is flawed because a few tests pass when run individually, but when all tests are run, they fail for some unknown reason. Whoever wants to help me out with this one, given its complexity, please contact me by private message. I am willing to send some $$$ by PayPal to someone who can address the issue and provide a solid explanation. I am new to ORM, so any assistance is appreciated.
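
    One hedged observation on the "tests pass individually but fail when run together" symptom: _configuration and _currentSession in the code above are static, so they leak between tests that run in the same process. A minimal sketch of how to surface that follows; the ResetForTesting hook and the NUnit wiring are my assumptions, not part of the original code:

        // Hypothetical reset hook added inside UnitOfWorkFactory, for tests only:
        // it clears the static fields so each test starts from a clean slate.
        public static void ResetForTesting()
        {
            _currentSession = null;
            _configuration = null;
        }

        // Hypothetical NUnit fixture using it; if the full suite passes with this
        // in place but fails without it, shared static state is the likely culprit.
        [TestFixture]
        public class UnitOfWorkTests
        {
            [SetUp]
            public void CleanSlate()
            {
                UnitOfWorkFactory.ResetForTesting();
            }
        }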


  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has not lost his focus on helping the larger community. I am honored that he has accepted to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So and so application is slow, what's wrong with the SQL Server?" One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn't related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further.

    The SQL Server is slow; we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_os_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters: Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs and it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
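
    As a concrete illustration of the quick checks described above, here is a hedged sketch of the kind of queries involved; the DMV names come from the narrative, but the exact columns and ordering are my choices, not Jonathan's:

        -- Top waits on the instance since the stats were last cleared
        SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

        -- Average read/write stall (ms) per database file
        SELECT DB_NAME(vfs.database_id) AS database_name,
               vfs.file_id,
               vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
               vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        ORDER BY avg_read_stall_ms DESC;
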
    If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly: it says 75% of the memory under 8GB, but you don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free List Pages, along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem; it is instead an innocent bystander and victim.

    Side note: the Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers.
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is excessively consuming memory and bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts = 1). Single-use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that are not likely to reoccur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn't write the app, and you can't change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen's blog post referenced above; not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn't believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen.
    As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture and how all the data you can gather from SQL about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology


  • Eclipse buildtime and runtime classpaths for easy execution of junit test cases

    - by emeraldjava
    Hey, I've a large Eclipse project with multiple JUnit classes. I'm trying to strike a balance between adding runtime resources to the Eclipse project classpath, and the need to configure multiple JUnit launch configurations. I realise the default Eclipse build classpath is inherited by all unit test configurations, but some of my tests require extra runtime resources. I could add these resources to the build classpath, but this does slow my overall project build time (since it has to keep more files in synch). I don't like the idea of including * resources and jars on the runtime classpath. The two options that I have are these; the positive and negative cases as I see them are listed:

    1: Add all runtime resources to the Eclipse classpath.
    POS: I can select a unit test and run it without having to configure the test classpath.
    NEG: Extra resources on the build classpath mean Eclipse slows down.
    NEG: More difficult to ensure each test uses the correct resources.

    2: Configure the classpath of each unit test.
    POS: I know exactly what resources are being used by a test.
    POS: Smaller build classpath means quicker build and execution by Eclipse.
    NEG: It's a pain having to set up multiple separate JUnit runtime classpaths.

    Ideally I'd like to configure one base JUnit runtime configuration, which takes the default Eclipse build classpath and adds extra runtime jars and resources. This configuration could then be reused by the specific JUnit test cases. Is anything like this possible? Looking at a specific JUnit launch configuration which can be exported to a shared project file:

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <launchConfiguration type="org.eclipse.jdt.junit.launchconfig">
            <stringAttribute key="bad_container_name" value="/CR-3089_5_1_branch."/>
            <listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_PATHS">
                <listEntry value="/CR-3089_5_1_branch/src/com/x/y/z/ParserJUnitTest.java"/>
            </listAttribute>
            <listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_TYPES">
                <listEntry value="1"/>
            </listAttribute>
            <stringAttribute key="org.eclipse.jdt.junit.CONTAINER" value=""/>
            <booleanAttribute key="org.eclipse.jdt.junit.KEEPRUNNING_ATTR" value="false"/>
            <stringAttribute key="org.eclipse.jdt.junit.TESTNAME" value=""/>
            <stringAttribute key="org.eclipse.jdt.junit.TEST_KIND" value="org.eclipse.jdt.junit.loader.junit4"/>
            <stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value="com.x.y.z.ParserJUnitTest"/>
            <stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="CR-3089_5_1_branch"/>
        </launchConfiguration>

    Is it possible to extend/reuse this configuration, and parameterise the 'org.eclipse.jdt.launching.MAIN_TYPE' value? I'm aware of the commons launch and jar manifest file solutions to configuring the classpath, but they both seem to assume that an Ant build is run before the test can execute. I want to avoid any Eclipse dependency on calling an Ant target when refactoring code and executing tests. Basically - what is the easiest way to separate and maintain the Eclipse buildtime classpath and JUnit runtime classpaths?


  • April 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I'm excited to announce the April 2013 release of the Ajax Control Toolkit. For this release, we focused on improving two controls: the AjaxFileUpload and the MaskedEdit controls. You can download the latest release from CodePlex at http://AjaxControlToolkit.CodePlex.com or, better yet, you can execute the following NuGet command within Visual Studio 2010/2012:

        Install-Package AjaxControlToolkit

    There are three builds of the Ajax Control Toolkit: .NET 3.5, .NET 4.0, and .NET 4.5.

    A Better AjaxFileUpload Control

    We completely rewrote the AjaxFileUpload control for this release. We had two primary goals. First, we wanted to support uploading really large files. In particular, we wanted to support uploading multi-gigabyte files such as video files or application files. Second, we wanted to support showing upload progress on as many browsers as possible. The previous version of the AjaxFileUpload could show upload progress when used with Google Chrome or Mozilla Firefox, but not when used with Apple Safari or Microsoft Internet Explorer. The new version of the AjaxFileUpload control shows upload progress when used with any browser.

    Using the AjaxFileUpload Control

    Let me walk through using the AjaxFileUpload in the most basic scenario. And then, in following sections, I can explain some of its more advanced features. Here's how you can declare the AjaxFileUpload control in a page:

        <ajaxToolkit:ToolkitScriptManager runat="server" />
        <ajaxToolkit:AjaxFileUpload
            ID="AjaxFileUpload1"
            AllowedFileTypes="mp4"
            OnUploadComplete="AjaxFileUpload1_UploadComplete"
            runat="server" />

    The exact appearance of the AjaxFileUpload control depends on the features that a browser supports. In the case of Google Chrome, which supports drag-and-drop upload, here's what the AjaxFileUpload looks like:

    Notice that the page above includes two Ajax Control Toolkit controls: the AjaxFileUpload and the ToolkitScriptManager control. You always need to include the ToolkitScriptManager with any page which uses Ajax Control Toolkit controls. The AjaxFileUpload control declared in the page above includes an event handler for its UploadComplete event. This event handler is declared in the code-behind page like this:

        protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
        {
            // Save uploaded file to App_Data folder
            AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + e.FileName));
        }

    This method saves the uploaded file to your website's App_Data folder. I'm assuming that you have an App_Data folder in your project – if you don't have one then you need to create one or you will get an error. There is one more thing that you must do in order to get the AjaxFileUpload control to work: the AjaxFileUpload control relies on an HTTP Handler named AjaxFileUploadHandler.axd.
    You need to declare this handler in your application's root web.config file like this:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.5" />
            <httpRuntime targetFramework="4.5" maxRequestLength="42949672" />
            <httpHandlers>
              <add verb="*" path="AjaxFileUploadHandler.axd"
                   type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
            </httpHandlers>
          </system.web>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false"/>
            <handlers>
              <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd"
                   type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="4294967295"/>
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>

    Notice that the web.config file above also contains configuration settings for the maxRequestLength and maxAllowedContentLength. You need to assign large values to these configuration settings, as I did in the web.config file above, in order to accept large file uploads.

    Supporting Chunked File Uploads

    Because one of our primary goals with this release was support for large file uploads, we added support for client-side chunking. When you upload a file using a browser which fully supports the HTML5 File API, such as Google Chrome or Mozilla Firefox, then the file is uploaded in multiple chunks. You can see chunking in action by opening F12 Developer Tools in your browser and observing the Network tab:

    Notice that there is a crazy number of distinct post requests made (about 360 distinct requests for a 1 gigabyte file). Each post request looks like this:

        http://localhost:24338/AjaxFileUploadHandler.axd?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&fileId=B7CCE31C-6AB1-BB28-2940-49E0C9B81C64&fileName=Sita_Sings_the_Blues_480p_2150kbps.mp4&chunked=true&firstChunk=false

    Each request posts another chunk of the file being uploaded. Notice that the request URL includes a chunked=true parameter which indicates that the browser is breaking the file being uploaded into multiple chunks.

    Showing Upload Progress on All Browsers

    The previous version of the AjaxFileUpload control could display upload progress only in the case of browsers which fully support the HTML5 File API. The new version of the AjaxFileUpload control can display upload progress in the case of all browsers. If a browser does not fully support the HTML5 File API, then the browser polls the server every few seconds with an Ajax request to determine the percentage of the file that has been uploaded. This technique of displaying progress works with any browser which supports making Ajax requests. There is one catch. Be warned that this new feature only works with the .NET 4.0 and .NET 4.5 versions of the AjaxControlToolkit. To show upload progress, we are taking advantage of the new ASP.NET HttpRequest.GetBufferedInputStream() and HttpRequest.GetBufferlessInputStream() methods, which are not supported by .NET 3.5. For example, here is what the Network tab looks like when you use the AjaxFileUpload with Microsoft Internet Explorer. Here's what the requests in the Network tab look like:

        GET /WebForm1.aspx?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&poll=1&guid=9206FF94-76F9-B197-D1BC-EA9AD282806B HTTP/1.1

    Notice that each request includes a poll=1 parameter. This parameter indicates that this is a polling request to get the size of the file buffered on the server.
    Here's what the response body of a request looks like when about 20% of a file has been uploaded:

    Buffering to a Temporary File

    When you upload a file using the AjaxFileUpload control, the file upload is buffered to a temporary file located at Path.GetTempPath(). When you call the SaveAs() method, as we did in the sample page above, the temporary file is copied to a new file and then the temporary file is deleted. If you don't call the SaveAs() method, then you must ensure that the temporary file gets deleted yourself. For example, if you want to save the file to a database, then you will never call the SaveAs() method and you are responsible for deleting the file. The easiest way to delete the temporary file is to call the AjaxFileUploadEventArgs.DeleteTemporaryData() method in the UploadComplete handler:

        protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
        {
            // Save uploaded file to a database table
            e.DeleteTemporaryData();
        }

    You also can call the static AjaxFileUpload.CleanAllTemporaryData() method to delete all temporary data, and not only the temporary data related to the current file upload. For example, you might want to call this method on application start to ensure that all temporary data is removed whenever your application restarts.

    A Better MaskedEdit Extender

    This release of the Ajax Control Toolkit contains bug fixes for the top-voted issues related to the MaskedEdit control. We closed over 25 MaskedEdit issues. Here is a complete list of the issues addressed with this release:

    · 17302 MaskedEditExtender MaskType=Date, Mask=99/99/99 Undefined JS Error
    · 11758 MaskedEdit causes error in JScript when working with 2-digits year
    · 18810 Maskededitextender/validator Date validation issue
    · 23236 MaskEditValidator does not work with date input using format dd/mm/yyyy
    · 23042 Webkit based browsers (Safari, Chrome) and MaskedEditExtender
    · 26685 MaskedEditExtender@(ClearMaskOnLostFocus=false) adds a zero character when you each focused to target textbox
    · 16109 MaskedEditExtender: Negative amount, followed by decimal, sets value to positive
    · 11522 MaskEditExtender of AjaxtoolKit-1.0.10618.0 does not work properly for Hungarian Culture
    · 25988 MaskedEditExtender – CultureName (HU-hu) > DateSeparator
    · 23221 MaskedEditExtender date separator problem
    · 15233 Day and month swap in Dynamic user control
    · 15492 MaskedEditExtender with ClearMaskOnLostFocus and with MaskedEditValidator with ClientValidationFunction
    · 9389 MaskedEditValidator – when on no entry
    · 11392 MaskedEdit Number format messed up
    · 11819 MaskedEditExtender erases all values beyond first comma separtor
    · 13423 MaskedEdit(Extender/Validator) combo problem
    · 16111 MaskedEditValidator cannot validate date with DayMonthYear in UserDateFormat of MaskedEditExtender
    · 10901 MaskedEdit: The months and date fields swap values when you hit submit if UserDateFormat is set.
    · 15190 MaskedEditValidator can't make use of MaskedEditExtender's UserDateFormat property
    · 13898 MaskedEdit Extender with custom date type mask gives javascript error
    · 14692 MaskedEdit error in "yy/MM/dd" format.
    · 16186 MaskedEditExtender does not handle century properly in a date mask
    · 26456 MaskedEditBehavior.ConvFmtTime : function(input,loadFirst) fails if this._CultureAMPMPlaceholder == ""
    · 21474 Error on MaskedEditExtender working with number format
    · 23023 MaskedEditExtender's ClearMaskOnLostFocus property causes problems for MaskedEditValidator when set to false
    · 13656 MaskedEditValidator Min/Max Date value issue

    Conclusion

    This latest release of the Ajax Control Toolkit required many hours of work by a team of talented developers. I want to thank the members of the Superexpert team for the long hours which they put into this release.


  • UIScrollView zoomToRect not zooming to given rect (created from UITouch CGPoint)

    - by pmhart
    My application has a UIScrollView with one subview. The subview is an extended UIView which prints a PDF page to itself using layers in the drawLayer event. Zooming using the built-in pinching works great. setZoomScale also works as expected. I have been struggling with the zoomToRect function. I found an example online which makes a CGRect zoomRect variable from a given CGPoint. In the touchesEnded function, if there was a double tap and they are all the way zoomed out, I want to zoom in to that PDFUIView I created as though they were pinching out with the center of the pinch where they double tapped.

    So assume that I pass the UITouch variable to my function, which utilizes zoomToRect if they double tap. I started with the following function I found on Apple's site: http://developer.apple.com/iphone/library/documentation/WindowsViews/Conceptual/UIScrollView_pg/ZoomZoom/ZoomZoom.html

    The following is a modified version for my UIScrollView extended class:

        - (void)zoomToCenter:(float)scale withCenter:(CGPoint)center {
            CGRect zoomRect;
            zoomRect.size.height = self.frame.size.height / scale;
            zoomRect.size.width = self.frame.size.width / scale;
            zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0);
            zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0);
            //return zoomRect;
            [self zoomToRect:zoomRect animated:YES];
        }

    When I do this, the UIScrollView seems to zoom using the bottom right edge of the zoomRect above and not the center. If I make a UIView like this:

        UIView *v = [[UIView alloc] initWithFrame:zoomRect];
        [v setBackgroundColor:[UIColor redColor]];
        [self addSubview:v];

    the red box shows up with the touch point dead in the center. Please note: I am writing this from my PC; I recall messing around with the divided-by-two part on my Mac, so just assume that this draws a rect with the touch point in the center. If the UIView drew off center but zoomed to the right spot it would be all good. However, what happens is when it performs the zoomToRect it seems to use the bottom right of the zoomRect at the top left of the zoomed-in result.

    Also, I noticed that depending on where I click on the UIScrollView, it anchors to different spots. It almost seems like there is a cross down the middle and it's reflecting the points somehow, as though anywhere left of the middle is a negative reflection and anywhere right of the middle is a positive reflection. This seems too complicated; shouldn't it just zoom to the rect that was drawn as the UIView was able to draw? I used a lot of research to figure out how to create a PDF that scales in high quality, so I am assuming that using the CALayer may be throwing off the coordinate system? But to the UIScrollView it should just treat it as a view with 768x985 dimensions. This is sort of advanced; please assume the code for creating the zoomRect is all good. There is something deeper with the CALayer in the UIView which is in the UIScrollView....
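
    Since the question mentions Apple's example, here is a hedged sketch of the double-tap handler that highlights the one detail that commonly causes the off-center behaviour described above: zoomToRect: interprets its rect in the coordinate space of the view returned by viewForZoomingInScrollView:, not the scroll view itself. pdfView (the zoomed subview) and the target scale are assumptions for illustration, not code from the question:

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [touches anyObject];
            // Double tap while fully zoomed out: zoom in around the tap point.
            if (touch.tapCount == 2 && self.zoomScale == self.minimumZoomScale) {
                // Convert the tap into the zoomed subview's coordinate space,
                // because zoomToRect: interprets the rect in that space.
                CGPoint center = [touch locationInView:self.pdfView];
                CGFloat scale = 2.0f; // arbitrary target scale for this sketch
                CGRect zoomRect;
                zoomRect.size.width  = self.bounds.size.width  / scale;
                zoomRect.size.height = self.bounds.size.height / scale;
                zoomRect.origin.x = center.x - (zoomRect.size.width  / 2.0f);
                zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0f);
                [self zoomToRect:zoomRect animated:YES];
            }
            [super touchesEnded:touches withEvent:event];
        }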


  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions:

    In case you can't watch the video, I've transcribed the talk below, although I'd recommend watching the video if you can — I didn't have much time to do the transcribing!

    So, what is a coderetreat? So it's not just something in Red Gate, there's a website and everything, although it's not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you've got one day, and you split it into one-hour sections. You spend three quarters of that coding, and do a little retrospective at the end. You're supposed to start fresh each time; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway's Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don't know what we'll do next time, or if we're meant to do the same thing again. There are some guiding principles which felt to us like restrictions, where you have to code in crazy ways to encourage better code. The final thing is that it's supposed to be free for outsiders to join. It's meant to be a kind of networking thing, where you link up with people from other companies.

    We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn't chat to anybody else for the initial one. The task was Conway's Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won't go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn't mean it's necessarily a day's work to produce that, because we were starting again every hour.

    The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it's really good to stop and try again, because there are so many what-ifs when you've finished writing something: "what if I'd done it this way?". You can answer all those questions at a coderetreat, because it's not about getting a product out the door, it's about learning and playing with ideas.

    So we had all these different practices we were trying. I'll try and go through most of these. Single responsibility is this idea that everything should do just one thing. It was the very first session, and we were still trying to figure out: how do you go about the Game of Life? So by the end of forty-five minutes we hadn't produced very much for that first session. We were still thinking, "Do we start with a board? How do we represent all these squares? It can be infinitely big, help, this is getting really difficult!". So, most of us didn't really get anywhere on the first one.
    Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that.

    Another thing we tried was TDD (test-driven development). I'm sure most of you know what TDD is:

    Write a test, watch it fail for the right reason
    Write code to pass the test, watch it pass
    Refactor, check the test still passes
    Repeat!

    It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you've even written your first assertion, which is supposed to be the very first thing you write (because you write your tests backwards from the assertions back to the initial conditions), you've already constrained the logic of the code in some way. You then get to this situation of, "Well, we actually want to go in a slightly different direction. Can we do this?". Can we write tests that don't constrain the architecture?

    Wrapping up all primitives: it's kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You've got pages of code before you've even done anything.

    No getters and setters (use tell don't ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can't just check the internal state of the code. And people found that really challenging, and it made them think in a different way, which I think is really good.

    Not having mutable state: that was kind of confusing, because we weren't quite sure what fitted within that rule and what didn't, and I think we were trying too hard to follow the rule rather than the guideline.

    No if-statements: you're supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this:

        public T If(bool condition, Func<T> left, Func<T> right)
        {
            var dict = new Dictionary<bool, Func<T>> {{true, left}, {false, right}};
            return dict[condition].Invoke();
        }

    That is not really polymorphism, is it?

    For-loops: you can always replace a for-loop with recursion, but it doesn't tend to make it any more readable unless it's the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn't make things easier unless it's the kind of tree-structure algorithm where that would help.

    Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn't actually a challenge, because you just extract methods. That's quite a useful thing, because you can apply that to real code and say, "Okay, should this method really be going crazy like this?"

    No talking: we hated that. It's like there's two of you at a computer, and one of you is doing the typing; what does the other guy do if they're not allowed to talk? The answer is TDD ping-pong – one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have discussion about things, which is kind of cool.

    No code comments: just makes no difference to anything. It's a forty-five minute exercise, so what are you going to put comments in code for?

    Finally, this is my fault.
I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so we didn’t even manage to work out how to do that convolution in C#. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work. That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?”, “What surprised you?” and “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end-product isn’t important: that’s something people really enjoyed. Pair-programming: people also said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good, as they promote functional programming. And TDD, people found really hard. “Tell, don’t ask”, people found really, really hard, and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.
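As a footnote to the convolution idea above, here is a rough C# sketch of my own (not code from the day, and deliberately the naive form): counting each cell's neighbours is effectively summing a 3x3 kernel over the board, after which the life and death rules are applied to the counts.

// a minimal Game of Life step using an explicit 3x3 neighbourhood sum
static bool[,] NextGeneration(bool[,] board)
{
    int height = board.GetLength(0), width = board.GetLength(1);
    var next = new bool[height, width];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            // sum the 3x3 kernel centred on (y, x), excluding the cell itself
            int neighbours = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    if ((dy != 0 || dx != 0)
                        && y + dy >= 0 && y + dy < height
                        && x + dx >= 0 && x + dx < width
                        && board[y + dy, x + dx])
                        neighbours++;
            // exactly three neighbours births a cell; two keeps a live cell alive
            next[y, x] = neighbours == 3 || (board[y, x] && neighbours == 2);
        }
    }
    return next;
}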

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (the full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Results: Delphi users who wish to use HID USB in Windows

    - by Lex Dean
    Results: Delphi users who wish to use HID USB in Windows. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USB contains a list of keys holding vendor ID and product ID numbers that coincide with the USB web site’s database. These numbers, and the GUID held within each key, give important information needed to drive HID.dll that is otherwise impossible to obtain. The Control Panel/System/Hardware/Device Manager/USB Serial Bus Controllers/Mass Storage Devices/details page simply lists this registry data. Access for the programmer has been documented through the API32.dll with a number of procedures that read the registry. But that is not the problem, yet it looks like the problem!!!!!!!!! The key issue is information about the registry and how to use it. These keys can be viewed in RegEdit.exe itself. Some parts of the registry, like the USB branch, have been given a Windows-security-style protection through Authz.dll, giving the USB keys read and write protection, even via api32.dll. Now only Microsoft gives out these details, and we all know Microsoft hates Delphi. C users, meanwhile, have enjoyed this access for over 10 years now. Some will make out that you should never give out such information because some idiot may write a stupid virus (true), but the counter-argument is: do Delphi users need to be denied USB access for another ten years!!!!!!!!!!!! What I do not have is skill in assembly code. I’m seeking someone who can trace how RegEdit.exe gets its access through Authz.dll to reach the USB data. So I’m asking all who read this: petition any friend you have with this skill to get the Authz.dll information needed. I find that USB.org reply when they can give a positive answer, but do not bother should the reply be even slightly negative. For all simple reasoning, all USB.org had to do was keep the secure key as they have done, and copy the same data into an unsecured key every time the data changes, for USB developers to access, instead of barring developers behind Authz.dll. Authz.dll exposes these functions relevant to USB:- AuthzFreeResourceManager AuthzFreeContext AuthzAccessCheck(Flags: DWORD; AuthzClientContext: AUTHZ_CLIENT_CONTEXT_HANDLE; pRequest: PAUTHZ_ACCESS_REQUEST; AuditInfo: AUTHZ_AUDIT_INFO_HANDLE; pSecurityDescriptor: PSECURITY_DESCRIPTOR; OptionalSecurityDescriptorArray: PSECURITY_DESCRIPTOR; OptionalSecurityDescriptorCount: DWORD; //OPTIONAL, Var pReply: AUTHZ_ACCESS_REPLY; pAuthzHandle: PAUTHZ_ACCESS_CHECK_RESULTS_HANDLE): BOOL; AuthzInitializeContextFromSid(Flags: DWORD; UserSid: PSID; AuthzResourceManager: AUTHZ_RESOURCE_MANAGER_HANDLE; pExpirationTime: int64; Identifier: LUID; DynamicGroupArgs: PVOID; pAuthzClientContext: PAUTHZ_CLIENT_CONTEXT_HANDLE): BOOL; AuthzInitializeResourceManager(flags: DWORD; pfnAccessCheck: PFN_AUTHZ_DYNAMIC_ACCESS_CHECK; pfnComputeDynamicGroups: PFN_AUTHZ_COMPUTE_DYNAMIC_GROUPS; pfnFreeDynamicGroups: PFN_AUTHZ_FREE_DYNAMIC_GROUPS; ResourceManagerName: PWideChar; pAuthzResourceManager: PAUTHZ_RESOURCE_MANAGER_HANDLE): BOOL; Further details are in Authz.h, available on kolers.com. J Lex Dean.

    Read the article

  • C#/.NET Little Wonders: The Predicate, Comparison, and Converter Generic Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the last three weeks, we examined the Action family of delegates (and delegates in general), the Func family of delegates, and the EventHandler family of delegates and how they can be used to support generic, reusable algorithms and classes. This week I will be completing my series on the generic delegates in the .NET Framework with a discussion of three more, somewhat less used, generic delegates: Predicate<T>, Comparison<T>, and Converter<TInput, TOutput>. These are older generic delegates that were introduced in .NET 2.0, mostly for use in the Array and List<T> classes.  Though older, it’s good to have an understanding of them and their intended purpose.  In addition, you can feel free to use them yourself, though obviously you can also use the equivalents from the Func family of delegates instead. Predicate<T> – delegate for determining matches The Predicate<T> delegate was a very early delegate developed in the .NET 2.0 Framework to determine if an item was a match for some condition in a List<T> or T[].  The methods that tend to use the Predicate<T> include: Find(), FindAll(), FindLast() Uses the Predicate<T> delegate to find items, in a list/array of type T, that match the given predicate. FindIndex(), FindLastIndex() Uses the Predicate<T> delegate to find the index of an item, in a list/array of type T, that matches the given predicate. The signature of the Predicate<T> delegate (ignoring variance for the moment) is: 1: public delegate bool Predicate<T>(T obj); So, this is a delegate type that supports any method taking an item of type T and returning bool.  In addition, there is a semantic understanding that this predicate is supposed to be examining the item supplied to see if it matches a given criterion. 1: // finds first even number (2) 2: var firstEven = Array.Find(numbers, n => (n % 2) == 0); 3:  4: // finds all odd numbers (1, 3, 5, 7, 9) 5: var allOdds = Array.FindAll(numbers, n => (n % 2) == 1); 6:  7: // find index of first multiple of 5 (4) 8: var firstFiveMultiplePos = Array.FindIndex(numbers, n => (n % 5) == 0); This delegate has typically been succeeded in LINQ by the more general Func family, so that Predicate<T> and Func<T, bool> are logically identical.  Strictly speaking, though, they are different types, so a delegate reference of type Predicate<T> cannot be directly assigned to a delegate reference of type Func<T, bool>, though the same method can be assigned to both. 1: // SUCCESS: the same lambda can be assigned to either 2: Predicate<DateTime> isSameDayPred = dt => dt.Date == DateTime.Today; 3: Func<DateTime, bool> isSameDayFunc = dt => dt.Date == DateTime.Today; 4:  5: // ERROR: once they are assigned to a delegate type, they are strongly 6: // typed and cannot be directly assigned to other delegate types. 7: isSameDayPred = isSameDayFunc; When you assign a method to a delegate, all that is required is that the signature matches.  This is why the same method can be assigned to either delegate type, since their signatures are the same.  However, once the method has been assigned to a delegate type, it is now a strongly-typed reference to that delegate type, and it cannot be assigned to a different delegate type (beyond the bounds of variance depending on Framework version, of course).
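One small aside of my own (not from the original post): if you are handed a Predicate<T> but need a Func<T, bool> (or vice versa), you don't have to rewrite the lambda. A method-group conversion from the existing delegate's Invoke method creates a new delegate of the other type, since only the signatures have to match:

// bridging the two delegate types via Invoke
Predicate<DateTime> isSameDayPred = dt => dt.Date == DateTime.Today;

// wraps the predicate in a new Func<DateTime, bool> delegate
Func<DateTime, bool> isSameDayFunc = isSameDayPred.Invoke;

// and the reverse direction works the same way
Predicate<DateTime> backToPredicate = isSameDayFunc.Invoke;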
Comparison<T> – delegate for determining order Just as the Predicate<T> generic delegate was birthed to give Array and List<T> the ability to perform type-safe matching, the Comparison<T> was birthed to give them the ability to perform type-safe ordering. The Comparison<T> is used in Array and List<T> for: Sort() A form of the Sort() method that takes a comparison delegate; this is an alternate way to custom sort a list/array without having to define custom IComparer<T> classes. The signature for the Comparison<T> delegate looks like (without variance): 1: public delegate int Comparison<T>(T lhs, T rhs); The goal of this delegate is to compare the left-hand side to the right-hand side and return a negative number if the lhs < rhs, zero if they are equal, and a positive number if the lhs > rhs.  Generally speaking, null is considered to be the smallest value of any reference type, so null should always be less than non-null, and two null values should be considered equal. In most sort/ordering methods, you must specify an IComparer<T> if you want to do custom sorting/ordering.  The Array and List<T> types, however, also allow for an alternative Comparison<T> delegate to be used instead; essentially, this lets you perform the custom sort without having to have a custom IComparer<T> class defined. It should be noted, however, that the LINQ OrderBy() and ThenBy() family of methods do not support the Comparison<T> delegate (though one could easily add their own extension methods to create one, or create an IComparer<T> factory class that generates one from a Comparison<T> – see the sketch at the end of this section). So, given this delegate, we could use it to perform easy sorts on an Array or List<T> based on custom fields.  Say for example we have a data class called Employee with some basic employee information: 1: public sealed class Employee 2: { 3: public string Name { get; set; } 4: public int Id { get; set; } 5: public double Salary { get; set; } 6: } And say we had a List<Employee> that contained data, such as: 1: var employees = new List<Employee> 2: { 3: new Employee { Name = "John Smith", Id = 2, Salary = 37000.0 }, 4: new Employee { Name = "Jane Doe", Id = 1, Salary = 57000.0 }, 5: new Employee { Name = "John Doe", Id = 5, Salary = 60000.0 }, 6: new Employee { Name = "Jane Smith", Id = 3, Salary = 59000.0 } 7: }; Now, using the Comparison<T> delegate form of Sort() on the List<Employee>, we can sort our list many ways: 1: // sort based on employee ID 2: employees.Sort((lhs, rhs) => Comparer<int>.Default.Compare(lhs.Id, rhs.Id)); 3:  4: // sort based on employee name 5: employees.Sort((lhs, rhs) => string.Compare(lhs.Name, rhs.Name)); 6:  7: // sort based on salary, descending (note switched lhs/rhs order for descending) 8: employees.Sort((lhs, rhs) => Comparer<double>.Default.Compare(rhs.Salary, lhs.Salary)); So again, you could use this older delegate, which has a lot of logical meaning in its name, or use a generic delegate such as Func<T, T, int> to implement the same sort of behavior.  All this said, one of the reasons, in my opinion, that Comparison<T> isn’t used too often is that it tends to need complex lambdas, and the LINQ ability to order based on projections is much easier to use, though the Array and List<T> sorts tend to be more efficient if you want to perform in-place ordering.
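Here is a minimal sketch of that IComparer<T> factory idea (my own illustration; ComparisonComparer is a hypothetical name, not a Framework type, and the usual System.Collections.Generic and System.Linq usings are assumed): it simply wraps a Comparison<T> delegate so the LINQ ordering operators can consume it.

// adapts a Comparison<T> delegate to the IComparer<T> interface
public sealed class ComparisonComparer<T> : IComparer<T>
{
    private readonly Comparison<T> _comparison;

    public ComparisonComparer(Comparison<T> comparison)
    {
        _comparison = comparison;
    }

    public int Compare(T x, T y)
    {
        return _comparison(x, y);
    }
}

// usage: feeding a Comparison<T> into LINQ's OrderBy() via the adapter
var bySalaryDescending = new ComparisonComparer<Employee>(
    (lhs, rhs) => Comparer<double>.Default.Compare(rhs.Salary, lhs.Salary));
var ordered = employees.OrderBy(emp => emp, bySalaryDescending);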
Converter<TInput, TOutput> – delegate to convert elements The Converter<TInput, TOutput> delegate is used by the Array and List<T> classes to specify how to convert elements from an array/list of one type (TInput) to another type (TOutput).  It is used in an array/list for: ConvertAll() Converts all elements from a List<TInput> / TInput[] to a new List<TOutput> / TOutput[]. The delegate signature for Converter<TInput, TOutput> is very straightforward (ignoring variance): 1: public delegate TOutput Converter<TInput, TOutput>(TInput input); So, this delegate’s job is to take an input item (of type TInput) and convert it to a return result (of type TOutput).  Again, this is logically equivalent to a newer Func delegate with a signature of Func<TInput, TOutput>.  In fact, the latter is how the LINQ conversion methods are defined. So, we could use the ConvertAll() syntax to convert a List<T> or T[] to different types, such as: 1: // get a list of just employee IDs 2: var empIds = employees.ConvertAll(emp => emp.Id); 3:  4: // get a list of all emp salaries, as int instead of double: 5: var empSalaries = employees.ConvertAll(emp => (int)emp.Salary); Note that the expressions above are logically equivalent to using LINQ’s Select() method, which gives you a lot more power: 1: // get a list of just employee IDs 2: var empIds = employees.Select(emp => emp.Id).ToList(); 3:  4: // get a list of all emp salaries, as int instead of double: 5: var empSalaries = employees.Select(emp => (int)emp.Salary).ToList(); The only difference with using LINQ is that many of the methods (including Select()) use deferred execution, which means that often they will not perform the conversion for an item until it is requested.  This has both pros and cons, in that you gain the benefit of not performing work until it is actually needed, but on the flip side, if you want the results now, there is overhead in the behind-the-scenes work that supports deferred execution (it’s supported by the yield return / yield break keywords in C#, which define iterators that maintain current state information). In general, the new LINQ syntax is preferred, but the older Array and List<T> ConvertAll() methods are still around, as is the Converter<TInput, TOutput> delegate. Sidebar: Variance support update in .NET 4.0 Just like our descriptions of Func and Action, these three early generic delegates also support more variance in assignment as of .NET 4.0.  Their new signatures are: 1: // comparison is contravariant on type being compared 2: public delegate int Comparison<in T>(T lhs, T rhs); 3:  4: // converter is contravariant on input and covariant on output 5: public delegate TOutput Converter<in TInput, out TOutput>(TInput input); 6:  7: // predicate is contravariant on input 8: public delegate bool Predicate<in T>(T obj); Thus these delegates can now be assigned to delegates allowing for contravariance (going to a more derived type) or covariance (going to a less derived type) based on whether the parameters are input or output, respectively. A short example of what this permits appears after the summary. Summary Today, we wrapped up our generic delegates discussion by looking at three lesser-used delegates: Predicate<T>, Comparison<T>, and Converter<TInput, TOutput>.  All three of these tend to be replaced by their more generic Func equivalents in LINQ, but that doesn’t mean you shouldn’t understand what they do or can’t use them for your own code, as they do contain semantic meanings in their names that sometimes get lost in the more generic Func name.
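To make the variance sidebar concrete, here is a quick sketch of my own (not from the original post) showing what those in/out annotations permit as of .NET 4.0:

// contravariance: a predicate over the base type can stand in for a
// predicate over a more derived type (Predicate<in T>)
Predicate<object> isNull = obj => obj == null;
Predicate<string> isNullString = isNull;

// Converter<in TInput, out TOutput> is contravariant on input and covariant
// on output, so a Converter<object, string> can serve as a Converter<string, object>
Converter<object, string> describe = obj => obj.ToString();
Converter<string, object> stringToObject = describe;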

    Read the article

  • C#: Why Decorate When You Can Intercept

    - by James Michael Hare
    We've all heard of the old Decorator Design Pattern (here) or used it at one time or another either directly or indirectly.  A decorator is a class that wraps a given abstract class or interface and presents the same (or a superset) public interface but "decorated" with additional functionality.   As a really simplistic example, consider the System.IO.BufferedStream, it itself is a descendent of System.IO.Stream and wraps the given stream with buffering logic while still presenting System.IO.Stream's public interface:   1: Stream buffStream = new BufferedStream(rawStream); Now, let's take a look at a custom-code example.  Let's say that we have a class in our data access layer that retrieves a list of products from a database:  1: // a class that handles our CRUD operations for products 2: public class ProductDao 3: { 4: ... 5:  6: // a method that would retrieve all available products 7: public IEnumerable<Product> GetAvailableProducts() 8: { 9: var results = new List<Product>(); 10:  11: // must create the connection 12: using (var con = _factory.CreateConnection()) 13: { 14: con.ConnectionString = _productsConnectionString; 15: con.Open(); 16:  17: // create the command 18: using (var cmd = _factory.CreateCommand()) 19: { 20: cmd.Connection = con; 21: cmd.CommandText = _getAllProductsStoredProc; 22: cmd.CommandType = CommandType.StoredProcedure; 23:  24: // get a reader and pass back all results 25: using (var reader = cmd.ExecuteReader()) 26: { 27: while(reader.Read()) 28: { 29: results.Add(new Product 30: { 31: Name = reader["product_name"].ToString(), 32: ... 33: }); 34: } 35: } 36: } 37: }            38:  39: return results; 40: } 41: } Yes, you could use EF or any myriad other choices for this sort of thing, but the germaine point is that you have some operation that takes a non-trivial amount of time.  What if, during the production day I notice that my application is performing slowly and I want to see how much of that slowness is in the query versus my code.  Well, I could easily wrap the logic block in a System.Diagnostics.Stopwatch and log the results to log4net or other logging flavor of choice: 1:     // a class that handles our CRUD operations for products 2:     public class ProductDao 3:     { 4:         private static readonly ILog _log = LogManager.GetLogger(typeof(ProductDao)); 5:         ... 6:         7:         // a method that would retrieve all available products 8:         public IEnumerable<Product> GetAvailableProducts() 9:         { 10:             var results = new List<Product>(); 11:             var timer = Stopwatch.StartNew(); 12:             13:             // must create the connection 14:             using (var con = _factory.CreateConnection()) 15:             { 16:                 con.ConnectionString = _productsConnectionString; 17:                 18:                 // and all that other DB code... 19:                 ... 20:             } 21:             22:             timer.Stop(); 23:             24:             if (timer.ElapsedMilliseconds > 5000) 25:             { 26:                 _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms", 27:                     timer.ElapsedMillseconds); 28:             } 29:             30:             return results; 31:         } 32:     } In my eye, this is very ugly.  It violates Single Responsibility Principle (SRP), which says that a class should only ever have one responsibility, where responsibility is often defined as a reason to change.  
This class (and in particular this method) has two reasons to change: If the method of retrieving products changes. If the method of logging changes. Well, we could “simplify” this using the Decorator Design Pattern (here).  If we followed the pattern to the letter, we’d need to create a base decorator that implements the DAO’s public interface and forwards to the wrapped instance.  So let’s assume we break out the ProductDao interface into IProductDao using your refactoring tool of choice (Resharper is great for this). Now, ProductDao will implement IProductDao and get rid of all logging logic: 1:     public class ProductDao : IProductDao 2:     { 3:         // this reverts back to the original version except for the interface added 4:     } 5:  And we create the base Decorator that also implements the interface and forwards all calls: 1:     public class ProductDaoDecorator : IProductDao 2:     { 3:         // protected so that derived decorators can reach the wrapped instance 4:         protected readonly IProductDao _wrappedDao; 5:         6:         // constructor takes the dao to wrap 7:         public ProductDaoDecorator(IProductDao wrappedDao) 8:         { 9:             _wrappedDao = wrappedDao; 10:         } 11:         12:         ... 13:         14:         // and then all methods just forward their calls 15:         public IEnumerable<Product> GetAvailableProducts() 16:         { 17:             return _wrappedDao.GetAvailableProducts(); 18:         } 19:     } This defines our base decorator; then we can create decorators that add items of interest, and for any methods we don’t decorate, we’ll get the default behavior, which just forwards the call to the wrapped instance via the base decorator: 1:     public class TimedThresholdProductDaoDecorator : ProductDaoDecorator 2:     { 3:         private static readonly ILog _log = LogManager.GetLogger(typeof(TimedThresholdProductDaoDecorator)); 4:         5:         public TimedThresholdProductDaoDecorator(IProductDao wrappedDao) : 6:             base(wrappedDao) 7:         { 8:         } 9:         10:         ... 11:         12:         public IEnumerable<Product> GetAvailableProducts() 13:         { 14:             var timer = Stopwatch.StartNew(); 15:             16:             var results = _wrappedDao.GetAvailableProducts(); 17:             18:             timer.Stop(); 19:             20:             if (timer.ElapsedMilliseconds > 5000) 21:             { 22:                 _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms", 23:                     timer.ElapsedMilliseconds); 24:             } 25:             26:             return results; 27:         } 28:     } Well, it’s a bit better.  Now the logging is in its own class, and the database logic is in its own class.  But we’ve essentially multiplied the number of classes.  We now have 3 classes and one interface!  Now if you want to do that same logging decorating on all your DAOs, imagine the code bloat!  Sure, you can simplify and avoid creating the base decorator, or chuck it all and just inherit directly.  But regardless, all of these have the problem of tying the logging logic into the code itself. Enter the Interceptors.  Things like this to me are a perfect example of when it’s good to write an Interceptor using your class library of choice.  Sure, you could design your own perfectly generic decorator with delegates and all that, but personally I’m a big fan of Castle’s Dynamic Proxy (here), which is actually used by many projects including Moq.
What DynamicProxy allows you to do is intercept calls into any object by wrapping it with a proxy on the fly that intercepts the method and allows you to add functionality.  Essentially, the code would now look like this using DynamicProxy: 1: // Note: I like hiding DynamicProxy behind the scenes so users 2: // don't have to explicitly add a reference to Castle's libraries. 3: public static class TimeThresholdInterceptor 4: { 5: // Our logging handle 6: private static readonly ILog _log = LogManager.GetLogger(typeof(TimeThresholdInterceptor)); 7:  8: // Handle to Castle's proxy generator 9: private static readonly ProxyGenerator _generator = new ProxyGenerator(); 10:  11: // generic form for those who prefer it 12: public static TInterface Create<TInterface>(object target, TimeSpan threshold) 13: { 14: return (TInterface)Create(typeof(TInterface), target, threshold); 15: } 16:  17: // Form that uses the type instead 18: public static object Create(Type interfaceType, object target, TimeSpan threshold) 19: { 20: return _generator.CreateInterfaceProxyWithTarget(interfaceType, target, 21: new TimedThreshold(threshold)); 22: } 23:  24: // The interceptor that is created to intercept the interface calls. 25: // Hidden as a private inner class so as not to expose Castle libraries. 26: private class TimedThreshold : IInterceptor 27: { 28: // The threshold as a positive timespan that triggers a log message. 29: private readonly TimeSpan _threshold; 30:  31: // interceptor constructor 32: public TimedThreshold(TimeSpan threshold) 33: { 34: _threshold = threshold; 35: } 36:  37: // Intercept functor for each method invocation 38: public void Intercept(IInvocation invocation) 39: { 40: // time the method invocation 41: var timer = Stopwatch.StartNew(); 42:  43: // the Castle magic that tells the method to go ahead 44: invocation.Proceed(); 45:  46: timer.Stop(); 47:  48: // check if threshold is exceeded 49: if (timer.Elapsed > _threshold) 50: { 51: _log.WarnFormat("Long execution in {0} took {1} ms", 52: invocation.Method.Name, 53: timer.ElapsedMilliseconds); 54: } 55: } 56: } 57: } Yes, it’s a bit longer, but notice that: This class ONLY deals with logging long method calls, no DAO interface leftovers. This class can be used to time ANY class that has an interface or virtual methods. Personally, I like to wrap and hide the usage of DynamicProxy and IInterceptor so that anyone who uses this class doesn’t need to know to add a Castle library reference.  As far as they are concerned, they’re using my interceptor.  If I change to a new library because a better one comes along, they’re insulated. Now, all we have to do to use this is to tell it to wrap our ProductDao and it does the rest: 1: // wraps a new ProductDao with a timing interceptor with a threshold of 5 seconds 2: IProductDao dao = TimeThresholdInterceptor.Create<IProductDao>(new ProductDao(), TimeSpan.FromSeconds(5)); Automatic decoration of all methods!  You can even refine the proxy so that it only intercepts certain methods. This is ideal for so many things.  These are just some of the interceptors we’ve dreamed up and use: Log parameters and returns of methods to XML for auditing. Block invocations to methods and return a default value (stubbing). Throw an exception if certain methods are called (good for blocking access to deprecated methods). Log entrance and exit of a method and the duration. Log a message if a method takes more than a given time threshold to execute. Whether you use DynamicProxy or some other technology, I hope you see the benefits this adds.
Does it completely eliminate all need for the Decorator pattern? No, there may still be cases where you want to decorate a particular class with functionality that doesn’t apply to the world at large. But for all those cases where you are using Decorator to add functionality that’s truly generic, I strongly suggest you give this a try!
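To round out the list of interceptor ideas above, here is a sketch of my own (assuming the same Castle DynamicProxy and log4net references used in this post) for the "log entrance and exit of a method and the duration" case:

// logs entry to and exit from every intercepted method, with duration
public class EntryExitInterceptor : IInterceptor
{
    private static readonly ILog _log = LogManager.GetLogger(typeof(EntryExitInterceptor));

    public void Intercept(IInvocation invocation)
    {
        _log.DebugFormat("Entering {0}", invocation.Method.Name);
        var timer = Stopwatch.StartNew();

        // let the call flow through to the wrapped target
        invocation.Proceed();

        timer.Stop();
        _log.DebugFormat("Exiting {0} after {1} ms",
            invocation.Method.Name, timer.ElapsedMilliseconds);
    }
}

Wiring it up is the same kind of one-liner as before: var dao = new ProxyGenerator().CreateInterfaceProxyWithTarget<IProductDao>(new ProductDao(), new EntryExitInterceptor());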

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today’s post, I’ll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system. The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education during the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn’t possibly have been less pleased than me. At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach. Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I’ve heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements. My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn’t put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn’t appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn’t work. Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that’s on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.
The process is slightly different, but I’m happy to report that it IS possible, although I had some fits and starts along the way. DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help. I won’t provide the step-by-step instructions in this post, as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal. Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn’t perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. Then, the instructions in Part 2 won’t work exactly as written for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info: You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client (the specific setting is shown at the end of this section). You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install and provide a command-line instruction to execute which enables the required Windows features. I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error: Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn’t refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there’s no longer any reference to the November CTP. Here’s the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don’t need to download the add-in anymore if you’re doing a regular server-based installation of SharePoint, because it installs as part of the prerequisites automatically.
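For reference, the config.xml edit mentioned at the top of that list comes down to a single setting. This is how I understand it from the MSDN article referenced above (check that article for the exact placement before relying on it); the setting goes inside the existing Configuration element of the setup media's config.xml:

<Configuration>
  ...
  <!-- permits SharePoint 2010 setup to run on a client OS such as Windows 7 -->
  <Setting Id="AllowWindowsClientInstall" Value="True"/>
</Configuration>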
When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions: On the Choose the installation you want page of the installation wizard, I chose Server Farm. On the Server Type page, I chose Complete. At the end of the installation, I did not run the configuration wizard. Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn’t until much later, as I was investigating an error, that I encountered Dave Wickert’s post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven’t tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I do have it installed. Continue reading to see how I accomplished my objective. I uninstalled SQL Server 2008 R2 and started again. I had different problems, which I don’t recollect now. However, I uninstalled again, approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed: Install SQL Server 2008 R2 to get a database engine instance installed. Run the SharePoint configuration wizard to set up the SharePoint databases. In Central Administration, create a Web application using classic mode authentication, as per a TechNet article on PowerPivot Authentication and Authorization. Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it’s good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment. I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon. Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information.
I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I’m sure there are many other folks in the Microsoft BI community that will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • The Red Gate and .NET Reflector Debacle

    - by Rick Strahl
    About a month ago Red Gate – the company who owns the .NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid-for license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works.  In case you’ve been living under a rock and you’ve never looked at Reflector, here’s what it looks like drilled into an assembly from disk with some disassembled source code showing: Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view, so it’s extremely easy to navigate and follow the code flow even in this static assembly-only view. For many years Lutz kept the tool up to date and gradually added more features, improving an already amazing tool and making it better. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with prominent Red Gate promos on them, life with Reflector went on. The product didn’t die and it didn’t go commercial or to a charge model. When .NET 4.0 came out it still continued to work, mostly because the .NET feature set doesn’t drastically change how types behave.  Then a month back Red Gate started making noise about a new version, Version 7, which would be commercial. No more free version - and a shit storm broke out in the community. Now normally I’m not one to be critical of companies trying to make money from a product, much less for a product that’s as incredibly useful as Reflector. There isn’t a day in .NET development that goes by for me where I don’t fire up Reflector. Whether it’s for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently, I’ve been doing a lot of Interop work with a non-.NET application that needs to access .NET components, and Reflector has been immensely valuable to me (and my clients) in figuring out the exact type signatures required to call .NET components in assemblies. In short, Reflector is an invaluable tool to me. Ok, so what’s the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn’t going to kill any developer. If there’s any tool in .NET that’s worth $39 it’s Reflector, right? Right, but that’s not the problem here. The problem is how Red Gate went about moving the product to a commercial model, which borders on the downright bizarre. It’s almost as if somebody in management wrote a slogan: “How can we piss off the .NET community in the most painful way we can?” And at that, it seems, Red Gate has utterly succeeded. People are rabid, and for once I think that this outrage isn’t exactly misplaced. Take a look at the message thread that Red Gate linked from the download page. Not only is Version 7 going to be a paid commercial tool, but the older versions of Reflector won’t be available any longer. Not only that, but older versions that are already in use will also continually try to update themselves to the new paid version – which, when installed, will then expire unless registered properly.
There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven’t seen that myself, so I’m not sure if that’s true). In other words, Red Gate is trying to make damn sure they’re getting your money if you attempt to use Reflector. There’s a lot of temptation there. Think about the millions of .NET developers out there and all of them possibly upgrading – that’s a nice chunk of change that Red Gate’s sitting on. Even with all the community backlash these guys are probably making some bank right now, just because people need to get on with life. Red Gate also put up a Feedback link on the download page – which, not surprisingly, is chock full of hate mail condemning the move. Oddly there’s not a single response to any of those messages by the Red Gate folks except when it concerns license questions for the full version. It puzzles me what purpose that link serves other than as yet another complete example of failure to understand how to handle customer relations. There’s no doubt that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly, without pissing off the entire community and causing so much ill-will. People are pissed off, and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful! Why do companies do boneheaded stuff like this? Red Gate’s original decision to buy Reflector was hotly debated, but at the time most of what would happen was speculation. But I thought it was a smart move for any company that is in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis?  Pairing that marketing with the goodwill of providing a free tool breeds positive feedback that hopefully has a good effect on the company’s visibility and the products it sells. Instead Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a service to the community for free while still getting valuable marketing. What’s so puzzling about this boneheaded escapade is that the company doesn’t need to resort to underhanded tactics like what they are trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, SQL Data Compare and ANTS Profiler on a regular basis, and all of these tools are essential in my toolbox. They certainly work much better than the tools that are in the box with Visual Studio. Chances are that if Reflector 7 added useful features, I would have been more than happy to shell out my $39 to upgrade when the time came. It’s Expensive to give away stuff for Free At the same time, this episode shows some of the big problems that come with ‘free’ tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive, and the payback is often intangible, if there is any at all. Those that rely on donations or other voluntary compensation find that the amount contributed is so minuscule as to not matter at all.
Yet at the same time I bet most of those clamouring the loudest on that Red Gate Reflector feedback page about Reflector no longer being free have probably NEVER made a donation to any open source project or free tool, ever. The expectation of Free these days is just too great – which is a shame, I think. There’s a lot to be said for paid software and having somebody to hold responsible because you gave them some money. There’s an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working on and improving products. Reasons for giving away stuff are many, but often it’s a naïve desire to share things when things are simple. At first it might be no problem to volunteer time and effort, but as products mature the fun goes out of it, and as the reality of product maintenance kicks in, developers want to get something back for the time and effort they’re putting into doing non-glamorous work. It’s then that products die or languish, and this is painful for all to watch. For Red Gate however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: visibility and possible positioning of their products, although they seemed to have mostly ignored that option. On the other hand, they started this off pretty badly even two and a half years back when they acquired Reflector from Lutz with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what folks in management are thinking – the sad part is that, from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this, and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades… Alternatives are coming For me personally the single license isn’t a problem, but I actually have a tool that I sell (an interop Web Service proxy generation tool) to customers, and one of the things I recommend using alongside it has been Reflector, to view assembly information and to find which Interop classes to instantiate from the non-.NET environment. It’s been nice to use Reflector for this, with its small footprint and zero-configuration installation. But now, with V7 becoming a paid tool, that option is not going to be available anymore. Luckily it looks like the .NET community is jumping in and trying to fill the void. Amidst the Red Gate outrage a new tool called ILSpy has sprung up, providing at least some of the core functionality of Reflector as an open source project. It looks promising going forward, and I suspect there will be a lot more support and interest in this project now that Reflector has gone over to the ‘dark side’… © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Movement and Collision with AABB

    - by Jeremy Giberson
    I'm having a little difficulty figuring out the following scenarios. http://i.stack.imgur.com/8lM6i.png

    In scenario A, the moving entity has fallen to (and slightly into) the floor. The current position represents the projected position that will occur if I apply the acceleration & velocity as usual, without worrying about collision. The next position represents the corrected projected position after the collision check. The resulting end position is the falling entity now resting ON the floor – that is, in a constant state of collision, sharing its bottom edge with the floor's top edge.

    My current update loop looks like the following:
    // figure out forces & accelerations and project an object's next position
    // check collision occurrence from current position -> projected position
    // if a collision occurs, adjust the projected position

    This seems to be working for the scenario of my object falling to the floor. However, the situation becomes sticky when trying to figure out scenarios B & C.

    In scenario B, I'm attempting to move along the floor on the X axis (the player is pressing the right direction button) while gravity is also pulling the object into the floor. The problem is, when the object attempts to move, the collision detection code recognizes that the object is already colliding with the floor to begin with, and auto-corrects any movement back to where it was before.

    In scenario C, I'm attempting to jump off the floor. Again, because the object is already in constant collision with the floor, when the collision routine checks that moving from the current position to the projected position doesn't result in a collision, it fails, because at the beginning of the motion the object is already colliding.

    How do you allow movement along the edge of an object? How do you allow movement away from an object you are already colliding with?

    Extra Info: My collision routine is based on the AABB sweep test from an old Gamasutra article, http://www.gamasutra.com/view/feature/3383/simple_intersection_tests_for_games.php?page=3 My bounding box implementation is based on top left/bottom right instead of midpoint/extents, so my min/max functions are adjusted. Otherwise, here is my bounding box class with collision routines (a small sketch of one way out of scenarios B & C follows the code):

    public class BoundingBox {
        public XYZ topLeft;
        public XYZ bottomRight;

        public BoundingBox(float x, float y, float z, float w, float h, float d) {
            topLeft = new XYZ();
            bottomRight = new XYZ();
            topLeft.x = x; topLeft.y = y; topLeft.z = z;
            bottomRight.x = x + w; bottomRight.y = y + h; bottomRight.z = z + d;
        }

        public BoundingBox(XYZ position, XYZ dimensions, boolean centered) {
            topLeft = new XYZ();
            bottomRight = new XYZ();
            topLeft.x = position.x; topLeft.y = position.y; topLeft.z = position.z;
            bottomRight.x = position.x + (centered ? dimensions.x/2 : dimensions.x);
            bottomRight.y = position.y + (centered ? dimensions.y/2 : dimensions.y);
            bottomRight.z = position.z + (centered ? dimensions.z/2 : dimensions.z);
        }

        /**
         * Check if a point lies inside a bounding box
         */
        public static boolean isPointInside(BoundingBox box, XYZ point) {
            if (box.topLeft.x <= point.x && point.x <= box.bottomRight.x
             && box.topLeft.y <= point.y && point.y <= box.bottomRight.y
             && box.topLeft.z <= point.z && point.z <= box.bottomRight.z)
                return true;
            return false;
        }

        /**
         * Check for overlap between two bounding boxes using the separating axis theorem:
         * if two boxes are separated on any axis, they cannot be overlapping.
         */
        public static boolean isOverlapping(BoundingBox a, BoundingBox b) {
            XYZ dxyz = new XYZ(b.topLeft.x - a.topLeft.x,
                               b.topLeft.y - a.topLeft.y,
                               b.topLeft.z - a.topLeft.z);
            // if b - a is positive, a is first on the axis and we should use its extent
            // if b - a is negative, b is first on the axis and we should use its extent
            // check for x axis separation
            if ((dxyz.x >= 0 && a.bottomRight.x - a.topLeft.x < dxyz.x)
                // negative scale, reverse extent sum, flip equality
             || (dxyz.x < 0 && b.topLeft.x - b.bottomRight.x > dxyz.x))
                return false;
            // check for y axis separation
            if ((dxyz.y >= 0 && a.bottomRight.y - a.topLeft.y < dxyz.y)
             || (dxyz.y < 0 && b.topLeft.y - b.bottomRight.y > dxyz.y))
                return false;
            // check for z axis separation
            if ((dxyz.z >= 0 && a.bottomRight.z - a.topLeft.z < dxyz.z)
             || (dxyz.z < 0 && b.topLeft.z - b.bottomRight.z > dxyz.z))
                return false;
            // not separated on any axis, overlapping
            return true;
        }

        public static boolean isContactEdge(int xyzAxis, BoundingBox a, BoundingBox b) {
            switch (xyzAxis) {
                case XYZ.XCOORD:
                    if (a.topLeft.x == b.bottomRight.x || a.bottomRight.x == b.topLeft.x) return true;
                    return false;
                case XYZ.YCOORD:
                    if (a.topLeft.y == b.bottomRight.y || a.bottomRight.y == b.topLeft.y) return true;
                    return false;
                case XYZ.ZCOORD:
                    if (a.topLeft.z == b.bottomRight.z || a.bottomRight.z == b.topLeft.z) return true;
                    return false;
            }
            return false;
        }

        /** Sweep test min extent value */
        public static float min(BoundingBox box, int xyzCoord) {
            switch (xyzCoord) {
                case XYZ.XCOORD: return box.topLeft.x;
                case XYZ.YCOORD: return box.topLeft.y;
                case XYZ.ZCOORD: return box.topLeft.z;
                default: return 0f;
            }
        }

        /** Sweep test max extent value */
        public static float max(BoundingBox box, int xyzCoord) {
            switch (xyzCoord) {
                case XYZ.XCOORD: return box.bottomRight.x;
                case XYZ.YCOORD: return box.bottomRight.y;
                case XYZ.ZCOORD: return box.bottomRight.z;
                default: return 0f;
            }
        }

        /**
         * Test if bounding box A will overlap bounding box B at any point when box A
         * moves from position 0 to position 1 and box B moves from position 0 to position 1.
         * Note: the sweep test assumes bounding boxes A and B's dimensions do not change.
         *
         * @param a0 box a starting position
         * @param a1 box a ending position
         * @param b0 box b starting position
         * @param b1 box b ending position
         * @param aCollisionOut xyz of box a's position when/if a collision occurs
         * @param bCollisionOut xyz of box b's position when/if a collision occurs
         */
        public static boolean sweepTest(BoundingBox a0, BoundingBox a1,
                                        BoundingBox b0, BoundingBox b1,
                                        XYZ aCollisionOut, XYZ bCollisionOut) {
            // solve in reference to A
            XYZ va = new XYZ(a1.topLeft.x - a0.topLeft.x, a1.topLeft.y - a0.topLeft.y, a1.topLeft.z - a0.topLeft.z);
            XYZ vb = new XYZ(b1.topLeft.x - b0.topLeft.x, b1.topLeft.y - b0.topLeft.y, b1.topLeft.z - b0.topLeft.z);
            XYZ v = new XYZ(vb.x - va.x, vb.y - va.y, vb.z - va.z);

            // check for initial overlap
            if (BoundingBox.isOverlapping(a0, b0)) {
                // java pass by ref/value gotcha: have to modify the value, can't reassign it
                aCollisionOut.x = a0.topLeft.x; aCollisionOut.y = a0.topLeft.y; aCollisionOut.z = a0.topLeft.z;
                bCollisionOut.x = b0.topLeft.x; bCollisionOut.y = b0.topLeft.y; bCollisionOut.z = b0.topLeft.z;
                return true;
            }

            // overlap min/maxs
            XYZ u0 = new XYZ();
            XYZ u1 = new XYZ(1, 1, 1);
            float t0, t1;

            // iterate axes and find overlap times (x=0, y=1, z=2)
            for (int i = 0; i < 3; i++) {
                float aMax = max(a0, i);
                float aMin = min(a0, i);
                float bMax = max(b0, i);
                float bMin = min(b0, i);
                float vi = XYZ.getCoord(v, i);

                if (aMax < bMax && vi < 0)
                    XYZ.setCoord(u0, i, (aMax - bMin) / vi);
                else if (bMax < aMin && vi > 0)
                    XYZ.setCoord(u0, i, (aMin - bMax) / vi);
                if (bMax > aMin && vi < 0)
                    XYZ.setCoord(u1, i, (aMin - bMax) / vi);
                else if (aMax > bMin && vi > 0)
                    XYZ.setCoord(u1, i, (aMax - bMin) / vi);
            }

            // get times of collision
            t0 = Math.max(u0.x, Math.max(u0.y, u0.z));
            t1 = Math.min(u1.x, Math.min(u1.y, u1.z));

            // collision only occurs if t0 < t1
            if (t0 <= t1 && t0 != 0) // t0 != 0 because we already tested initial overlap!
            {
                // t0 is the normalized time of the collision, so the position of the
                // bounding boxes is their original position + velocity * time
                aCollisionOut.x = a0.topLeft.x + va.x * t0;
                aCollisionOut.y = a0.topLeft.y + va.y * t0;
                aCollisionOut.z = a0.topLeft.z + va.z * t0;
                bCollisionOut.x = b0.topLeft.x + vb.x * t0;
                bCollisionOut.y = b0.topLeft.y + vb.y * t0;
                bCollisionOut.z = b0.topLeft.z + vb.z * t0;
                return true;
            }
            else return false;
        }
    }
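
    One common way out of scenarios B & C is to resolve movement one axis at a time and to leave a tiny separation (epsilon) after each correction, so a resting body never starts the next frame already overlapping. Below is a minimal C# sketch of that idea; the Box type and Move helper are made up for illustration and are not part of the code above:

    // Hedged sketch: per-axis "move and slide" with an epsilon gap after each
    // correction. A floor hit then only cancels the vertical motion, leaving
    // walking (X) and jumping (negative Y) free. Illustrative types only.
    using System;

    struct Box
    {
        public float X, Y, W, H;   // top-left + size, matching the question's convention
        public bool Overlaps(Box o) =>
            X < o.X + o.W && o.X < X + W &&
            Y < o.Y + o.H && o.Y < Y + H;
    }

    static class Collision
    {
        const float Epsilon = 0.001f; // keep bodies this far apart after a hit

        public static Box Move(Box body, float dx, float dy, Box solid)
        {
            body.X += dx;
            if (body.Overlaps(solid))
                body.X = dx > 0 ? solid.X - body.W - Epsilon
                                : solid.X + solid.W + Epsilon;

            body.Y += dy;
            if (body.Overlaps(solid))
                body.Y = dy > 0 ? solid.Y - body.H - Epsilon
                                : solid.Y + solid.H + Epsilon;
            return body;
        }
    }

    class Demo
    {
        static void Main()
        {
            var floor = new Box { X = 0, Y = 10, W = 100, H = 5 };
            var player = new Box { X = 5, Y = 9, W = 1, H = 1 }; // resting on the floor

            // gravity pulls down (dy > 0 with a top-left origin) while the player walks right
            player = Collision.Move(player, 2f, 1f, floor);
            Console.WriteLine(player.X + ", " + player.Y); // X advanced; Y clamped just above the floor
        }
    }

    Because the touching (not overlapping) body is kept an epsilon outside the solid, the "already colliding at the start of the motion" case from scenarios B and C never arises, and each axis is corrected independently of the other.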

    Read the article

  • 3D game engine for networked world simulation / AI sandbox

    - by Martin
    More than 5 years ago I was playing with DirectSound and Direct3D and I found it really exciting, although it took a lot of time to get good results with C++. I was a college student then. Now I have mostly enterprise development experience in C# and PHP, and I do it for a living. There is really no chance to earn money with serious game development in our country. More and more each day, I find that I miss something. So I decided to spend an hour or so each day programming for fun.

    My idea is to build a world simulation. I would like to begin with something simple: some human-like creatures that live their lives, like The Sims 3 but much simpler – just basic needs, basic animations, minimal graphic assets. I guess it won't be a city but just a large house for a start. The idea is to have some kind of server application which stores the world data in a MySQL database, and some client applications – body-less AI bots which simulate movement and some interactions with the world and each other. But it wouldn't be fun without 3D. So there are also 3D clients: I can enter that virtual world and see the AI bots living. When a bot enters the visible area, it becomes material – it loads a mesh and animations so I can see it. When I leave, the bots lose their 3D mesh bodies again, but their virtual life still continues (a rough sketch of this idea follows this question). With time I hope to make it into an expandable, scriptable sandbox to experiment with various AI algorithms and so on. But I do not intend to create a full-blown MMORPG :D

    I have looked at many of the things I would need (free and open source) and now I have to make a choice:
    - OGRE3D + enet (or RakNet). Good old C++. But won't it slow me down so much that I won't have fun any more?
    - CrystalSpace. Formally not a game engine, but very close to one. C++ again.
    - MOgre (an OGRE3D wrapper for .NET) + lidgren (a networking library which is already used in some gaming projects). Good – I like C#; it is good for fast programming and can also be used for scripting.
    - XNA seems to be just a framework, not an engine, so I really have doubts whether I should even look at XNA Game Studio :(
    - Panda3D, a full game engine with positive feedback. I really like the idea of having the whole toolset in one package, and it has good reviews as a beginner-friendly engine... if you know Python. On the C++ side, Panda3D has almost non-existent documentation. I have zero experience with Python, but I've heard it is easy to learn. And if it is fun and challenging, then I guess I would benefit from experience in one more programming language.

    Which of those would you suggest – not because of advanced features or good platform support, but mostly for fun, easy workflow and expandability, and so that I can create and integrate all the components I need: the server with the database, AI bots, and a 3D client application?
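
    Since several of the options above are .NET-based, here is a tiny C# illustration of the "bots only carry bodies while observed" idea described in the question. Every type and call in it (Bot, World, the materialize/dematerialize stand-ins) is hypothetical and engine-agnostic, purely a sketch of the interest-management pattern:

    // Hypothetical sketch: a bot simulates its life at all times, but only
    // loads a renderable body while at least one 3D client can see it.
    using System;
    using System.Collections.Generic;

    class Bot
    {
        public string Name;
        public double X, Y;                     // position in the world
        public bool HasBody { get; private set; }

        public void Materialize()               // stand-in for "load mesh + animations"
        {
            if (!HasBody) { HasBody = true; Console.WriteLine(Name + ": body loaded"); }
        }

        public void Dematerialize()             // stand-in for unloading the 3D assets
        {
            if (HasBody) { HasBody = false; Console.WriteLine(Name + ": body unloaded"); }
        }

        public void SimulateTick()              // AI keeps running with or without a body
        {
            X += 0.1;                           // wander a little
        }
    }

    class World
    {
        const double ViewRadius = 10.0;

        static void Tick(List<Bot> bots, (double X, double Y)[] observers)
        {
            foreach (var bot in bots)
            {
                bot.SimulateTick();
                bool seen = false;
                foreach (var o in observers)
                {
                    double dx = bot.X - o.X, dy = bot.Y - o.Y;
                    if (dx * dx + dy * dy <= ViewRadius * ViewRadius) { seen = true; break; }
                }
                if (seen) bot.Materialize(); else bot.Dematerialize();
            }
        }

        static void Main()
        {
            var bots = new List<Bot> { new Bot { Name = "Alice", X = 5 }, new Bot { Name = "Bob", X = 50 } };
            Tick(bots, new[] { (0.0, 0.0) });   // one 3D client near the origin
            // Alice gets a body; Bob keeps simulating without one.
        }
    }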

    Read the article

  • zLib on iPhone, stop at first BLOCK

    - by cedric
    I am trying to call the iPhone zlib to decompress the zlib stream from our HTTP-based server, but the code always stops after finishing the first zlib block. Obviously, the iPhone SDK is using the standard open-source zlib. My suspicion is that the parameters for inflateInit2 are not appropriate here. I spent a lot of time reading the zlib manual, but it isn't that helpful. Here are the details; your help is appreciated. (A note on the likely server side follows the code.)

    (1) The HTTP request:
    NSURL *url = [NSURL URLWithString:@"http://192.168.0.98:82/WIC?query=getcontacts&PIN=12345678&compression=Y"];

    (2) The data I get from the server looks like this once decompressed (the stream was compressed by the C# DeflateStream class):
    $REC_TYPE=SYS
    Status=OK
    Message=OK
    SetID=
    IsLast=Y
    StartIndex=0
    LastIndex=6
    EOR
    ......
    $REC_TYPE=CONTACTSDISTLIST
    ID=2
    Name=CTU+L%2EA%2E
    OnCallEnabled=Y
    OnCallMinUsers=1
    OnCallEditRight=
    OnCallEditDLRight=D
    Fields=
    CL=
    OnCallStatus=
    EOR

    (3) However, I only ever get the first block. The decompression code on the iPhone (copied from a code snippet found here) is as follows. The loop between lines 23~38 always breaks on its second execution.

    + (NSData *) uncompress: (NSData*) data {
     1    if ([data length] == 0) return nil;
     2    NSInteger length = [data length];
     3    unsigned full_length = length;
     4    unsigned half_length = length / 2;
     5    NSMutableData *decompressed = [NSMutableData dataWithLength: 5*full_length + half_length];
     6    BOOL done = NO;
     7    int status;
     8    z_stream strm;
     9    length = length - 4;
    10    void* bytes = malloc(length);
    11    NSRange range;
    12    range.location = 4;
    13    range.length = length;
    14    [data getBytes: bytes range: range];
    15    strm.next_in = bytes;
    16    strm.avail_in = length;
    17    strm.total_out = 0;
    18    strm.zalloc = Z_NULL;
    19    strm.zfree = Z_NULL;
    20    strm.data_type = Z_BINARY;
    21    // if (inflateInit(&strm) != Z_OK) return nil;
    22    if (inflateInit2(&strm, (-15)) != Z_OK) return nil; // It won't work if -15 is changed to a positive number.
    23    while (!done)
    24    {
    25        // Make sure we have enough room and reset the lengths.
    26        if (strm.total_out >= [decompressed length])
    27            [decompressed increaseLengthBy: half_length];
    28        strm.next_out = [decompressed mutableBytes] + strm.total_out;
    29        strm.avail_out = [decompressed length] - strm.total_out;
    30
    31        // Inflate another chunk.
    32        status = inflate(&strm, Z_SYNC_FLUSH); // Z_SYNC_FLUSH --> Z_BLOCK won't work either
    33        if (status == Z_STREAM_END) {
    34
    35            done = YES;
    36        }
    37        else if (status != Z_OK) break;
    38    }
    39    if (inflateEnd(&strm) != Z_OK) return nil;
    40    // Set real length.
    41    if (done)
    42    {
    43        [decompressed setLength: strm.total_out];
    44        return [NSData dataWithData: decompressed];
    45    }
    46    else return nil;
    47  }
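
    For reference, here is a plausible C# sketch of the server side. System.IO.Compression.DeflateStream writes a raw DEFLATE stream with no zlib header or Adler-32 trailer, which is exactly why inflateInit2(&strm, -15) (negative windowBits = raw deflate) is the variant that works on the iPhone. The 4-byte prefix the client skips is assumed here to be an application-level length field, not part of the compressed data:

    // Hedged sketch only: a hypothetical server-side compressor. The 4-byte
    // length prefix is an assumption based on the client skipping 4 bytes.
    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    class Server
    {
        static byte[] CompressResponse(string payload)
        {
            byte[] raw = Encoding.UTF8.GetBytes(payload);
            using (var buffer = new MemoryStream())
            {
                // hypothetical app-level header: uncompressed length, 4 bytes
                buffer.Write(BitConverter.GetBytes(raw.Length), 0, 4);
                // DeflateStream emits raw DEFLATE: no zlib header, no Adler-32 trailer
                using (var deflate = new DeflateStream(buffer, CompressionMode.Compress, leaveOpen: true))
                    deflate.Write(raw, 0, raw.Length);
                return buffer.ToArray();
            }
        }

        static void Main()
        {
            byte[] wire = CompressResponse("$REC_TYPE=SYS\nStatus=OK\nEOR\n");
            Console.WriteLine(wire.Length + " bytes on the wire (4-byte header + raw deflate)");
        }
    }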

    Read the article

  • Team Foundation Server 2012 Build Global List Problems

    - by Bob Hardister
    My experience with the upgrade to and use of TFS 2012 has been very positive. I did come across a couple of issues recently that tripped things up for a while.

    ISSUE 1

    The first issue is that 2012, prior to Update 1, published an invalid build list item value to the collection global list. In 2010, the list item value syntax for the build global list is an underscore between the build definition and the build number. In the 2012 RTM this underscore was replaced with a backslash, which is invalid. Specifically, an upload of the global list fails when the backslash is followed at some point by a period. The error when using the API is:

    <detail ExceptionMessage="TF26204: The account you entered is not recognized. Contact your Team Foundation Server administrator to add your account." BaseExceptionName="Microsoft.TeamFoundation.WorkItemTracking.Server.ValidationException"><details id="600019" http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03"http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03" /></detail>

    A similar error is shown when uploading the global list via the process editor. This issue is corrected in Update 1, where the backslash is changed to a forward slash.

    ISSUE 2

    The second issue is that when upgrading from 2010 to 2012, the builds from 2010 are not published to the 2012 global list. After the upgrade, the 2012 global list doesn't have any builds, and only builds run in 2012 are published to the global list. This was reported to the MSDN forums and Connect. To correct this, I wrote a utility to pull all the builds and recreate the builds global list for each project in each collection. It is a console application with a Program.cs, a GlobalLists.cs, and an app.config (not published here). The utility connects to TFS 2012 and loops through the collections, or through a target collection as specified in the app.config. It then loops through the projects, the build definitions, and the builds; it creates a global list for each project that has at least one build, then imports the new list to TFS. Here's the code for the Program and GlobalLists classes (a short illustration of the three list-item formats follows the code).

    Program.cs

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.TeamFoundation.Framework.Client;
    using Microsoft.TeamFoundation.Framework.Common;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.Server;
    using System.IO;
    using System.Xml;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;
    using System.Diagnostics;
    using Utilities;
    using System.Configuration;

    namespace TFSProjectUpdater_CLC
    {
        class Program
        {
            static void Main(string[] args)
            {
                DateTime temp_d = System.DateTime.Now;
                string logName = temp_d.ToShortDateString();
                logName = logName.Replace("/", "_");
                logName = logName + "_" + temp_d.TimeOfDay;
                logName = logName.Replace(":", ".");
                logName = "TFSGlobalListBuildsUpdater_" + logName + ".log";
                Trace.Listeners.Add(new TextWriterTraceListener(Path.Combine(ConfigurationManager.AppSettings["logLocation"], logName)));
                Trace.AutoFlush = true;
                Trace.WriteLine("Start:" + DateTime.Now.ToString());
                Console.WriteLine("Start:" + DateTime.Now.ToString());

                string tfsServer = ConfigurationManager.AppSettings["TargetTFS"].ToString();
                GlobalLists gl = new GlobalLists();

                // replace this with the URL to your TFS instance
                Uri tfsUri = new Uri("https://" + tfsServer + "/tfs");
                TfsConfigurationServer config = new TfsConfigurationServer(tfsUri, new UICredentialsProvider());
                config.EnsureAuthenticated();

                ITeamProjectCollectionService collectionService = config.GetService<ITeamProjectCollectionService>();
                IList<TeamProjectCollection> collections = collectionService.GetCollections().OrderBy(collection => collection.Name.ToString()).ToList();

                // target collection
                string targetCollection = ConfigurationManager.AppSettings["targetCollection"];
                foreach (TeamProjectCollection coll in collections)
                {
                    if (targetCollection.Equals(string.Empty))
                    {
                        if (!coll.Name.Equals("TFS Archive") && !coll.Name.Equals("DefaultCol") && !coll.Name.Equals("Team Project Template Gallery"))
                        {
                            doWork(coll, tfsServer);
                        }
                    }
                    else
                    {
                        if (coll.Name.Equals(targetCollection))
                        {
                            doWork(coll, tfsServer);
                        }
                    }
                }

                Trace.WriteLine("Finished:" + DateTime.Now.ToString());
                Console.WriteLine("Finished:" + DateTime.Now.ToString());
                if (System.Diagnostics.Debugger.IsAttached)
                {
                    Console.WriteLine("\nHit any key to exit...");
                    Console.ReadKey();
                }
                Trace.Close();
            }

            static void doWork(TeamProjectCollection coll, string tfsServer)
            {
                GlobalLists gl = new GlobalLists();

                // target project
                string targetProject = ConfigurationManager.AppSettings["targetProject"];
                Trace.WriteLine("Collection: " + coll.Name);

                Uri u = new Uri("https://" + tfsServer + "/tfs/" + coll.Name.ToString());
                TfsTeamProjectCollection c = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(u);
                ICommonStructureService icss = c.GetService<ICommonStructureService>();
                try
                {
                    Trace.WriteLine("\tChecking Collection Global Lists.");
                    gl.RebuildBuildGlobalLists(c);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Exception! :" + coll.Name);
                }
            }
        }
    }

    GlobalLists.cs

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.Framework.Client;
    using Microsoft.TeamFoundation.Framework.Common;
    using Microsoft.TeamFoundation.Server;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;
    using Microsoft.TeamFoundation.Build.Client;
    using System.Configuration;
    using System.Xml;
    using System.Xml.Linq;
    using System.Diagnostics;

    namespace Utilities
    {
        public class GlobalLists
        {
            string GL_NewList = @"<gl:GLOBALLISTS xmlns:gl=""http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists"">
      <GLOBALLIST>
      </GLOBALLIST>
    </gl:GLOBALLISTS>";

            public void RebuildBuildGlobalLists(TfsTeamProjectCollection _tfs)
            {
                WorkItemStore wis = new WorkItemStore(_tfs);

                // export the current global lists file for the collection to save as a backup
                XmlDocument globalListsFile = wis.ExportGlobalLists();
                globalListsFile.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_backupGlobalList.xml");
                LogExportCurrentCollectionGlobalListsAsBackup(_tfs);

                // build a new global build list from each build definition within each team project
                IBuildServer buildServer = _tfs.GetService<IBuildServer>();
                foreach (Project p in wis.Projects)
                {
                    XmlDocument newProjectGlobalList = new XmlDocument();
                    newProjectGlobalList.LoadXml(GL_NewList);
                    LogInstanciateNewProjectBuildGlobalList(_tfs, p);
                    BuildNewProjectBuildGlobalList(_tfs, wis, newProjectGlobalList, buildServer, p);
                    LogEndOfProject(_tfs, p);
                }
            }

            // Private Methods
            private static void BuildNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, WorkItemStore wis, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p)
            {
                // locate the template node
                XmlNamespaceManager nsmgr = new XmlNamespaceManager(newProjectGlobalList.NameTable);
                nsmgr.AddNamespace("gl", "http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists");
                XmlNode node = newProjectGlobalList.SelectSingleNode("//gl:GLOBALLISTS/GLOBALLIST", nsmgr);
                LogLocatedGlobalListNode(_tfs, p);

                // add the name attribute for the project build global list
                XmlElement buildListNode = (XmlElement)node;
                buildListNode.SetAttribute("name", "Builds - " + p.Name);
                LogAddedBuildNodeName(_tfs, p);

                // add new builds to the team project build global list
                bool buildsExist = false;
                if (AddNewBuilds(_tfs, newProjectGlobalList, buildServer, p, node, buildsExist))
                {
                    // import the new build global list for each project that has builds;
                    // write out a temp copy of the global list file to be imported
                    newProjectGlobalList.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_" + p.Name + "_" + "newGlobalList.xml");
                    LogImportReady(_tfs, p);
                    wis.ImportGlobalLists(newProjectGlobalList.InnerXml);
                    LogImportComplete(_tfs, p);
                }
            }

            private static bool AddNewBuilds(TfsTeamProjectCollection _tfs, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p, XmlNode node, bool buildsExist)
            {
                var buildDefinitions = buildServer.QueryBuildDefinitions(p.Name);
                foreach (var buildDefinition in buildDefinitions)
                {
                    var builds = buildDefinition.QueryBuilds();
                    foreach (var build in builds)
                    {
                        // insert the builds into the current build list node in the correct 2012 format
                        buildsExist = true;
                        XmlElement listItem = newProjectGlobalList.CreateElement("LISTITEM");
                        listItem.SetAttribute("value", buildDefinition.Name + "/" + build.BuildNumber.ToString().Replace(buildDefinition.Name + "_", ""));
                        node.AppendChild(listItem);
                    }
                }
                if (buildsExist) LogBuildListCreated(_tfs, p);
                else LogNoBuildsInProject(_tfs, p);
                return buildsExist;
            }

            // Logging Methods
            private static void LogExportCurrentCollectionGlobalListsAsBackup(TfsTeamProjectCollection _tfs)
            {
                Trace.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
                Console.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
            }

            private void LogInstanciateNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, Project p)
            {
                Trace.WriteLine("\t\tInstanciated the new build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
                Console.WriteLine("\t\tInstanciated the new build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
            }

            private static void LogLocatedGlobalListNode(TfsTeamProjectCollection _tfs, Project p)
            {
                Trace.WriteLine("\t\tLocated the build global list node for project " + p.Name + " in the " + _tfs.Name + " collection.");
                Console.WriteLine("\t\tLocated the build global list node for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
            }

            private static void LogAddedBuildNodeName(TfsTeamProjectCollection _tfs, Project p)
            {
                Trace.WriteLine("\t\tAdded the name attribute to the build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
                Console.WriteLine("\t\tAdded the name attribute to the build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
            }

            private static void LogBuildListCreated(TfsTeamProjectCollection _tfs, Project p)
            {
                Trace.WriteLine("\t\tAdded all builds into the " + "Builds - " + p.Name + " list in the " + _tfs.Name + " collection.");
                Console.WriteLine("\t\tAdded all builds into the " + "Builds - \n\t\t\t" + p.Name + " list in the \n\t\t\t" + _tfs.Name + " collection.");
            }

            private static void LogNoBuildsInProject(TfsTeamProjectCollection _tfs, Project p)
            {
                Trace.WriteLine("\t\tNo builds found for project " + p.Name + " in the " + _tfs.Name + " collection.");
                Console.WriteLine("\t\tNo builds found for project " + p.Name + " \n\t\t\tin the " + _tfs.Name + " collection.");
            }

            private void LogEndOfProject(TfsTeamProjectCollection _tfs, Project p)
            {
                Trace.WriteLine("\t\tEND OF PROJECT " + p.Name);
                Trace.WriteLine(" ");
                Console.WriteLine("\t\tEND OF PROJECT " + p.Name);
                Console.WriteLine();
            }

            private static void LogImportReady(TfsTeamProjectCollection _tfs, Project p)
            {
                Trace.WriteLine("\t\tReady to import the build global list for project " + p.Name + " to the " + _tfs.Name + " collection.");
                Console.WriteLine("\t\tReady to import the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection.");
            }

            private static void LogImportComplete(TfsTeamProjectCollection _tfs, Project p)
            {
                Trace.WriteLine("\t\tImport of the build global list for project " + p.Name + " to the " + _tfs.Name + " collection completed.");
                Console.WriteLine("\t\tImport of the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection completed.");
            }
        }
    }
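
    To make the format change from Issue 1 concrete, here is a small illustrative C# snippet showing how the same build would be written as a global-list item value by each version; the definition name and build number are made up for this example:

    // Illustrative only: the three list-item value formats discussed in Issue 1,
    // using a made-up definition and Team Build's default <definition>_<date>.<rev> number.
    using System;

    class ListItemFormats
    {
        static void Main()
        {
            string definition = "Nightly";
            string buildNumber = "Nightly_20121105.4";
            string suffix = buildNumber.Replace(definition + "_", "");

            Console.WriteLine(definition + "_" + suffix);   // TFS 2010:         Nightly_20121105.4
            Console.WriteLine(definition + "\\" + suffix);  // TFS 2012 RTM:     Nightly\20121105.4 (invalid once a period follows)
            Console.WriteLine(definition + "/" + suffix);   // TFS 2012 Update 1: Nightly/20121105.4
        }
    }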

    Read the article

  • CodePlex Daily Summary for Tuesday, June 05, 2012

    Popular Releases

    - Application Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7
    - Jolt Environment: Jolt v2 Stable: Many new features. Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in download.
    - tedplay: tedplay 1.0: First public release of the Commodore 264 family (C16, plus/4) music player based on the SDL version of YAPE http://yape.homeserver.hu.
    - SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New features: multilingual support; max users property in the Standings Web Part; games time zone change (UTC +1). Bug fixes: version 1.4 locking problem (http://euro2012.codeplex.com/discussions/358262); "Field Title not found" (v1.3, German SP) (http://euro2012.codeplex.com/discussions/358189#post844228); "Access is denied" for users with contribute rights; installing on non-English versions of SharePoint; title rules. Installing SharePoint Euro 2012 Predictor SharePoint E...
    - xNet: xNet 2.1.1: Release xNet 2.1.1.
    - Command Line Parser Library: 1.9.2.4 stable: This is the first stable of the 1.9.* branch. Added tests for HelpText::AutoBuild. Fixed a minor formatting error in HelpText::DefaultParsingErrorsHandler.
    - myManga: myManga v1.0.0.4: ChangeLog – updating from a previous version: extract the contents of Release - myManga v1.0.0.4.zip into the previous version's folder. Replaces: myManga.exe, BakaBox.dll, CoreMangaClasses.dll, Manga.dll, Plugins/MangaReader.manga.dll, Plugins/MangaFox.manga.dll, Plugins/MangaHere.manga.dll, Plugins/MangaPanda.manga.dll.
    - MVVM Light Toolkit: V4RC (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RC. It includes the DLLs for Windows 8 Release Preview. An updated NuGet package is also available at http://nuget.org/packages/MvvmLightLibsPreview.
    - ExtAspNet: ExtAspNet v3.1.7: +2012-06-03 v3.1.7 (the release notes are mostly garbled in the source feed; the readable fragments mention HtmlEncode/HtmlEncodeFormatString support for the Grid BoundField, HyperLinkField, LinkButtonField and WindowField, with HtmlEncode defaulting to true, UrlEncode for HyperLinkField/WindowField URLs, and a reference to http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.htmlencode).
    - Ela, functional language: Ela Platform 2012.3: 2012.3 is a stabilization release. It contains several improvements to Ela, the standard library and Elide. Ela fixes: the 'Show' op code didn't work correctly when a format string was a thunk; flipping a function wrapped in a thunk caused the VM to crash; a bug fixed in concatenation of a thunk and a lazy list; concatenation of a lazy list and a strict list could cause stack overflow in the case of recursive thunks; a bug fixed in applying 'show' to the result of lazy and strict list...
    - Cross Site Treeview - For MOSS 2007: Cross Site Treeview-Beta - WSP Solution.
    - LiveChat Starter Kit: LCSK v1.5.2: New features: visitor location (city - country) from geo-location; configuration passed via JavaScript for the chat box; new visitor identification (no longer using the IP address as visitor identification). To update from 1.5.1, run the /src/1.5.2-sql-updates.txt SQL script to update your database tables. If you have it installed via NuGet, simply update your package and the file will be included so you can run the update script. New installation: the easiest way to add LCSK to your app is by...
    - Kendo UI ASP.NET Sample Applications: Sample Applications (2012-06-01): Sample application(s) demonstrating the use of Kendo UI in ASP.NET applications.
    - Better Explorer: Better Explorer Beta 1: Finally, the first Beta is here! There were a lot of changes, including: translations into 10 different languages (the translations are not complete and will be updated soon), Conditional Select, new tools for managing archives, a Folder Tools tab, a new search bar and Search Tab, new image editing tools, an update function, many bug fixes, stability fixes, and memory leak fixes, and other new features as well! Please check it out and if there are any problems, let us know. :) Also, do not forge...
    - Player Framework by Microsoft: Player Framework for Windows 8 Metro (Preview 3): Player Framework for HTML/JavaScript and XAML/C# Metro Style Applications. Additional downloads: IIS Smooth Streaming Client SDK for Windows 8, Microsoft PlayReady Client SDK for Metro Style Apps. Release notes: support for Windows 8 Release Preview (released 5/31/12); advertising support (VAST, MAST, VPAID, & clips); miscellaneous improvements and bug fixes.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.54: Fix for issue #18161: pretty-printing a CSS @media rule throws an exception due to a mismatched Indent/Unindent pair.
    - Silverlight Toolkit: Silverlight 5 Toolkit Source - May 2012: Source code for the December 2011 Silverlight 5 Toolkit release.
    - Windows 8 Metro RSS Reader: RSS Reader release 6: Changed background and foreground colors; used VariableSizeGrid layout to wrap blog posts with images; sorts items with images first, text-only last; enabled caching to improve navigation between frames.
    - Json.NET: Json.NET 4.5 Release 6: New feature - added IgnoreDataMemberAttribute support. New feature - added GetResolvedPropertyName to DefaultContractResolver. New feature - added CheckAdditionalContent to JsonSerializer. Change - Metro build now always uses late-bound reflection. Change - JsonTextReader no longer returns content after consecutive underlying content read failures. Fix - fixed bad JSON in an array with error handling creating an infinite loop. Fix - fixed deserializing objects with a non-default cons...
    - DotNetNuke® Community Edition CMS: 06.02.00: Major highlights: fixed an issue in Site Settings where single quotes were treated as escape characters; fixed an issue loading the Mobile Premium Data after upgrading from CE to PE; fixed errors logged when updating folder provider settings; fixed the order of the mobile device capabilities in the Site Redirection Management UI. The User Profile page was completely rebuilt. We needed user profiles to have multiple child pages. This would allow for the most flexibility by still f...

    New Projects

    - Dish: [description garbled in the source feed]
    - DRESSCOLLECTION: An MVC application which helps in creating a range of cloth and dress collections to choose from and buy.
    - FlowTasks: FlowTasks is a framework to develop human and business workflows.
    - Group3.ERP.Roleadministration: Software for generating role definitions (German: "Software zum Generieren von Rollendefinitionen").
    - Hierarchical Pages Orchard Module: Creates a Hierarchical Page content type that does everything the built-in Page type does, but items can also contain other pages (and thus are containable by other pages too).
    - hot24.vn: My first project; it very "cu chuoi".
    - Household Manager: fcsdvsdvsdcd
    - LabView interface for windows HPC: This project is a prototype library that attempts to integrate the Data Acquisition and Data Analysis systems in the same environment. The library is a collection of LabVIEW Virtual Instruments that permits job submission to a Windows HPC cluster. Job results can be easily collected in order to perform actions via the DAQ control. Full documentation and examples are included.
    - markgroves.us: Hello, this is the blog template for http://markgroves.us.
    - MostrarDireccion: Shows the address where each person resides (Spanish: "Muestra la Direccion donde Reside cada persona").
    - MSPTH2012: This project is for MSPTH2012.
    - Pattern Rolling File Appender for log4net: An appender for log4net that is a combination of PatternFileAppender and RollingFileAppender.
    - Powershell By Example: The vision of Powershell By Example is to give those interested in Powershell the chance to learn Powershell scripting by example, with the goal of empowering them to use Powershell to maintain their own Windows environments.
    - Practica Cuarto A: An awesome project (Spanish: "proyecto bakan").
    - Providence: Providence is an application meant to capture audio, video, and screenshots for various purposes.
    - Proyecto T2: Task 2 - learning to use the tool (Spanish: "Tarea 2 - aprendiendo a utilizar la herramienta").
    - Server Config Tools: [description garbled in the source feed; mentions IIS]
    - SharePoint Scheduling Assistant: A scheduling assistant built for schools and universities. Works with out-of-the-box lists and involves no custom workflows.
    - Steam Group Players: This tool is meant to allow access to Steam community groups with the ability to sort group members according to hours spent playing specific games (as well as overall play time). The reason for starting this project was to allow a "player of the week" selection that is slightly more complex than picking one at random or just by total hours played (recently).
    - t2belenlm4c: Ana Belen Landa.
    - TaskLogger: A handy little pop-up that allows you to track time spent on project tasks. Can be used to generate timesheets.
    - TechnicBlog: [description garbled in the source feed]
    - TPL DataFlow Debugger Visualizer: A graphic debugger visualizer for TPL DataFlow networks that lets you see the live state of blocks and relations. Compatible with VS 2012 RC.
    - VsDoc2JsDoc: The goal of this project is to convert VSDoc JavaScript comments of JayData classes to JSDoc comment format. Using this tool you can generate the API reference of your JayData components and keep your JavaScript documentation up to date.
    - WorkOutTimer: This little tool provides an easy-to-use timer for your workouts, trainings, and more. It has two time periods, one for working and one to cool down. By default work time is set to 2 minutes and cool time to 1 minute (kickboxing settings), but you can set each period's time as you want. You can pause, restart, or reset time. It offers a high-visibility timer and also counts rounds. So have a great workout! If you use it and you think it's cool for y...
    - XLIFF Editor: This is an attempt at making an open source XLIFF editor for Windows. I am using the Open Language Tools editor as the primary source of ideas for the XLIFF Editor. I am new to C#, so for me this is an ambitious learning and hopefully successful project. Any help in the form of positive criticism will be received gratefully.
    - xNet: xNet - a class library for .NET Framework, which includes: classes for working with proxy servers (HTTP, Socks4(a), Socks5); classes for working with the HTTP 1.0/1.1 protocol (keep-alive, gzip, deflate, chunked, SSL, proxies and more); classes for working with multithreading (a multithreaded traversal of a collection, asynchronous events and more); helper classes that extend standard .NET Framework classes (FileHelper, DirectoryHelper, StringHelper, XmlHelper, BitHelper and others).
    - ?????? «??????????? ??????»: [name and description garbled in the source feed]

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here.

    In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:

    "- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days.
    - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view.
    - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times.
    Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please, your IDM suite is very good and you simply aren't promoting it well enough."

    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.

    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?

    Jaime Cardoso: I started working in "serious" IT in 1998, when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the systems side – Sun resellers, Solaris stuff, and I even worked with Sun's engineering in the making of a hierarchical storage product (Sun CIS, if you know it) – and the applications side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions for most big customers in Portugal. To name a few projects:
    - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom)
    - The identity repository of 5 out of 5 of the biggest Portuguese banks
    - The Portuguese government's federation of services project

    DP: OK, in your blog response you mentioned 3 topics. 1. Using Oracle's standards-based architecture, you were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc.?

    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path – the use of proprietary technologies in interoperability solutions – but then reality kicks in. As an ISP, I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. As a good part of the systems and apps were running on UNIX, a connector became needed in order for the UNIX boxes to authenticate against AD. And that strategy worked, but each new machine required the component to be installed, monitoring had to be set up for that component, and each new app had to be independently certified. A self-care user portal was an ongoing project; AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being.

    When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on the go-live things weren't properly tested and a general outage followed. Several hours and one rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods and redo the effort spent on the self-service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again.

    Simply by putting an Oracle LDAP server in the middle and replicating the user info from AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was done at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal, combined with a user workflow, so all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer, and even Tru64 systems were, for the first time, integrated with 5 minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to AD for their authentication needs, so from the start everybody knew they weren't 100% free of migration pains, but now they had a single point of problems to look at. If you're looking for numbers:
    - 500K directory entries (users)
    - 200-300 systems
    After the initial setup, I personally integrated about 20 systems/apps against LDAP in one day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: 2. Using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today will be sold again tomorrow. Companies that spread across different markets may see the regulator forcing a sale of part of the company due to monopoly reasons, and companies that operate in multiple countries have to comply with different legislations. Our job as IT architects, while addressing the customers' and employees' authentication services, is quite hard and quite contrary. On one hand, we need to give all of our employees access to the relevant systems, apps and resources – and we already have marketing talking with us, trying to find out who's a customer of the bought company but not of ours, to address. On the other hand, we have to do that keeping in mind that we may have to break up all that effort, and that a different country's legislation may become a problem for a full integration plan.

    That's a job for user federation. You don't want to be the one who's telling your president that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins? Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a federation trust circle, and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view; you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.

    DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example?

    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better; I've seen that most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me explaining these time savings.

    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • Clean Code Developer & Certification in IT - MSCC 21.09.2013

    It was a very busy weekend this time, and quite hectic to organise the second meetup on a Saturday for the Mauritius Software Craftsmanship Community (MSCC), but it was absolutely fun. In the following, I'm writing a brief summary of the topics we spoke about and the new impulses I got.

    "What a meetup... I was positively impressed. At the beginning I thought that no one would actually show up, but then by the time the room got filled. Lots of conversation, great dialogues and fantastic networking between fresh students, experienced students, experienced employees, and self-employed attendees. That's what community is all about!"

    The above quote was my first reaction shortly after the gathering. And despite being busy during the weekend and yesterday, I took my time to reflect a little on what happened and what was said before writing it down here on my blog. Additionally, I was also very curious about possible reactions and blogs from other attendees.

    Reactions from other craftsmen

    Let me quickly give you some links and quotes from others first...

    "Like Jochen posted on facebook, that was indeed a 5+ hours marathon (maybe 4 hours for me but still) … Wohoo! We're indeed a bunch of crazy geeks who did not realise how time flew as we dived into the myriad discussions that sprouted. Yet in the end everyone was happy (:" -- Ish on MSCC meetup - The marathon (:

    "And the 4hours spent @ Talking drums bore its fruit..I was doing something I never did before....reading the borrowed book while walking....and though I was not that familiar with things mentionned in the book...I was skimming,scanning & flipping...reading titles...short paragraphs...and I skipped pages till I reached home." -- Yannick on Mauritius Software Craftsmanship 1st Meet-up

    "Hi Developers, Just wanted to share with you the meetups i attended last Saturday - [...] - The second meetup is the one hosted by Jochen Kirstätter, the MSCC, where the attendees were Craftsman, no woman, this time - all sharing the same passion of being a developer - even though it is on different platforms(Windows - Windows Phone - Linux - Adobe(yes a designer) - .Net) - but we manage to sit at the same table - sharing developer views and experience in the corporate world - also talking about good practice when coding( where Jochen initiated a discussion on Clean Coding ) i could not stay till the end - but from what i have heard - the longer you stay the more fun you have till 1600. Developers in the Facebook grouping i invite you to stay tuned about the various developer communities popping up - where you can come to share and learn good practices, develop the entrepreneurial spirit, and learn and share your passion about technologies" -- Arnaud on Facebook

    More feedback has been posted on the event directly. So, should I really write more? Wouldn't that spoil the impressions?

    Starting the day with a surprise

    Indeed, I was very pleased to stumble over the existence of Mobile Monday Mauritius on LinkedIn, an association around any kind of mobile app development, mobile gadgets and the latest smartphones on the market. Despite the Monday in their name, they had scheduled their recent meeting on a Saturday between 10:00 and 12:00hrs. Wow, what a coincidence! Let's grab the bull by its horns and pay them an introductory visit. As they chose the Ebene Accelerator at the Orange Tower in Ebene, it was a no-brainer to leave home a bit earlier and stop by. It was quite an experience and fun to talk to the geeks over there. Really looking forward to organising something together...

    Arriving at the venue

    As the children got a bit uneasy at the MoMo gathering and I didn't want to disturb them too much, we arrived early at Bagatelle. Well, no problem, as we went for a decent breakfast at Food Lover's Market. Shortly afterwards we went to our venue location, Talking Drums, and prepared the room for the meeting. We only had to take a repro-painting off the wall in order to have a decent area for the projector. All went very smoothly and my two little ones were a great help. Just in time, our first craftsman, Avinash, arrived on the spot. And then the waiting started... Luckily, not for too long. Bit by bit, more and more IT people came to join our meeting. Meanwhile, I used the time to give a brief introduction to the MSCC in general: what we are (hm, maybe I am) trying to achieve, and that the current phase is completely focused on creating more awareness that a community like the MSCC is active here in Mauritius. As soon as we reached a 'critical mass' of about ten people, I asked everyone for a short introduction and bio, just in case... Conversation between participants started to kick in, and we were actually networking more than focusing on our topics of the day. Quick updates on the latest news and developments around the MSCC followed.

    Finally, Clean Code Developer

    No matter what the position is actually called – whether Software Engineer, Software Developer, Programmer, Architect, or Craftsman – anyone working in IT faces almost the same obstacles. In the process of writing software applications there are recurring patterns and principles, combined with common exercises and best practices on how to resolve them. Initiated by the must-read book 'Clean Code' by Robert C. Martin (aka Uncle Bob), the concept of the Clean Code Developer (CCD) was born some years ago. CCD is much like traditional martial arts, where you create awareness of certain principles and learn how to apply practices to improve your style. The CCD initiative recommends indicating your level of knowledge and experience with coloured wrist bands – equivalent to belt colours – for various reasons. Frankly speaking, I think the biggest advantage here is the obvious recognition of conceptual understanding. For example, take the situation of a team meeting... A member with a higher grade in CCD, say Green, sees that there are mainly Red grades to talk to, and adjusts her way of communication to their level of understanding. The choice of words might change, as certain elements of CCD are not yet familiar to all team members. So instead of talking in an abstract way which only Green grades could follow, the whole scenario comes down to Red grade level. Different story, better results...

    Similar to learning martial arts, we only covered two grades on this occasion – black and red. Most interestingly, there was quite some positive feedback and lots of questions about the principles and practices of the red grade. And we gathered real-world examples from various craftsmen and discussed them. Following, the Clean Code Developer Red Grade and some annotations from our meetup:

    CCD Red Grade - Principles
    - Don't Repeat Yourself - DRY
    - Keep It Simple, Stupid (and Short) - KISS
    - Beware of Optimisations!
    - Favour Composition over Inheritance - FCoI

    Interestingly, most of the attendees had already heard those key words but couldn't really classify or categorise them. It's very similar to a situation in which you don't know the name for a particular thing and have to describe it to others... until someone tells you the actual name and suddenly everything is very simple.

    CCD Red Grade - Practices
    - Follow the Boy Scout Rule
    - Root Cause Analysis - RCA
    - Use a Version Control System
    - Apply Simple Refactoring Patterns
    - Reflect Daily

    Introduction to the principles and practices of Clean Code Developer - here: Red Grade

    As for the various to-dos, we commonly agreed that the Boy Scout Rule clearly is not limited to software development or IT administration but applies to daily life in general. Same for root cause analysis, btw. We really had good stories with surprising endings and conclusions. A quick check on who is using a version control system brought more drive into the conversation. Not only did we have people who aren't using any VCS at all, we also had the 'classic' approach of backup folders and naming conventions, as well as the VCS 'junkie' who has to use multiple systems at a time. Just for the record: Git and GitHub seem to be in favour with some of the attendees. Regarding the daily reflection at the end of the day, we came up with an easy solution: wrap it up as a blog entry!

    Certifications in IT

    This is kind of a controversy in IT in general. Is it interesting to go for certifications, or are they completely obsolete? What are the possibilities to get certified? What are the options we have in Mauritius? How would certificates stand compared to other educational tracks like Computer Science or Web Design? The ratio of craftsmen with certifications like MCP, MCTS, CCNA or LPI versus the ones without wasn't in favour of the first group, but there was high interest in the topic itself, and some were really surprised to hear that exam preparation material is freely available online, including temporary voucher codes for either discounts or completely free exams. Furthermore, we discussed possible options for forming so-called study groups on specific certificates and organising more frequent meetups in order to learn together. Taking into consideration that we have sponsored access to the video course material of Pluralsight (and now PeepCode as well as TrainSignal), we might give it a try by the end of the year. Current favourites are LPIC Level 1 and one of the Microsoft exams (70-48x).

    Feedback and ideas for the MSCC

    The closing conversations and discussions about how the MSCC is doing these days, what the possibilities are, and what's (hopefully) going to happen in the future were really fertile, and I made a couple of mental bullet points which I'm looking forward to tackling together with other craftsmen. Eventually, it might be a good option to elaborate on some issues during our weekly Code & Coffee sessions one Wednesday morning. Active discussion on various IT topics like certifications (LPI, MCP, CCNA, etc.) and sharing experience is what it's about.

    Finally, we made it to the end of the planned time. Well, actually the talk was still going on, and we continued even after 16:00hrs. Unfortunately, we (the children and I) had to leave for evening activities.

    My resume of the day... It was great to have 15 craftsmen in one room. There are hundreds of IT geeks out there in Mauritius, and as the Mauritius Software Craftsmanship Community we still have a lot of work to do to pass the message on to some more key players and companies. Currently, it seems that we are able to attract a good number of students in Computer Science... but we have a lot more to offer, even (or especially) for IT people on the job. I'm already looking forward to our next Saturday meetup in the near future.

    PS: Meetup pictures are courtesy of Nirvan Pagooah. Thanks for sharing...

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are use or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In it’s simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and the NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID (‘AnyDatabaseName’). The first columns we get back are database_id and object_id. 
These are pretty self-explanatory and we can wrap them in some code to make things a little easier to read:

SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
     , OBJECT_NAME([ddips].[object_id]) AS [TableName]
     ...
FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

which, with a join to sys.indexes for the index name, gives us:

SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
     , OBJECT_NAME([ddips].[object_id]) AS [TableName]
     , [i].[name] AS [IndexName]
     , ...
FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
INNER JOIN [sys].[indexes] AS i
    ON [ddips].[index_id] = [i].[index_id]
   AND [ddips].[object_id] = [i].[object_id]

These columns handily tie in with the next parameters in the query on the dmv: if you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.:

SELECT *
FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends not using functions directly in the call, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

DECLARE @db_id SMALLINT;
DECLARE @object_id INT;

SET @db_id = DB_ID(N'AdventureWorks_2008');
SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');

IF @db_id IS NULL
BEGIN
    PRINT N'Invalid database';
END
ELSE IF @object_id IS NULL
BEGIN
    PRINT N'Invalid object';
END
ELSE
BEGIN
    SELECT *
    FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
END;
GO

Where the results of querying this dmv have no effect on other processes (i.e. you are simply viewing the results in the SSMS results area), an invalid value will simply be noticed when the results are not consistent with what you expected; that is the method I have used for this blog. So, now that we can relate the values in these columns to something we recognise in the database, let's see what the other values in the dmv are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

avg_fragmentation_in_percent – the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
fragment_count – the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
page_count – the total number of index or data pages in use.

OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages: an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks must do to retrieve the data to satisfy the query increases, and performance starts to drop. This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. That value is arrived at by an internal algorithm that scores the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA investigating performance issues. It is possible that some tables have indexes which suffer rapid increases in fragmentation as part of normal daily business and need regular defragmentation work to keep them in good order; other indexes rarely become fragmented and may not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method to apply, if any. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a correlation between four of these columns: page_count (column 1) is the number of 8KB pages used by the index, avg_page_space_used_in_percent (column 2) is how full each page is (how much of the 8KB has actual data written on it), record_count (column 3) is how many records are recorded in the index, and avg_record_size_in_bytes (column 4) is the average size of each record. This approximates to: ((Col1*8) * 1024 * (Col2/100)) / Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as it indicates how much of the disk that has been given over to the storage of the index actually has data on it; this value is affected by the FILL_FACTOR parameter given when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records fit in each page, and therefore in each fragment, reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: details of the smallest and largest records in the index.
They are offered purely as a guide to help the DBA better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed so that new records do not need to be inserted in the middle of the index but are always added to the end. (* – it's approximate because there are many factors, associated with things like the type of data and other database settings, that affect this slightly.) Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
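To make the 'intelligent process' idea above a little more concrete, here is a minimal sketch in the spirit of the Books Online example D. The 5%/30% thresholds and the 100-page floor are commonly cited rules of thumb, not values taken from this article, so treat them as assumptions to tune for your own environment – and, as ever, try it on a test server first.

-- Suggest a defragmentation action per index based on avg_fragmentation_in_percent.
-- LIMITED mode is used as it is the cheapest mode that returns fragmentation data.
SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName]
     , [i].[name] AS [IndexName]
     , [ddips].[avg_fragmentation_in_percent]
     , CASE
           WHEN [ddips].[avg_fragmentation_in_percent] > 30 THEN 'ALTER INDEX ... REBUILD'
           WHEN [ddips].[avg_fragmentation_in_percent] > 5  THEN 'ALTER INDEX ... REORGANIZE'
           ELSE 'No action required'
       END AS [SuggestedAction]
FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
INNER JOIN [sys].[indexes] AS i
    ON [ddips].[index_id] = [i].[index_id]
   AND [ddips].[object_id] = [i].[object_id]
WHERE [ddips].[index_id] > 0        -- ignore heaps
  AND [ddips].[page_count] > 100;   -- ignore tiny indexes where fragmentation is mostly noise

The output is only a set of suggestions; the Ufford and Hallengren solutions mentioned above wrap this kind of decision in fully fledged, production-tested maintenance scripts.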

    Read the article

  • jQuery Validation hiding or tearing down jQuery Modal Dialog on submit

    - by Programmin Tool
Basically, when clicking the modal-created submit button and calling jQuery('#FormName').submit(), it runs the validation and then calls the method assigned in the submitHandler. After that it either creates a new modal div or hides the form, and I don't get why. I have debugged it and noticed that after the method call in the submitHandler, the form's .is(':hidden') is true, and this is true for the modal div also. I'm positive I've done this before but can't seem to figure out what I've done wrong this time. The odd thing is that a modal div is showing up on the screen, but it's completely devoid of content, even after putting in random text outside of the form – it's like it's a whole new modal div. Here are the set-up methods:

function setUpdateTaskDiv() {
    // Attach validation to the form: TaskSubject is required, errors go to #ErrorDiv.
    jQuery("#UpdateTaskForm").validate({
        errorLabelContainer: "#ErrorDiv",
        wrapper: "div",
        rules: {
            TaskSubject: { required: true }
        },
        messages: {
            TaskSubject: { required: 'Subject is required.' }
        },
        onfocusout: false,
        onkeyup: false,
        submitHandler: function(label) {
            updateTaskSubject(null);
        }
    });

    // Wrap #UpdateDiv in a jQuery UI modal dialog; Submit triggers the form's validation.
    jQuery('#UpdateDiv').dialog({
        autoOpen: false,
        bgiframe: true,
        height: 400,
        width: 500,
        modal: true,
        beforeclose: function() { },
        buttons: {
            Submit: function() {
                jQuery('#UpdateTaskForm').submit();
            },
            Cancel: function() { ... }
        }
    });
}

where updateTaskSubject does nothing – it's just a shell right now:

function updateTaskSubject(task) {
    // does nothing, it's just a shell right now
}

Here's the html:

<div id="UpdateDiv">
    <div id="ErrorDiv"></div>
    <form method="post" id="UpdateTaskForm" action="Calendar.html">
        <div>
            <div class="floatLeft">Date:</div>
            <div class="floatLeft"></div>
            <div class="clear"></div>
        </div>
        <div>
            <div class="floatLeft">Start Time:</div>
            <div class="floatLeft">
                <select id="TaskStartDate" name="TaskStartDate"></select>
            </div>
            <div class="clear"></div>
        </div>
        <div>
            <div class="floatLeft">End Time:</div>
            <div class="floatLeft">
                <select id="TaskEndDate" name="TaskEndDate"></select>
            </div>
            <div class="clear"></div>
        </div>
        <div>
            <div class="floatLeft">Subject:</div>
            <div class="floatLeft">
                <textarea id="TaskSubject" name="TaskSubject" cols="30" rows="10"></textarea>
            </div>
            <div class="clear"></div>
        </div>
        <div>
            <input type="hidden" id="TaskId" value="" />
        </div>
        <div class="clear"></div>
    </form>
</div>

Odd Discovery
Turns out that the examples I got this to work from all put focus back on the modal itself – for example, using the validator to add messages to the error div (success or not). Without doing this, the modal dialog apparently thinks it's done with what it needs to do and just hides everything. Not sure exactly why, but to stop this behavior some kind of focus has to be assigned to something within the div itself.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #037

    - by Pinal Dave
Here is a list of selected articles from SQLAuthority.com across all these years. Instead of listing all the articles, I have picked a few of my favorite articles and listed them here with additional notes below each one. Let me know which of the following is your favorite article from memory lane.

2007

Convert Text to Numbers (Integer) – CAST and CONVERT
If a table column is VARCHAR and has all numeric values in it, it can be retrieved as an Integer using the CAST or CONVERT function.

List All Stored Procedure Modified in Last N Days
If SQL Server suddenly starts behaving unexpectedly and stored procedures were changed recently, the following script can be used to check recently modified stored procedures. If a stored procedure was created but never modified afterwards, its modified date and created date are the same.

Count Duplicate Records – Rows

Validate Field For DATE datatype using function ISDATE()
We always checked the DATETIME field for incorrect data types. One user input the date as 30/2/2007. The date was successfully inserted into the temp table, but inserting from the temp table into the final table crashed with an error. We now had the task of validating the incorrect date value before inserting into the final table. A Jr. Developer asked me how he could do that. We check for incorrect data types (varchar, int, NULL), but this was an incorrect date value; regular expressions work fine with those because of the mm/dd/yyyy format.

2008

Find Space Used For Any Particular Table
It is very simple to find out the space used by any table in the database.

Two Convenient Features Inline Assignment – Inline Operations
Here is a script which does both – inline assignment and an inline operation:

DECLARE @idx INT = 0
SET @idx += 1
SELECT @idx

Introduction to SPARSE Columns
SPARSE columns are better at managing NULL and ZERO values in SQL Server. If a column is created with the SPARSE clause and contains ZERO or NULL, it takes less space than a regular column (one without the SPARSE clause).

SP_CONFIGURE – Displays or Changes Global Configuration Settings
If advanced settings are not enabled at the configuration level, SQL Server will not let users change the advanced features on the server. An authorized user can turn advanced settings on or off.

2009

Standby Servers and Types of Standby Servers
A standby server is a type of server that can be brought online when the primary server goes offline and the application needs continuous (high) availability. There is always a need for a mechanism by which data and objects from the primary server are moved to the secondary (standby) server.

BLOB – Pointer to Image, Image in Database, FILESTREAM Storage
When it comes to storing images in a database there are two common methods. I had previously blogged about the same subject on my visit to Toronto. With SQL Server 2008, we have a new method: FILESTREAM storage. However, the answer on when to use FILESTREAM and when to use the other methods is still vague in the community.

2010

Upper Case Shortcut SQL Server Management Studio
I select the word and hit CTRL+SHIFT+U, and SSMS immediately changes the case of the selected word. Similarly, to convert to lower case, the shortcut CTRL+SHIFT+L is available.

The Self Join – Inner Join and Outer Join
Self join has always been a noteworthy case. It is interesting to ask questions about self joins in a room full of developers. I often ask – if there are three kinds of joins, i.e. inner join, outer join and cross join, what type of join is a self join? The usual answer is that it is an inner join. However, the reality is very different.

Parallelism – Row per Processor – Row per Thread – Thread 0
If you look carefully in the Properties window or the XML plan, there is a "Thread 0". What does this "Thread 0" indicate? Find out in the blog post.

How do I Learn and How do I Teach
The blog post raises three very interesting questions: How do you learn? How do you teach? What are you learning or teaching? Let me try to answer them.

2011

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 7 of 31
What are Different Types of Locks? What are Pessimistic Lock and Optimistic Lock? When is the use of UPDATE_STATISTICS command? What is the Difference between a HAVING clause and a WHERE clause? What is Connection Pooling and why is it Used? What are the Properties and Different Types of Sub-Queries? What are the Authentication Modes in SQL Server? How can it be Changed?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 8 of 31
Which Command using Query Analyzer will give you the Version of SQL Server and Operating System? What is an SQL Server Agent? Can a Stored Procedure call itself, i.e. a Recursive Stored Procedure? How many levels of SP nesting are possible? What is Log Shipping? Name 3 ways to get an Accurate Count of the Number of Records in a Table. What does it mean to have QUOTED_IDENTIFIER ON? What are the Implications of having it OFF? What is the Difference between a Local and a Global Temporary Table? What is the STUFF Function and How Does it Differ from the REPLACE Function? What is PRIMARY KEY? What is UNIQUE KEY Constraint? What is FOREIGN KEY?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 9 of 31
What is CHECK Constraint? What is NOT NULL Constraint? What is the difference between UNION and UNION ALL? What is B-Tree? How to get @@ERROR and @@ROWCOUNT at the Same Time? What is a Scheduled Job or a Scheduled Task? What are the Advantages of Using Stored Procedures? What is a Table Called if it has neither a Clustered nor a Non-clustered Index? What is it Used for? Can SQL Server be Linked to other Servers like Oracle? What is BCP? When is it Used?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 10 of 31
What Command do we Use to Rename a Database, a Table and a Column? What are sp_configure Commands and SET Commands? How to Implement One-to-One, One-to-Many and Many-to-Many Relationships while Designing Tables? What is the Difference between Commit and Rollback when Used in Transactions? What is an Execution Plan? When would you Use it? How would you View the Execution Plan?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 11 of 31
What is the Difference between Table Aliases and Column Aliases? Do they Affect Performance? What is the difference between CHAR and VARCHAR Datatypes? What is the Difference between VARCHAR and VARCHAR(MAX) Datatypes? What is the Difference between VARCHAR and NVARCHAR Datatypes? Which are the Important Points to Note when Multilanguage Data is Stored in a Table? How to Optimize a Stored Procedure? What is SQL Injection? How to Protect Against SQL Injection Attacks? How to Find Out the List of Schema Names and Table Names for the Database? What is the CHECKPOINT Process in SQL Server?

SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 12 of 31
How does Using a Separate Hard Drive for Several Database Objects Improve Performance Right Away? How to Find the List of Fixed Hard Drives and Free Space on the Server? Why can there be only one Clustered Index and not more than one? What is the Difference between Line Feed (\n) and Carriage Return (\r)? Is It Possible to have a Clustered Index on a Separate Drive From the Original Table Location? What is a Hint? How to Delete Duplicate Rows? Why does the Trigger Fire Multiple Times in a Single Login?

2012

CTRL+SHIFT+] Shortcut to Select Code Between Two Parentheses
The shortcut key is CTRL+SHIFT+]. It can be very useful when dealing with multiple subqueries, CTEs or a query with multiple parentheses: when exercised, this shortcut selects the T-SQL code between two parentheses.

Monday Morning Puzzle – Query Returns Results Sometimes but Not Always
I am a beginner with SQL Server. I have one query; sometimes it returns a result and sometimes it does not. Where should I start looking for a solution, and what kind of information should I send you so you can help me solve this? I have no clue, please guide me.

Remove Debug Button in SSMS – SQL in Sixty Seconds #020 – Video

Effect of Case Sensitive Collation on Resultset
Collation is a very interesting concept but I quite often see it heavily neglected. I have seen developers and DBAs looking for a workaround to fix a collation error rather than understanding the side effects of the workaround.

Switch Between Two Parentheses using Shortcut CTRL+]
Earlier this week I wrote a blog post about the CTRL+SHIFT+] shortcut to select code between two parentheses, and I received quite a lot of positive feedback from readers. If you are a regular reader of the blog, you must be aware that I appreciate the learning shared by readers.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
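The self join question teased in the 2010 section above is worth seeing in code. Here is a minimal sketch of the point; the Employee table, its columns and the sample rows are hypothetical, invented purely for illustration:

-- Hypothetical self-referencing table: each row points at its manager's row.
CREATE TABLE Employee (
    EmployeeID INT PRIMARY KEY,
    Name       VARCHAR(50) NOT NULL,
    ManagerID  INT NULL    -- NULL for the top of the hierarchy
);

INSERT INTO Employee (EmployeeID, Name, ManagerID)
VALUES (1, 'Asha', NULL), (2, 'Ben', 1), (3, 'Chitra', 1);

-- Self join written as an INNER join: only employees who have a manager
-- are returned, so Asha drops out of the results.
SELECT e.Name AS Employee, m.Name AS Manager
FROM Employee AS e
INNER JOIN Employee AS m ON e.ManagerID = m.EmployeeID;

-- Self join written as an OUTER join: every employee is returned and
-- Asha shows a NULL manager.
SELECT e.Name AS Employee, m.Name AS Manager
FROM Employee AS e
LEFT OUTER JOIN Employee AS m ON e.ManagerID = m.EmployeeID;

This is exactly the trap in the interview question: 'self join' describes joining a table to itself, not a join type, so it can be written as an inner, outer or even cross join depending on the rows you want back.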

    Read the article
