
  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction
    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them.
    Example dataflow
    In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input and all produce the same output, but which each operate slightly differently by way of having different transformation components.
    I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this (it is intuitive to assume that more components means more work) however it is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and that the number of components is actually one of the less important ones; having said that, I have never proven that assertion, and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!
    The Setup
    I have a 2GB datafile which is a list of 4,731,904 (~4.7 million) customer records with various attributes against them, and it contains two columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. Each of the 24 resulting datapaths will feed into a Row Count component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad-core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive, running SQL Server 2008 R2 on Windows 7.
    The Variables
    Here are the three combinations of components that I am going to test:
    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that uses expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use:
    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000", "GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category, CSPL Split by Month of Birth and Income Category.
    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs then goes to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category, CSPL Split by Month of Birth 1 & 2.
    Each of these combinations will provide an input to one of the 24 Row Count components, just the same as before. For illustration, here is a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:
      - The dataflow containing just one Conditional Split (i.e. #1) will be quicker
      - There is no significant difference between any of them
      - One of the two dataflows containing multiple transformation components will be quicker
    Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.
    The Results and Analysis
    The table below shows all of the executions, 10 for each dataflow, along with the average for each and a standard deviation. All durations are in seconds. (I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table.) It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.
    The Explanation
    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of month of birth and income category, 24 in total. These expressions get evaluated in the order that they appear, hence if we assume that month of birth and income category are uniformly distributed in the dataset we can deduce that the expected number of expression evaluations for each row is (1 + 24) / 2 = 12.5, i.e. the minimum plus the maximum, divided by two.
    Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before; only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row; that would account for the slightly longer average execution time for this dataflow.
    Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 and CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5.
    14.5 is still more than 12.5 and 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for income category and one for month of birth; the expressions in the Conditional Splits of the third dataflow only have one predicate each, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:
      MONTH(BirthDate) == 1 && YearlyIncome <= 50000
    and
      MONTH(BirthDate) == 1
    In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant number of extra CPU cycles; that's where our duration difference comes from.
    The Wrap-up
    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember: execute using dtexec, not within BIDS.
    If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out:
      - Inequality joins, Asynchronous transformations and Lookups
      - Destination Adapter Comparison
      - Don't turn the dataflow into a cursor
      - SSIS Dataflow – Designing for performance (webinar)
    Any comments? Let me know! @Jamiet
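    To make the expected-evaluation arithmetic above concrete, here is a short C# sketch (mine, not part of Jamie's package) that models predicate evaluations per row for each of the three designs, assuming uniformly distributed months and income categories and top-down evaluation of the split cases. Note it counts only the one month split a given row actually flows through, and it ignores short-circuiting of the && operator, so treat the numbers as a rough model rather than a measurement.

        using System;

        class ExpressionCostModel
        {
            static void Main()
            {
                Random rng = new Random(42);
                const int rows = 100000; //a sample, not the full 4.7 million
                long d1 = 0, d2 = 0, d3 = 0; //total predicate evaluations per design

                for (int i = 0; i < rows; i++)
                {
                    int month = rng.Next(1, 13); //month of birth, uniform 1..12
                    bool low = rng.Next(2) == 0; //income category, 50/50

                    //position of the matching case when cases are tested top-down
                    int pos24 = (low ? 0 : 12) + month; //among 24 two-predicate cases
                    int pos12 = month;                  //among 12 one-predicate cases
                    int pos2 = low ? 1 : 2;             //among 2 one-predicate cases

                    d1 += pos24 * 2;     //design 1: 24 cases, 2 predicates each
                    d2 += 1 + pos24 * 2; //design 2: 1 derived-column expression + the same 24 cases
                    d3 += pos2 + pos12;  //design 3: income split, then ONE month split
                }

                Console.WriteLine("avg predicate evaluations per row: one split={0:F1}, derived+split={1:F1}, three splits={2:F1}",
                    d1 / (double)rows, d2 / (double)rows, d3 / (double)rows);
                //prints roughly: one split=25.0, derived+split=26.0, three splits=8.0
            }
        }

    Under this model the three-split design evaluates roughly a third of the predicates per row that the other two do, which is consistent with the duration differences reported above.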


  • So… What is a SharePoint Developer?

    - by Mark Rackley
    A few days ago Stacy Draper and I were chatting about what it means to be a SharePoint Developer. That actually turns out to be a conversation with lots of shades of grey. Stacy thought it would make a good blog post... well, I can't promise this to be a GOOD blog post... So, anyway, I decided to let off a little bomb this morning by posting the following tweet on Twitter:
    @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD?
    Now, I knew this was a debate that has been going on since the first SharePoint Designer user put SharePoint Developer on their resume. There are probably several blogs out there on the subject, but with the wildfire that is jQuery and a few other new features out there, I believe it is an important subject to tackle again. I got a lot of great feedback on Twitter as well; the entire Twitter conversation is at the end of this blog posting. Thanks everyone for your opinions.
    Who cares? Why does it matter? Can't we all just get along?
    Yes it matters... everything must be labeled and put in its proper place. Pigeonholing is the only way to go! Just kidding... I'm not nearly that anal, but yes! It is important to be able to properly identify the skill set of the people on your team and correctly identify the role you want to hire. Saying you are a "SharePoint Developer" is just too vague and barely begins to answer the question. Also, knowing who's on your team and what they can do will ensure you give your clients the best people for the job.
    A Developer writes code right? So, a Developer uses Visual Studio!
    Whoa, hold on there Sparky. Even if I concede that to be a developer you have to write code, you still can't say a SharePoint Developer has to use Visual Studio. So, you can spell C#; how well can you write XSLT? How's your jQuery? Sorry bud, that's code whether you like it or not. There are many ways to write code in SharePoint that have nothing to do with cracking open Visual Studio.
    So, what are the different ways to develop in SharePoint then?
    How many different ways can you "develop" in SharePoint?? A lot...
    Out of the box features
    In SharePoint you can create a site, create a custom list on that site, do basic calculations in a calculated column, set up alerts, and add all sorts of web parts to a page. Let's face it... that IS development!
    JavaScript/jQuery
    Perhaps you've heard by now about this thing called jQuery? It's all over the place and the answer to a lot of people's prayers. However, be careful: with great power comes great responsibility. Remember, JavaScript is executed on the client side, and if you abuse it your performance could be affected. Also, Marc Anderson (@sympmarc) wrote a pretty awesome JavaScript library called SPServices. This allows you to access SharePoint's Web Services using jQuery. How freakin cool is that? With these tools at your disposal, the number of things you CAN'T do without Visual Studio grows smaller and smaller. This is definitely development no matter what anyone else says, and there is no Visual Studio involved.
    SharePoint Designer
    Ahhh... the cause of, and the answer to, all of your SharePoint development problems. With SharePoint Designer you can use DataView Web Parts, develop (there's that word again) your branding, and even connect to external data sources. There's a lot you can do in SharePoint Designer. It's got its shortcomings, but it is an invaluable tool in the SharePoint developer's toolbox.
    InfoPath
    So, can InfoPath development really be considered SharePoint development? I would say yes. You can connect to SharePoint lists, populate fields in a SharePoint list, and even write code in InfoPath. Sounds like SharePoint development to me.
    Visual Studio - Web Services/WCF
    So, get this. You can write code for SharePoint and not have a clue what the 12 hive is, what "site actions" means, or know how to do ANYTHING in SharePoint. Poppycock, you say? SharePoint Web Services, I say... With SharePoint Web Services you can totally interact with SharePoint without knowing anything about SharePoint (see the sketch at the end of this post). I don't recommend it of course, but it's possible. What can you write using SharePoint Web Services? How about a little application called SharePoint Designer?
    Visual Studio - Object Model
    And here we are finally: the SharePoint Object Model. When you hear "SharePoint Developer" most people think of someone opening Visual Studio and creating a custom web part, workflow, event receiver, etc., etc., but I hope that by now I have made the point that this is NOT the only form of SharePoint development!
    Again... Who cares? Just crack open Visual Studio for everything! Problem solved!
    Let's ponder for a moment, shall we? The business comes to you with a requirement that involves some pretty fancy business calculations and a complicated view that they do NOT want to look like SharePoint. "No problem," you proclaim, you mighty SharePoint Developer. You go back to your cube, chuckle at the latest Dilbert comic, and crack open Visual Studio. Then you build your custom web part... fight with all the deployment, migration, and UAT that you must go through, and proclaim victory two weeks later!!!! Well done my good sir/ma'am! Oh wait... it turns out Sally, who is not a "developer", did the exact same thing with a DataView Web Part and some jQuery, and it's been in production for two weeks? #CockinessFail
    I know there are many ASP.NET developers out there that can create a custom control and wrap it to be a SharePoint web part. That does NOT mean they are SharePoint Developers though, as far as I'm concerned, and I personally would much rather have someone on my team who can manipulate the heck (yes, I said 'heck') out of SharePoint using DataView Web Parts, jQuery, and a roll of duct tape. Just because you know how to write code in Visual Studio does not mean you are a SharePoint Developer.
    What's the conclusion here? How do we define 'it' and what 'it' is called?
    Fortunately, this is MY blog. I don't have to give answers; I can stir the pot, laugh, and leave you to ponder what it means! There is obviously no right or wrong answer here (unless you disagree with me, then you are flat out wrong). Anyway, there are many opinions. Here's mine: if you put SharePoint Developer on your resume, make sure to clearly specify HOW you develop in SharePoint and what tools you use. If we must label these gurus of jQuery and SPD, how about "SharePoint Client Developer" or "SharePoint Front End Developer"? Just throwing out an idea. Whatever we call them, to say they are not developers is short-sighted, arrogant, and unfair. Of course, then we need to figure out what to call all those other SharePoint development types.
    Twitter Conversation
    @next_connect: RT @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD? | I say no....
    @mikegil: @mrackley re: yr Developer question: SPD expert <> SP Developer. Can be "sous-developer," though. #SharePoint #SPD
    @WonderLaura: Rt @mrackley Can someone be considered a SharePoint Dev if all they know how to do is work in SPD? -- My opinion is that devs write code.
    @exnav29: Rt @mrackley Can someone be considered a SharePoint Dev if all they know how to do is work in SPD? => I think devs would use VS as well
    @ssKevin: @WonderLaura @mrackley does that mean strictly vb and c# when it comes to #SharePoint ?
    @jimmywim: @exnav29 @mrackley nah, I'd say they were a power user. Devs know their way around the 12 hive ;)
    @sympmarc: RT @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD? -> Fighting words.
    @sympmarc: @next_connect @mrackley Besides, we prefer to be called "hacks". ;+)
    @next_connect: @sympmarc The important thing is that you don't have to develop code to solve problems and create solutions. @mrackley
    @mrackley: @sympmarc @next_connect not tryin to pick fight.. just try and find consensus on definition
    @usher: @mrackley I'd still argue that you have a DevLite title that's out there for the collaboration engineers (@sympmarc @next_connect)
    @next_connect: @usher I agree. I've called it Light Dev/ Configuration before. @sympmarc @mrackley
    @usher: @next_connect I like DevLite, low calorie but still same great taste :) @mrackley @sympmarc
    @mrackley: @next_connect @usher @sympmarc I don't think there's any "lite" to someone who can bend jQuery and XSLT to their will.
    @usher: @mrackley okay, so would you refer to someone that writes user controls and assemblies something different (@next_connect @sympmarc)
    @usher: @mrackley when looking for a developer that can write .net code, it's a bit different than an XSLT/jQuery designer. @sympmarc @next_connect
    @jimmywim: @mrackley @sympmarc @next_connect I reckon a "dev" does managed code and works in the 12 hive
    @sympmarc: @jimmywim @mrackley @next_connect We had a similar debate a few days ago @toddbleeker et al
    @sympmarc: @sympmarc @jimmywim @mrackley @next_connect @toddbleeker @stevenmfowler More abt my Middle Tier term, but still connected. Meet bus need.
    @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect I used "No Assembly Required" in the past. I also suggested "Supplimenting the SharePoint DOM"
    @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect Others suggested Information Worker Solutions/Enhancements
    @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler I also like "SharePoint Scripting Solutions". All the technologies are script.
    @jimmywim: @toddbleeker @sympmarc @mrackley @next_connect I like the IW solutions one...
    @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler This is like the debate that never ends: it is definitely not called Middle Tier.
    @jimmywim: @toddbleeker @sympmarc @mrackley @next_connect @stevenmfowler "Scripting" these days makes me think PowerShell...
    @sympmarc: @toddbleeker @jimmywim @mrackley @next_connect @stevenmfowler If it forces a debate on h2 best solve bus probs, I'll keep sayin Middle Tier.
    @usher: @sympmarc so we know what we're looking for, we just can't define a name? @toddbleeker @jimmywim @mrackley @next_connect @stevemfowler
    @sympmarc: @usher @sympmarc @toddbleeker @jimmywim @mrackley @next_connect @stevemfowler The naming seems to matter more than the substance. :-(
    @jimmywim: @sympmarc @usher @toddbleeker @mrackley @next_connect @stevemfowler work brkdn defines tasks, defines tools needed, can then b grp'd by user
    @WonderLaura: @mrackley @toddbleeker @jimmywim @sympmarc @usher @next_connect Funny you're asking. @johnrossjr and I spent hours this week on the subject.
    @stevenmfowler: RT @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler it is definitely not called Middle Tier. < I'm with Todd
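    As a footnote to the Web Services option above, here is a minimal C# sketch (mine, not Mark's) of what "interacting with SharePoint without knowing anything about SharePoint" can look like: a raw SOAP call to the documented Lists.asmx GetListItems method. The site URL and list name are made up for illustration.

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class ListsWebServiceDemo
        {
            static void Main()
            {
                string siteUrl = "http://intranet/sites/demo"; //hypothetical site

                //SOAP envelope for GetListItems; "Announcements" is a hypothetical list
                string soap =
                    "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                    "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                    "<soap:Body>" +
                    "<GetListItems xmlns=\"http://schemas.microsoft.com/sharepoint/soap/\">" +
                    "<listName>Announcements</listName>" +
                    "</GetListItems>" +
                    "</soap:Body></soap:Envelope>";

                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(siteUrl + "/_vti_bin/Lists.asmx");
                request.Method = "POST";
                request.ContentType = "text/xml; charset=utf-8";
                request.Headers.Add("SOAPAction", "\"http://schemas.microsoft.com/sharepoint/soap/GetListItems\"");
                request.Credentials = CredentialCache.DefaultCredentials; //current Windows identity

                byte[] body = Encoding.UTF8.GetBytes(soap);
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                {
                    s.Write(body, 0, body.Length);
                }

                using (WebResponse response = request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd()); //raw XML rows from the list
                }
            }
        }

    No object model, no 12 hive, not even a SharePoint assembly reference; which is rather the point of the argument above.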


  • SQL Cruise Alaska 2011

    - by Grant Fritchey
    I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't know what this is: SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics, as well as network with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant: between classes, at night on the cruise ship, out on excursions in Alaskan rainforests, and while snorkeling in ocean waters. Here's a rundown of the experience from my point of view.
    Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what was useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the Cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts.
    Cruisers "in" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen
    Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out, and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcolm (twitter), and Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number, oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an iPad 2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below; note my special SQL Server Central Friday shirt. Thanks Steve (blog|twitter)).
    Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just lucky: Grant Fritchey
    Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months), both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy). Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically, the session was on Tim's Periodic Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day-to-day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information.
    Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun". It was all about the different transaction isolation levels and how they work. There is so often confusion in this area, and Kendra does a great job in clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather, and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking.
    Wednesday was an all-day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done, we went back to the boat and had a quiet dinner on our own.
    Thursday we cruised through Glacier Bay and saw at least four different glaciers, including sitting next to the Margerie Glacier for about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn.
    Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford
    Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm-thick wet suits (an adventure all by itself) so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well, and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well.
    Saturday we were travelling, so there was time for a bunch of classes. Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this.
    Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me, and of Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people, and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.


  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I'm excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer. Today's improvements include: a new low-cost shared mode scaling option; support for custom domains with shared and reserved mode web-sites using both CNAMEs and A-records (the latter enabling naked domains); continuous deployment support using both CodePlex and GitHub; and FastCGI extensibility. All of these improvements are now live in production and available to start using immediately.
    New "Shared" Scaling Tier
    Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month). All of the capabilities we introduced in June with this free tier remain the same with today's update. Starting with today's release, you can now elastically scale up your web-site beyond this capability using a new low-cost "shared" option (which we are introducing today) as well as using a "reserved instance" option (which we've supported since June). Scaling to either of these modes is easy. Simply click on the "scale" tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the "save" button. Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed.
    Below are some more details on the new "shared" option, as well as the existing "reserved" option:
    Shared Mode
    With today's release we are introducing a new low-cost "shared" scaling mode for Windows Azure Web Sites. A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment. Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve. The first 5GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard "pay as you go" Windows Azure outbound bandwidth rate for outbound bandwidth above 5GB.
    A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it. The new A-record support we are introducing with today's release provides the ability for you to support "naked domains" with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com). We will also in the future enable SNI-based SSL as a built-in feature with shared mode web-sites (this functionality isn't supported with today's release but will be coming later this year to both the shared and reserved tiers).
    You pay for a shared mode web-site using the standard "pay as you go" model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled). A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).
    Reserved Instance Mode
    In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode. When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it). You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium-sized VMs, etc.). Scaling up or down is easy: just select the "reserved" instance VM within the "scale" tab of the Windows Azure Portal, choose the VM size you want and the number of instances of it you want to run, and then click save. Changes take effect in seconds.
    Unlike shared mode, there is no per-site cost when running in reserved mode. Instead you pay only for the reserved instance VMs you use, and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM, or 100 web-sites within it, for the same cost). Reserved instance VMs start at 8 cents/hr for a small reserved VM.
    Elastic Scale-up/down
    Windows Azure Web Sites allows you to scale up or down your capacity within seconds. This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to, without having to change any code or redeploy your application. If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier, all within seconds and without having to change code, redeploy, or adjust DNS mappings. You can also use the "Dashboard" view within the Windows Azure Portal to easily monitor your site's load in real time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage).
    Because of Windows Azure's "pay as you go" pricing model, you only pay for the compute capacity you use in a given hour. So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode. There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate. This makes it super flexible and cost effective.
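    Before moving on, a quick sanity check on that arithmetic: a minimal C# sketch (mine, using only the rates quoted above: 1.3 cents/hr shared, 8 cents/hr small reserved, and a 30-day month) comparing a month spent entirely in shared mode with one that bursts to reserved mode for a busy weekend.

        using System;

        class AzureCostSketch
        {
            static void Main()
            {
                const double sharedPerHour = 0.013;  //$ per hour, preview shared rate
                const double reservedPerHour = 0.08; //$ per hour, small reserved instance

                int monthHours = 30 * 24;  //720 hours in a 30-day month
                int weekendHours = 2 * 24; //the busy weekend spent in reserved mode

                double allShared = monthHours * sharedPerHour;
                double mixed = (monthHours - weekendHours) * sharedPerHour
                             + weekendHours * reservedPerHour;

                Console.WriteLine("All shared:       ${0:F2}/month", allShared); //$9.36
                Console.WriteLine("Shared + weekend: ${0:F2}/month", mixed);     //$12.58
            }
        }

    The weekend burst adds only a few dollars for that month, which is the "additional pennies/hr" point made above.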
    Improved Custom Domain Support
    Web sites running in either "shared" or "reserved" mode support the ability to associate custom host names to them (e.g. www.mysitename.com). You can associate multiple custom domains to each Windows Azure Web Site. With today's release we are introducing support for A-records (a big ask by many users). With the A-record support, you can now associate 'naked' domains to your Windows Azure Web Sites, meaning instead of having to use www.mysitename.com you can just have mysitename.com (with no sub-name prefix). Because you can map multiple domains to a single site, you can optionally enable both a www and a naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems). We've also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today's release. Clicking the "Manage Domains" button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them. As part of this update we've also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.
    Continuous Deployment Support with Git and CodePlex or GitHub
    One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git. This provides a really powerful way to manage your application deployments using source control, and it is really easy to enable from a website's dashboard page. The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then, if they are successful, automatically publish to Azure. With today's release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub. This support is enabled with all web-sites (including those using the "free" scaling mode).
    Starting today, when you choose the "Set up Git publishing" link on a website's "Dashboard" page, you'll see two additional options show up when Git-based publishing is enabled for the web-site. You can click on either the "Deploy from my CodePlex project" link or the "Deploy from my GitHub project" link to walk through a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs. This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically. The two videos below walk through how easy it is to enable this workflow and deploy both an initial app and then a change to it:
      - Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes)
      - Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes)
    This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today's release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.
    Support for multiple branches
    Previously, we only supported deploying from the Git 'master' branch. Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario, both with standalone Git-based projects and with ones linked to CodePlex or GitHub. This enables a variety of useful scenarios. For example, you can now have two web-sites, a "live" and a "staging" version, both linked to the same repository on CodePlex or GitHub. You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch. This enables a really clean way to enable final testing of your site before it goes live. This 1-minute video demonstrates how to configure which branch to use with a web-site.
    Summary
    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. We'll have even more new features and enhancements coming in the weeks ahead, including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye out on my blog for details as these new features become available.
    Hope this helps,
    Scott
    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Applications: The mathematics of movement, Part 1

    - by TechTwaddle
    Before you continue reading this post, a suggestion: if you haven't read "Programming Windows Phone 7 Series" by Charles Petzold, go read it. Now. If you find 150+ pages a little too long, at least go through Chapter 5, Principles of Movement, especially the section "A Brief Review of Vectors". This post is largely inspired by that chapter. At this point I assume you know what vectors are, how they are represented using the pair (x, y), what a unit vector is, and, given a vector, how you would normalize the vector to get a unit vector.
    Our task in this post is simple: a marble is drawn at a point on the screen, the user clicks at a random point on the device, say (destX, destY), and our program makes the marble move towards that point and stop when it is reached. The tricky part of this task is the word "towards"; it adds a direction to our problem. Making a marble bounce around the screen is simple: all you have to do is keep incrementing the X and Y co-ordinates by a certain amount and handle the boundary conditions. Here, however, we need to find out exactly how to increment the X and Y values so that the marble appears to move towards the point where the user clicked. And this is where vectors can be so helpful.
    The code I'll show you here is not ideal; we'll be working with C# on Windows Mobile 6.x, so there is no built-in vector class that I can use, though I could have written one and done all the math inside the class. I think that is tangential to the actual problem we are trying to solve, and it can be done pretty easily once you know what's going on behind the scenes. In other words, this is an excuse for me being lazy.
    The first approach uses the function Atan2() to solve the "towards" part of the problem. Atan2() takes a point (x, y) as input (it is called as Atan2(y, x); note that y goes first) and returns an angle in radians. What angle, you ask. Imagine a line from the origin (0, 0) to the point (x, y). The angle which Atan2 returns is the angle the positive X-axis makes with that line, measured clockwise. The figure below makes it clear; wiki has good details about Atan2(), give it a read.
    The pair (x, y) also denotes a vector: a vector whose magnitude is the length of that line, which is Sqrt(x*x + y*y), and whose direction is θ, as measured from the positive X axis clockwise. If you've read that chapter from Charles Petzold's book, this much should be clear.
    Now Sine and Cosine of the angle θ are special. Cosine(θ) divides x by the vector's length (adjacent by hypotenuse), thus giving us a unit vector along the X direction. And Sine(θ) divides y by the vector's length (opposite by hypotenuse), thus giving us a unit vector along the Y direction. Therefore the vector represented by the pair (cos(θ), sin(θ)) is the unit vector (or normalization) of the vector (x, y). This unit vector has a length of 1 (remember sin²(θ) + cos²(θ) = 1?), and a direction which is the same as vector (x, y).
    Now if I multiply this unit vector by some amount, then I will always get a point which is a certain distance away from the origin, but, more importantly, the point will always be on that line. For example, if I multiply the unit vector by the length of the line, I get the point (x, y). Thus, all we have to do to move the marble towards our destination point is to multiply the unit vector by a certain amount each time and draw the marble, and the marble will magically move towards the click point. Now time for some code.
    The application uses a timer-based frame-draw method to draw the marble on the screen. The timer is disabled initially, and whenever the user clicks on the screen the timer is enabled. The callback function for the timer follows the standard update-and-draw cycle.

        private double totLenToTravelSqrd = 0;
        private double startPosX = 0, startPosY = 0;
        private double destX = 0, destY = 0;

        private void Form1_MouseUp(object sender, MouseEventArgs e)
        {
            destX = e.X;
            destY = e.Y;

            double x = marble1.x - destX;
            double y = marble1.y - destY;

            //calculate the total length to be travelled
            totLenToTravelSqrd = x * x + y * y;

            //store the start position of the marble
            startPosX = marble1.x;
            startPosY = marble1.y;

            timer1.Enabled = true;
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            UpdatePosition();
            DrawMarble();
        }

    Form1_MouseUp() is called whenever the user touches and releases the screen. In this function we save the click point in destX and destY; this is the destination point for the marble, and we also enable the timer. We store a few more values which we will use in the UpdatePosition() method to detect when the marble has reached the destination and stop the timer: the start position of the marble and the square of the total length to be travelled. I'll leave out the term 'sqrd' when speaking of lengths from now on. The timeout interval of the timer is set to 40ms, thus giving us a frame rate of about ~25fps. In the timer callback, we update the marble position and draw the marble. We know what DrawMarble() does, so here we'll only look at how UpdatePosition() is implemented:

        private void UpdatePosition()
        {
            //the vector (x, y)
            double x = destX - marble1.x;
            double y = destY - marble1.y;

            double incrX = 0, incrY = 0;
            double distanceSqrd = 0;
            double speed = 6;

            //distance between destination and current position, before updating marble position
            distanceSqrd = x * x + y * y;

            double angle = Math.Atan2(y, x);

            //Cos and Sin give us the unit vector; 6 is the value we use to magnify the unit vector along the same direction
            incrX = speed * Math.Cos(angle);
            incrY = speed * Math.Sin(angle);

            marble1.x += incrX;
            marble1.y += incrY;

            //check for bounds
            if ((int)marble1.x < MinX + marbleWidth / 2)
            {
                marble1.x = MinX + marbleWidth / 2;
            }
            else if ((int)marble1.x > (MaxX - marbleWidth / 2))
            {
                marble1.x = MaxX - marbleWidth / 2;
            }

            if ((int)marble1.y < MinY + marbleHeight / 2)
            {
                marble1.y = MinY + marbleHeight / 2;
            }
            else if ((int)marble1.y > (MaxY - marbleHeight / 2))
            {
                marble1.y = MaxY - marbleHeight / 2;
            }

            //distance between destination and current point, after updating marble position
            x = destX - marble1.x;
            y = destY - marble1.y;
            double newDistanceSqrd = x * x + y * y;

            //length from start point to current marble position
            x = startPosX - (marble1.x);
            y = startPosY - (marble1.y);
            double lenTraveledSqrd = x * x + y * y;

            //check for end conditions
            if ((int)lenTraveledSqrd >= (int)totLenToTravelSqrd)
            {
                System.Console.WriteLine("Stopping because destination reached");
                timer1.Enabled = false;
            }
            else if (Math.Abs((int)distanceSqrd - (int)newDistanceSqrd) < 4)
            {
                System.Console.WriteLine("Stopping because no change in Old and New position");
                timer1.Enabled = false;
            }
        }

    OK, so in this function, first we subtract the current marble position from the destination point to give us a vector. The first three lines of the function construct this vector (x, y). The vector (x, y) has the same length as the line from (marble1.x, marble1.y) to (destX, destY) and points from (marble1.x, marble1.y) towards (destX, destY). Note that marble1.x and marble1.y denote the center point of the marble. Then we use Atan2() to get the angle which this vector makes with the positive X axis, and use Cosine() and Sine() of that angle to get the unit vector along that same direction. We multiply this unit vector by 6 to get the values by which the position of the marble should be incremented. This variable, speed, can be experimented with and determines how fast the marble moves towards the destination. After this, we check for bounds to make sure that the marble stays within the screen limits, and finally we check for the end condition and stop the timer.
    The end condition has two parts to it. The first case is the normal case, where the user clicks well inside the screen. Here, we stop when the total length travelled by the marble is greater than or equal to the total length to be travelled. Simple enough. The second case is when the user clicks on the very corners of the screen. Like I said before, the values marble1.x and marble1.y denote the center point of the marble. When the user clicks on the corner, the marble moves towards the point, and after some time tries to go outside of the screen; this is when the bounds checking comes into play and corrects the marble position so that the marble stays inside the screen. In this case the marble will never travel a distance of totLenToTravelSqrd, because of the correction to its position. So here we detect the end condition when there is not much change in the marble's position. I use the value 4 in the second condition above; after experimenting with a few values, 4 seemed to work okay.
    There is a small thing missing in the code above. In the normal case, case 1, when the update method runs for the last time, the marble position overshoots the destination point. This happens because the position is incremented in steps (which are not small enough), so in this case too we should have corrected the marble position, so that the center point of the marble sits exactly on top of the destination point. I'll add this later and update the post. This has been a pretty long post already, so I'll leave you with a video of how this program looks while running. Notice in the video that the marble moves like a bot, without any grace whatsoever. And that is because the speed of the marble is fixed at 6. In the next post we will see how to make the marble move a little more elegantly. And also, if Atan2(), Sine() and Cosine() are a little too much to digest, we'll see how to achieve the same effect without using them, in the post after next maybe. Ciao!
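    As a teaser for that trig-free follow-up, here is a minimal sketch of my own (not part of the original post) showing how the increment computation in UpdatePosition() could drop Atan2(), Cos() and Sin() entirely by normalizing the vector directly. It assumes the same marble1, destX and destY fields used above.

        //the same increment computation without trig: divide the vector
        //by its own length to get the unit vector directly
        private void UpdatePositionNoTrig()
        {
            double speed = 6;
            double x = destX - marble1.x;
            double y = destY - marble1.y;

            //length of the vector (x, y)
            double length = Math.Sqrt(x * x + y * y);

            if (length > 0) //avoid dividing by zero when the marble is already there
            {
                //(x / length, y / length) is the unit vector towards the destination,
                //exactly the pair that (Cos(angle), Sin(angle)) produced above
                marble1.x += speed * x / length;
                marble1.y += speed * y / length;
            }
        }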


  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Normal 0 false false false EN-ZA X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Some four months ago my car started acting up. Symptoms included a sputtering as my car’s computer switched between gears intermittently. Imagine building up speed, then when you reach 80km/h the car magically and mysteriously decide to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain “These cars are quite intelligent, you know. When they sense something is wrong they run in a restrictive program which probably account for how you managed to drive here in the first place...”  I was surprised and thought this was certainly going to be an interesting test drive. The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said “Ok, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn will decide which gear to switch to. The restrictive program avoid these sensors altogether and allow the computer to obtain its input from other [non-affected] sources”. Then, as soon as he forced the speed sensor to come back online the symptoms and ill behaviour re-emerged... What an incredible analogy for getting into a discussion on unit testing software? Besides I should probably put my ill fortune to some good use, right? This example provide a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily and unfortunately often the most overlooked goal of writing unit tests by those new to the art and those who oppose it alike - The goal of writing unit tests is to test the behaviour of our code under predefined conditions. Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method in practise will soon prove to be an exhausting and ultimately fruitless exercise given the certain and ever changing nature of business requirements. Consequently it is true and quite possible whilst conducting proper unit tests, to call any single method several times as you examine and contemplate different scenarios. Let’s write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests. Truly you can use whatever you’re comfortable with. First we’ll create an ISpeedSensor interface. This is to represent the speed sensor located in the gearbox.  Then we’ll create a Gearbox class which we’ll pass to a constructor when we instantiate an object of type Computer. 
All three are described below.   ISpeedSensor.cs namespace AutomaticVehicle {     public interface ISpeedSensor     {         int ReportCurrentSpeed();     } }   Gearbox.cs namespace AutomaticVehicle {      public class Gearbox     {         private ISpeedSensor _speedSensor;           public Gearbox( ISpeedSensor gearboxSpeedSensor )         {             _speedSensor = gearboxSpeedSensor;         }         /// <summary>         /// This method obtain it's reading from the speed sensor.         /// </summary>         /// <returns></returns>         public int ReportCurrentSpeed()         {             return _speedSensor.ReportCurrentSpeed();         }     } } Computer.cs namespace AutomaticVehicle {     public class Computer     {         private Gearbox _gearbox;         public Computer( Gearbox gearbox )         {                     }          public int GetCurrentSpeed()         {             return _gearbox.ReportCurrentSpeed( );         }     } } Since this post is about Unit testing, that is exactly what we’ll create next. Create a second project in your solution. I called mine AutomaticVehicleTests and I immediately referenced the respective nunit, moq and AutomaticVehicle dll’s. We’re going to write a test to examine what happens inside the Computer class. ComputerTests.cs namespace AutomaticVehicleTests {     [TestFixture]     public class ComputerTests     {         [Test]         public void Computer_Gearbox_SpeedSensor_DoesThrow()         {             // Mock ISpeedSensor in gearbox             Mock< ISpeedSensor > speedSensor = new Mock< ISpeedSensor >( );             speedSensor.Setup( n => n.ReportCurrentSpeed() ).Throws<Exception>();             Gearbox gearbox = new Gearbox( speedSensor.Object );               // Create Computer instance to test it's behaviour  towards an exception in gearbox             Computer carComputer = new Computer( gearbox );             // For simplicity let’s assume for now the car only travels at 60 km/h.             Assert.AreEqual( 60, carComputer.GetCurrentSpeed( ) );          }     } }   What is happening in this test? We have created a mocked object using the ISpeedsensor interface which we've passed to our Gearbox object. Notice that I created the mocked object using an interface, not the implementation. I’ll talk more about this in future posts but in short I do this to accentuate the fact that I'm not not really concerned with how SpeedSensor work internally at this particular point in time. Next I’ve gone ahead and created a scenario where I’ve declared the speed sensor in Gearbox to be faulty by forcing it to throw an exception should we ask Gearbox to report on its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitability things break, whether it’s caused by mechanical failure, some logical error on your part or a fellow developer which didn’t consult the documentation (or the lack thereof ) - whether you’re calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It’s a scenario I’ve created and this test is about how the code within the Computer instance will behave towards any such error as I’ve depicted. Now, if you’ve followed closely in my final assert method you would have noticed I did something quite unexpected. I might be getting ahead of myself now but I’m testing to see if the value returned is equal to what I expect it to be under perfect conditions – I’m not testing to see if an error has been thrown! 
    Why is that? Well, in short, this is TDD. Test Driven Development is about first writing your test to define the result we want, then going back and changing the implementation within your class to obtain the desired output (I need to make sure I can drive back to the repair shop, remember?). So let’s go ahead and run our test as is. It fails miserably... Good! Let’s go back to our Computer class and make a small change to the GetCurrentSpeed method.

    Computer.cs

    public int GetCurrentSpeed()
    {
        try
        {
            return _gearbox.ReportCurrentSpeed();
        }
        catch
        {
            // RunRestrictiveProgram is assumed here to return the fallback speed
            // (60 km/h in our test); the original snippet did not return a value
            // on this path, which would not compile.
            return RunRestrictiveProgram();
        }
    }

    This is a simple solution, I know, but it does provide a way to allow for different behaviour. You’re more than welcome to provide an implementation for RunRestrictiveProgram should you feel the need to. It's not within the scope of this post or related to the point I'm trying to make. What is important is to notice how the focus has shifted in our approach from how things can break - to how things behave when broken.

    Happy coding!

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them.

    The recruiting problem

    I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash").

    The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you.

    But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software.

    The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem.
    I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years.

    So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners.

    The contractor problem

    I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

    The education problem

    I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn.

    The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers.
    I'm hell bent on passing that experience to others.

    Process issues

    If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff.

    The prognosis

    I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • 12c - Invisible Columns...

    - by noreply(at)blogger.com (Thomas Kyte)
    Remember when 11g first came out and we had "invisible indexes"?  It seemed like a confusing feature - indexes that would be maintained by modifications (hence slowing them down), but would not be used by queries (hence never speeding them up).  But - after you looked at them a while, you could see how they can be useful.  For example - to add an index in a running production system, an index used by the next version of the code to be introduced later that week - but not tested against the queries in version one of the application in place now.  We all know that when you add an index - one of three things can happen - a given query will go much faster, it won't affect a given query at all, or... it will make some untested query go much much slower than it used to.  So - invisible indexes allowed us to modify the schema in a 'safe' manner - hiding the change until we were ready for it.

    Invisible columns accomplish the same thing - the ability to introduce a change while minimizing any negative side effects of that change.  Normally when you add a column to a table - any program with a SELECT * would start seeing that column, and programs with an INSERT INTO T VALUES (...) would pretty much immediately break (an INSERT without a list of columns in it).  Now we can add a column to a table in an invisible fashion: the column will not show up in a DESCRIBE command in SQL*Plus, it will not be returned with a SELECT *, it will not be considered in an INSERT INTO T VALUES statement.  It can be accessed by any query that asks for it, it can be populated by an INSERT statement that references it, but you won't see it otherwise.

    For example, let's start with a simple two column table:

    ops$tkyte%ORA12CR1> create table t
      2  ( x int,
      3    y int
      4  )
      5  /
    Table created.

    ops$tkyte%ORA12CR1> insert into t values ( 1, 2 );
    1 row created.

    Now, we will add an invisible column to it:

    ops$tkyte%ORA12CR1> alter table t add ( z int INVISIBLE );
    Table altered.

    Notice that a DESCRIBE will not show us this column:

    ops$tkyte%ORA12CR1> desc t
     Name              Null?    Type
     ----------------- -------- ------------
     X                          NUMBER(38)
     Y                          NUMBER(38)

    and existing inserts are unaffected by it:

    ops$tkyte%ORA12CR1> insert into t values ( 3, 4 );
    1 row created.

    A SELECT * won't see it either:

    ops$tkyte%ORA12CR1> select * from t;

             X          Y
    ---------- ----------
             1          2
             3          4

    But we have full access to it (in well-written programs! The ones that use a column list in the insert and select, never relying on "defaults"):

    ops$tkyte%ORA12CR1> insert into t (x,y,z) values ( 5,6,7 );
    1 row created.

    ops$tkyte%ORA12CR1> select x, y, z from t;

             X          Y          Z
    ---------- ---------- ----------
             1          2
             3          4
             5          6          7

    and when we are sure that we are ready to go with this column, we can just modify it:

    ops$tkyte%ORA12CR1> alter table t modify z visible;
    Table altered.

    ops$tkyte%ORA12CR1> select * from t;

             X          Y          Z
    ---------- ---------- ----------
             1          2
             3          4
             5          6          7

    I will say that a better approach to this - one that is available in 11gR2 and above - would be to use editioning views (part of Edition Based Redefinition - EBR).
    I would rather use EBR over this approach, but in an environment where EBR is not being used, or the editioning views are not in place, this will achieve much the same.

    Read these for information on EBR:
    http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o10asktom-172777.html
    http://www.oracle.com/technetwork/issue-archive/2010/10-mar/o20asktom-098897.html
    http://www.oracle.com/technetwork/issue-archive/2010/10-may/o30asktom-082672.html

    Read the article

  • This program runs but not correctly; the numbers aren't right.

    - by user320950
    This program runs, but not correctly; the numbers aren't right. I read numbers from a file, and when I use them in the program they are not right. Here is a brief description of what I am trying to do; can someone tell me if something doesn't look right? This is what I have to do: write a program that determines the grade dispersal for 100 students. You are to read the exam scores into three arrays, one array for each exam. You must then calculate how many students scored A’s (90 or above), B’s (80 or above), C’s (70 or above), D’s (60 or above), and F’s (less than 60). Do this for each exam and write the distribution to the screen.

    // basic file operations
    #include <iostream>
    #include <fstream>
    using namespace std;

    int read_file_in_array(double exam[100][3]);
    double calculate_total(double exam1[], double exam2[], double exam3[]); // function that calculates grades to see how many 90,80,70,60
    //void display_totals();
    double exam[100][3];

    int main()
    {
        double go,go2,go3;
        double exam[100][3],exam1[100],exam2[100],exam3[100];
        go=read_file_in_array(exam);
        go2=calculate_total(exam1,exam2,exam3);
        //go3=display_totals();
        cout << go,go2,go3;
        return 0;
    }

    /*
    int display_totals()
    {
        int grade_total;
        grade_total=calculate_total(exam1,exam2,exam3);
        return 0;
    }
    */

    double calculate_total(double exam1[],double exam2[],double exam3[])
    {
        int calc_tot,above90=0, above80=0, above70=0, above60=0,i,j, fail=0;
        double exam[100][3];
        calc_tot=read_file_in_array(exam);
        for(i=0;i<100;i++)
        {
            for (j=0; j<3; j++)
            {
                exam1[i]=exam[100][0];
                exam2[i]=exam[100][1];
                exam3[i]=exam[100][2];
                if(exam[i][j] <=90 && exam[i][j] >=100)
                {
                    above90++;
                    if(exam[i][j] <=80 && exam[i][j] >=89)
                    {
                        above80++;
                        if(exam[i][j] <=70 && exam[i][j] >=79)
                        {
                            above70++;
                            if(exam[i][j] <=60 && exam[i][j] >=69)
                            {
                                above60++;
                                if(exam[i][j] >=59)
                                {
                                    fail++;
                                }
                            }
                        }
                    }
                }
            }
        }
        return 0;
    }

    int read_file_in_array(double exam[100][3])
    {
        ifstream infile;
        int exam1[100];
        int exam2[100];
        int exam3[100];
        infile.open("grades.txt"); // file containing numbers in 3 columns
        if(infile.fail()) // checks to see if file opened
        {
            cout << "error" << endl;
        }
        int num, i=0,j=0;
        while(!infile.eof()) // reads file to end of line
        {
            for(i=0;i<100;i++) // array numbers less than 100
            {
                for(j=0;j<3;j++) // while reading get 1st array or element
                    infile >> exam[i][j];
                infile >> exam[i][j];
                infile >> exam[i][j];
                cout << exam[i][j] << endl;
                {
                    if (! (infile >> exam[i][j]) )
                        cout << exam[i][j] << endl;
                }
                exam[i][j]=exam1[i];
                exam[i][j]=exam2[i];
                exam[i][j]=exam3[i];
            }
            infile.close();
        }
        return 0;
    }
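    For contrast, here is a minimal corrected sketch of what the assignment describes. Two things stand out in the code above: the grade tests are reversed (a condition like <=90 && >=100 can never be true), and exam[100][0] indexes one past the end of the array. This sketch assumes grades.txt holds 100 rows of three scores, as the question states:

    #include <iostream>
    #include <fstream>
    using namespace std;

    const int STUDENTS = 100, EXAMS = 3;

    int main()
    {
        double exam[STUDENTS][EXAMS];
        ifstream infile("grades.txt");           // three columns: one score per exam
        if (!infile) { cout << "error opening grades.txt" << endl; return 1; }

        for (int i = 0; i < STUDENTS; i++)
            for (int j = 0; j < EXAMS; j++)
                infile >> exam[i][j];            // read each score exactly once

        for (int j = 0; j < EXAMS; j++)
        {
            int a = 0, b = 0, c = 0, d = 0, f = 0;
            for (int i = 0; i < STUDENTS; i++)
            {
                double s = exam[i][j];
                if      (s >= 90) a++;           // note: >=, not the reversed <= tests
                else if (s >= 80) b++;
                else if (s >= 70) c++;
                else if (s >= 60) d++;
                else              f++;
            }
            cout << "Exam " << j + 1 << ": A=" << a << " B=" << b
                 << " C=" << c << " D=" << d << " F=" << f << endl;
        }
        return 0;
    }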

    Read the article

  • Understanding the memory consumption on iPhone

    - by zoul
    Hello! I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with the error code 101. I tried real hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect. I ran the application with the Memory Monitor, Object Alloc, Leaks and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, the Object Alloc settles around 7 MB, Leaks show two or three leaks a few bytes in size, the Gart Object Size is about 5 MB and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went – when I dig into the Object Allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB. I am not aware of allocating anything else that would be worth mentioning, and the Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)

    Update: I really tried to find where I could allocate so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forget about, they should still show up in the Object Allocations, shouldn’t they? I’ve already tried the usual suspects, i.e. the UIImage with its caching, but that did not help. Is there a way to track memory usage “debugger-style”, line by line, watching each statement’s impact on memory usage?

    What I have found so far:

    I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption is really that high. My fault.

    I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but the Memory Monitor can’t tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from “my” memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure.

    I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further, but I could not find any hard numbers. The consensus is that 25–30 MB is the ceiling.

    When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching the websites). When the free memory as shown in the Memory Monitor goes to zero, the system starts killing.

    I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it.

    Read the article

  • Running into some issues with Tumblr's Theme Parser

    - by Kylee
    Below is part of a Tumblr theme I'm working on. You will notice that I use the {block:PostType} syntax to declare the opening tag of each post, which is a <li> element in an Ordered List. This allows me not only to dynamically set the li's class based on the type of post, but also cuts down on the number of times I'm calling the ShareThis JS, which was really bogging down the page. This creates a new issue, though, which I believe is a flaw in Tumblr's parser: each post is an ordered list with one <li> element in it. I know I could solve this by having each post as a <div>, but I really like the control and semantics of using a list. Tumblr gurus? Suggestions? Sample of code:

    {block:Posts}
    <ol class="posts">
      {block:Text}
      <li class="post type_text" id="{PostID}">
        {block:Title}
        <h2><a href="{Permalink}" title="Go to post '{Title}'.">{Title}</a></h2>
        {/block:Title}
        {Body}
      {/block:Text}

      {block:Photo}
      <li class="post type_photo" id="{PostID}">
        <div class="image">
          <a href="{LinkURL}"><img src="{PhotoURL-500}" alt="{PhotoAlt}"></a>
        </div>
        {block:Caption} {Caption} {/block:Caption}
      {/block:Photo}

      {block:Photoset}
      <li class="post type_photoset" id="{PostID}">
        {Photoset-500}
        {block:Caption} {Caption} {/block:Caption}
      {/block:Photoset}

      {block:Quote}
      <li class="post type_quote" id="{PostID}">
        <blockquote>
          <div class="quote_symbol">&ldquo;</div>
          {Quote}
        </blockquote>
        {block:Source} <div class="quote_source">{Source}</div> {/block:Source}
      {/block:Quote}

      {block:Link}
      <li class="post type_link" id="{PostID}">
        <h2><a href="{URL}" {Target} title="Go to {Name}.">{Name}</a></h2>
        {block:Description} {Description} {/block:Description}
      {/block:Link}

      {block:Chat}
      <li class="post type_chat" id="{PostID}">
        {block:Title}
        <h2><a href="{Permalink}" title="Go to post {PostID} '{Title}'.">{Title}</a></h2>
        {/block:Title}
        <table class="chat_log">
          {block:Lines}
          <tr class="{Alt} user_{UserNumber}">
            <td class="person">{block:Label}{Label}{/block:Label}</td>
            <td class="message">{Line}</td>
          </tr>
          {/block:Lines}
        </table>
      {/block:Chat}

      {block:Video}
      <li class="post type_video" id="{PostID}">
        {Video-500}
        {block:Caption} {Caption} {/block:Caption}
      {/block:Video}

      {block:Audio}
      <li class="post type_audio" id="{PostID}">
        {AudioPlayerWhite}
        {block:Caption} {Caption} {/block:Caption}
        {block:ExternalAudio}
        <p><a href="{ExternalAudioURL}" title="Download '{ExternalAudioURL}'">Download</a></p>
        {/block:ExternalAudio}
      {/block:Audio}

      <div class="post_footer">
        <p class="post_date">Posted on {ShortMonth} {DayOfMonth}, {Year} at {12hour}:{Minutes} {AmPm}</p>
        <ul>
          <li><a class="comment_link" href="{Permalink}#disqus_thread">Comments</a></li>
          <li><script type="text/javascript" src="http://w.sharethis.com/button/sharethis.js#publisher=722e181d-1d8a-4363-9ebe-82d5263aea94&amp;type=website"></script></li>
        </ul>
        {block:PermalinkPage}
        <div id="disqus_thread"></div>
        <script type="text/javascript" src="http://disqus.com/forums/kyleetilley/embed.js"></script>
        <noscript><a href="http://disqus.com/forums/kyleetilley/?url=ref">View the discussion thread.</a></noscript>
        {/block:PermalinkPage}
      </div>
      </li>
    </ol>
    {/block:Posts}
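    If I'm reading the template engine right, everything between {block:Posts} and {/block:Posts} is rendered once per post, so the <ol> tags above are emitted for every single post. A sketch of the likely fix (assuming standard Tumblr block semantics) is to hoist the list wrapper outside the posts block, so one list wraps all the per-type <li> elements:

    <ol class="posts">
      {block:Posts}
        {block:Text}
        <li class="post type_text" id="{PostID}">
          ... text markup ...
        {/block:Text}
        ... the other post-type blocks, each opening its own <li> ...
        </li>
      {/block:Posts}
    </ol>

    This keeps one ordered list for the whole page while each post still contributes exactly one <li>.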

    Read the article

  • Problems with multiple setIntervals running simultaneously

    - by Roel V.
    Hello, my first post here. I want to make a horizontal menu with submenus sliding down on mouseover. I know I could use jQuery, but this is to practice my JavaScript skills. I use the following code:

    var up = new Array();
    var down = new Array();
    var submenustart;

    function titleover(headmenu, inter)
    {
        submenu = headmenu.lastChild;
        up[inter] = window.clearInterval(up[inter]);
        down[inter] = window.setInterval("slidedown(submenu)", 1);
    }

    function slidedown(submenu)
    {
        if(submenu.offsetTop < submenustart)
        {
            submenu.style.top = submenu.offsetTop + 1 + "px";
        }
    }

    function titleout(headmenu, inter)
    {
        submenu = headmenu.lastChild;
        down[inter] = window.clearInterval(down[inter]);
        up[inter] = window.setInterval("slideup(submenu)", 1);
    }

    function slideup(submenu)
    {
        if(submenu.offsetTop > submenustart - submenu.clientHeight + 1)
        {
            submenu.style.top = submenu.offsetTop - 1 + "px";
        }
    }

    The variable submenustart gets appointed a value in another function which is not relevant to my question. The HTML looks like this:

    <table class="hoofding" id="hoofding">
      <tr>
        <td onmouseover="titleover(this, 0)" onmouseout="titleout(this, 0)"><a href="#" class="hoofdinglink" id="hoofd1">AAAA</a>
          <table class="menu">
            <tr><td><a href="...">1111</a></td></tr>
            <tr><td><a href="...">2222</a></td></tr>
            <tr><td><a href="...">3333</a></td></tr>
          </table></td>
        <td onmouseover="titleover(this, 1)" onmouseout="titleout(this, 1)"><a href="#" class="hoofdinglink">BBBB</a>
          <table class="menu">
            <tr><td><a href="...">1111</a></td></tr>
            <tr><td><a href="...">2222</a></td></tr>
            <tr><td><a href="...">3333</a></td></tr>
            <tr><td><a href="...">4444</a></td></tr>
            <tr><td><a href="...">5555</a></td></tr>
          </table></td>
        ...
      </tr>
    </table>

    What happens is the following: if I go over and out of (for example) menu A, it works fine. If I now go over menu B, the interval applied to A is now applied to B. There are now two interval functions applied to B: the one originally for A, and a new one triggered by the mouseover on B. If I then go to A, all the intervals are now applied to A. I have been searching for hours and I am completely stuck. Thanks in advance.
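    The behaviour described matches submenu being an implicit global: the string "slidedown(submenu)" handed to setInterval is evaluated later against whatever submenu was assigned last, so every running interval animates the most recently hovered menu. A minimal sketch of the fix is to declare the variable locally and pass a function (a closure) instead of a string:

    function titleover(headmenu, inter)
    {
        var submenu = headmenu.lastChild;     // local to this call, not shared
        window.clearInterval(up[inter]);
        down[inter] = window.setInterval(function () {
            slidedown(submenu);               // each interval closes over its own menu
        }, 1);
    }

    The same change applies to titleout. With var, every menu's interval keeps a reference to its own submenu element rather than a shared global.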

    Read the article

  • Strategy and AI for the game 'Proximity'

    - by smci
    'Proximity' is a strategy game of territorial domination similar to Othello, Go and Risk. Two players; uses a 10x12 hex grid. Game invented by Brian Cable in 2007. Seems to be a worthy game for discussing a) optimal strategy, then b) how to build an AI. Strategies are going to be probabilistic or heuristic-based, due to the randomness factor and the high branching factor (starts out at 120). So it will be kind of hard to compare objectively. A compute time limit of 5s per turn seems reasonable.

    Game: Flash version here and many copies elsewhere on the web. Rules: here.

    Object: to have control of the most armies after all tiles have been placed. Each turn you receive a randomly numbered tile (value between 1 and 20 armies) to place on any vacant board space. If this tile is adjacent to any ally tiles, it will strengthen each tile's defenses +1 (up to a max value of 20). If it is adjacent to any enemy tiles, it will take control over them if its number is higher than the number on the enemy tile.

    Thoughts on strategy: here are some initial thoughts; setting the computer AI to Expert will probably teach a lot (see the sketch after this list for one way to turn them into a heuristic):

    - minimizing your perimeter seems to be a good strategy, to prevent flips and minimize worst-case damage
    - like in Go, leaving holes inside your formation is lethal, only more so with the hex grid because you can lose armies on up to 6 squares in one move
    - low-numbered tiles are a liability, so place them away from your main territory, near the board edges and scattered. You can also use low-numbered tiles to plug holes in your formation, or make small gains along the perimeter which the opponent will not tend to bother attacking.
    - a triangle formation of three pieces is strong since they mutually reinforce, and also reduce the perimeter
    - each tile can be flipped at most 6 times, i.e. when its neighbor tiles are occupied. Control of a formation can flow back and forth. Sometimes you lose part of a formation and plug any holes to render that part of the board 'dead' and lock in your territory / prevent further losses.
    - low-numbered tiles are obvious-but-low-valued liabilities, but high-numbered tiles can be bigger liabilities if they get flipped (which is harder). One lucky play with a 20-army tile can cause a swing of 200 (from +100 to -100 armies). So tile placement will have both offensive and defensive considerations.

    Comments 1, 2 and 4 seem to resemble a minimax strategy where we minimize the maximum expected possible loss (modified by some probabilistic consideration of the value ß the opponent can get from 1..20, i.e. a structure which can only be flipped by a ß=20 tile is 'nearly impregnable'). I'm not clear what the implications of comments 3, 5 and 6 are for optimal strategy. Interested in comments from Go, Chess or Othello players.

    (The sequel ProximityHD for XBox Live allows 4-player cooperative or competitive local multiplayer, which increases the branching factor since you now have 5 tiles in your hand at any given time, of which you can only play one. Reinforcement of ally tiles is increased to +2 per ally.)
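    As a starting point for b), here is a hypothetical one-ply greedy placement sketch in Python. It is not a real Proximity AI, just the heuristic the comments above suggest (capture value, reinforcement, and expected perimeter risk); the board representation, neighbors function and scoring weights are all assumptions:

    # Hypothetical greedy placement for Proximity. `board` maps hex coordinates
    # to (owner, strength) pairs; `neighbors` yields adjacent coordinates.
    # The weights are guesses, not tuned values.

    def score_placement(board, neighbors, cell, tile, me):
        captures = allies = exposed = 0
        for n in neighbors(cell):
            if n not in board:
                exposed += 1                 # open edge: a future flip risk
                continue
            owner, strength = board[n]
            if owner == me:
                allies += 1                  # each adjacent ally gets +1 defense
            elif strength < tile:
                captures += strength         # we would flip this enemy stack
        p_beaten = (20 - tile) / 20.0        # chance a random enemy draw beats us
        return captures + allies - p_beaten * exposed * tile / 6.0

    def best_move(board, neighbors, vacant_cells, tile, me):
        return max(vacant_cells,
                   key=lambda c: score_placement(board, neighbors, c, tile, me))

    A real contender would search deeper and model the opponent's tile distribution, but a scorer like this is enough to compare placements against the built-in Expert AI.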

    Read the article

  • Most simple way to do holiday calculation?

    - by brainfrog
    I want to make a little free calendar program to help me and others calculate how much time we have left in a project. I mean real working time, not just time. Time in a raw form doesn't say much. Typically when my boss tells me that I have time until 05-05-2011, it doesn't really tell me how much time I have to do my job. You know... so many things stop me from work: A) being at home, not at work (so-called "free time" or "spare time"). That is, in my case I work exactly 8 hours a day and then the cleaning ladies throw me out of the office with their incredibly loud industrial vacuum cleaners every evening (my boss accepts that as an excuse to go home on time, regularly). B) weekends, or more precisely Saturdays and Sundays. C) official holidays rescuing me from having to go to work. What I want to do is make a little utility which tells me how many working hours I really have in a given time period. The first two things, A and B, are pretty easy to implement. But the last thing, C, scares my pants off. Holidays. OOOHHH man. You know what that means. Chaos. Pure chaos. The huge question is: HOW TO CALCULATE HOLIDAYS?! Since I want my program to be useful for anyone anywhere in the world, I can't just hardcode all holidays for my little town. So which options do I have? I) I could hand-craft downloadable lists of holidays. Users search them within the application and download them from a webserver. Or I ship all of them in the package. But I would get very, very old if I tried that by myself for every country, state and town. II) I make an initial data sheet with holidays for my town, and don't care about the rest. However, I make that sheet with a how-to public, so that everyone who feels like being very nice can provide holiday data for his country / region / whatever. Those are made public on a webserver and everyone can get the data packages he/she needs for the app. III) ? I care a lot about usability. I don't want to make an ugly, Linux-hack-style, hard-to-use app that only computer freaks can use. So you need to tell me more about holiday science. I was never really clever at this. I assume every single country in the world has its own set of holidays. In every country there may be several states. For example the US has some, and Germany also has some states. Holidays vary from state to state. But I know from a good programmer: never assume anything. So the questions about holiday science are: Which categories do I need to make holiday data packs searchable? A guy from India should quickly find his holiday data pack, and a guy from Silicon Valley should find his pack equally fast. It makes most sense to me to filter for COUNTRY, then STATE, then WHATEVER, like a drill-down search. Did I miss something? What would be the best data format to hold holiday information? A holiday has a start and end date and a name. That should be enough. Would I put all this stuff in thousands of XML files? How would you go about this? Any hint / help is highly welcome! Thanks to everyone!
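    On the data-format question: since a holiday only needs a name plus start and end dates, one small file per country/state pack would be plenty, and the country/state attributes give you the drill-down search fields for free. A hypothetical sketch (the tag and attribute names here are made up, not any standard):

    <holidaypack country="DE" state="Bavaria">
      <holiday name="Tag der Deutschen Einheit" start="2011-10-03" end="2011-10-03"/>
      <holiday name="Christmas" start="2011-12-25" end="2011-12-26"/>
    </holidaypack>

    A few hundred packs like this, indexed by country and state, would cover option II without thousands of files.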

    Read the article

  • In Ruby, how do I read memory values from an external process?

    - by grg-n-sox
    So all I simply want to do is make a Ruby program that reads some values from known memory addresses in another process's virtual memory. Through my research and basic knowledge of hex editing a running process's x86 assembly in memory, I have found the base address and offsets for the values in memory I want. I do not want to change them; I just want to read them. I asked a developer of a memory editor how to approach this, abstract of language and assuming a Windows platform. He told me the Win32API calls for OpenProcess, CreateProcess, ReadProcessMemory, and WriteProcessMemory were the way to go using either C or C++. I think that the way to go would be just using the Win32API class and mapping two instances of it: one for either OpenProcess or CreateProcess, depending on whether the user already has the process running or not, and another instance mapped to ReadProcessMemory. I probably still need to find the function for getting the list of running processes so I know which running process is the one I want if it is running already. This would take some work to put all together, but I am figuring it wouldn't be too bad to code up. It is just a new area of programming for me, since I have never worked this low level from a high level language (well, higher level than C anyway). I am just wondering about the ways to approach this. I could just use a bunch of Win32API calls, but that means having to deal with a bunch of string and array packing and unpacking that is system dependent. I want to eventually make this work cross-platform, since the process I am reading from is produced from an executable that has multiple platform builds. (I know the memory address changes from system to system. The idea is to have a flat file that contains all memory mappings so the Ruby program can just match the current platform environment to the matching memory mapping.) But from the looks of things, I'll just have to make a class that wraps whatever the current platform's shared-library memory-related function calls are. For all I know, there could already exist a Ruby gem that takes care of all of this for me that I am just not finding. I could also possibly try editing the executables for each build so that whenever the memory values I want to read are written to by the process, it also writes a copy of the new value to a space in shared memory, which I somehow have Ruby wrap in a class instance that is a pointer under the hood to that shared memory address, and somehow signal to the Ruby program that the value was updated and should be reloaded. Basically, an interrupt-based system would be nice, but since the purpose of reading these values is just to send them to a scoreboard broadcast from a central server, I could just stick to a polling-based system that sends updates at fixed time intervals. I could also just abandon Ruby altogether and go for C or C++, but I do not know those nearly as well. I actually know more x86 than C++, and I only know C as far as system-independent ANSI C, and have never dealt with shared system libraries before. So is there a gem or lesser-known module available that has already done this? If not, then any additional information as to how to accomplish this would be nice. I guess, long story short: how do I do all this? Thanks in advance, Grg. PS: Also, a confirmation that those Win32API calls should be aimed at the kernel32.dll library would be nice.
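    For the Windows-only core of this, a minimal sketch using Ruby's old Win32API wrapper, aimed at kernel32.dll as suspected. The process id and address below are hypothetical placeholders, and error handling is kept to a minimum:

    require 'Win32API'

    PROCESS_VM_READ           = 0x0010
    PROCESS_QUERY_INFORMATION = 0x0400

    open_process        = Win32API.new('kernel32', 'OpenProcess',       %w[L I L],     'L')
    read_process_memory = Win32API.new('kernel32', 'ReadProcessMemory', %w[L L P L P], 'I')
    close_handle        = Win32API.new('kernel32', 'CloseHandle',       %w[L],         'I')

    pid     = 1234        # hypothetical: the target process id
    address = 0x00A7F000  # hypothetical: the base address found via the memory editor

    handle = open_process.call(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, 0, pid)
    raise 'OpenProcess failed' if handle.zero?

    buffer     = [0].pack('L')  # room for one 32-bit value
    bytes_read = [0].pack('L')
    ok = read_process_memory.call(handle, address, buffer, buffer.size, bytes_read)
    raise 'ReadProcessMemory failed' if ok.zero?

    value = buffer.unpack('L').first
    puts "value at 0x#{address.to_s(16)}: #{value}"
    close_handle.call(handle)

    The pack/unpack format strings are where the system-dependence mentioned above lives; a cross-platform version would hide them behind the per-platform wrapper class described in the question.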

    Read the article

  • Toggle KML Layers, Infowindow isn't working

    - by user1653126
    I have this code, i am trying to toggle some kml layers. The problem is that when i click the marker it isn't showing the infowindow. Maybe someone can show me my error. Thanks. Here is the CODE <!DOCTYPE html> <html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no" /> <style type="text/css"> html { height: 100% } body { height: 100%; margin: 0; padding: 0 } #map_canvas { height: 100% } </style> <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=IzaSyAvj6XNNPO8YPFbkVR8KcTl5LK1ByRHG1E&sensor=false"> </script> <script type="text/javascript"> var map; // lets define some vars to make things easier later var kml = { a: { name: "Productores", url: "https://maps.google.hn/maps/ms?authuser=0&vps=2&hl=es&ie=UTF8&msa=0&output=kml&msid=200984447026903306654.0004c934a224eca7c3ad4" } // keep adding more if you like }; // initialize our goo function initializeMap() { var options = { center: new google.maps.LatLng(13.324182,-87.080071), zoom: 8, mapTypeId: google.maps.MapTypeId.TERRAIN } map = new google.maps.Map(document.getElementById("map_canvas"), options); createTogglers(); }; google.maps.event.addDomListener(window, 'load', initializeMap); // the important function... kml[id].xxxxx refers back to the top function toggleKML(checked, id) { if (checked) { var layer = new google.maps.KmlLayer(kml[id].url, { preserveViewport: true, suppressInfoWindows: true }); // store kml as obj kml[id].obj = layer; kml[id].obj.setMap(map); } else { kml[id].obj.setMap(null); delete kml[id].obj; } }; // create the controls dynamically because it's easier, really function createTogglers() { var html = "<form><ul>"; for (var prop in kml) { html += "<li id=\"selector-" + prop + "\"><input type='checkbox' id='" + prop + "'" + " onclick='highlight(this,\"selector-" + prop + "\"); toggleKML(this.checked, this.id)' \/>" + kml[prop].name + "<\/li>"; } html += "<li class='control'><a href='#' onclick='removeAll();return false;'>" + "Remove all layers<\/a><\/li>" + "<\/ul><\/form>"; document.getElementById("toggle_box").innerHTML = html; }; function removeAll() { for (var prop in kml) { if (kml[prop].obj) { kml[prop].obj.setMap(null); delete kml[prop].obj; } } }; // Append Class on Select function highlight(box, listitem) { var selected = 'selected'; var normal = 'normal'; document.getElementById(listitem).className = (box.checked ? selected: normal); }; </script> <style type="text/css"> .selected { font-weight: bold; } </style> </head> <body> <div id="map_canvas" style="width: 50%; height: 200px;"></div> <div id="toggle_box" style="position: absolute; top: 200px; right: 1000px; padding: 20px; background: #fff; z-index: 5; "></div> </body> </html>
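    The likely culprit is in the code above: the layer is created with suppressInfoWindows: true, and that KmlLayer option tells the Maps API not to open info windows for KML features at all. A minimal sketch of the change inside toggleKML:

    var layer = new google.maps.KmlLayer(kml[id].url, {
        preserveViewport: true,
        suppressInfoWindows: false  // let the API open the feature's info window on click
    });

    Removing the option entirely has the same effect, since it defaults to false.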

    Read the article

  • What is the best way to test Grails using IDEA?

    - by egervari
    I am seriously having a very non-pleasant time testing using Grails. I will describe my experience, and I'd like to know if there's a better way.

    The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside of an integration test. So let's say you have a domain class with 12 fields, and 1 of them is violating a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail. This is most troublesome because the thingy under test is probably fine... and the real risk and pain is the setup code for the test itself. So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced by everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code that is running as part of a test.

    Integration tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database and doing dbunit dumps after every test to happen in about the same time! This is dumb.

    It is hard to run all the unit tests and not the integration tests in IDEA. Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to launch up their file system to hunt the stupid HTML file down. What a pain. Then once you've got the HTML file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you gotta go back and find it yourself. ARGGH!@!@! Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it.

    Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.
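    On the first complaint, if memory serves, Grails (from 1.2 on) can make failOnError the default for every save() via configuration, so the whole team gets the behaviour without sprinkling the argument through the code:

    // grails-app/conf/Config.groovy - makes every save() behave
    // like save(failOnError: true), in tests and application code alike
    grails.gorm.failOnError = true

    Scoping this to just the test environment should also be possible by placing the setting inside the corresponding environments block.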

    Read the article

  • Hardware/Software inventory open source projects

    - by Dick dastardly
    Dear Stackoverflowers, I would like to develop a Network Inventory application that works on any operating system, reports on every possible resource attached to a network, and reports all pertinent details of hardware and software. That's (and I hate to use the phrase) my "End Game". However, I am running before I can crawl here. I have no experience of this type of development, e.g. discovering a computer's hardware and software settings. I've spent almost two weeks googling and come up short! :-( So I am turning to you to ask these questions: My first step is to find an existing open source project I can incorporate into my own code that extracts the fine-grained details I am after, e.g. EVERYTHING there is to know about the hardware and software on a single machine. Does this project exist? Or do I have to develop that first? Have I got to write all this in C? I am guessing getting this information about a computer is going to be easier than for printers, scanners, routers etc., e.g. everything else you would find attached to a network. Once I have access to a single computer's details, I then need to investigate how I can traverse an entire network of printers, scanners, routers, load balancers, switches, firewalls, workstations, servers, storage devices, laptops, monitors; the list goes on and on. One problem I have is I don't have a 1000-machine network to play on! Is there any such resource available on the internet? (Is that a silly question?) Anywho, if you don't ask you won't find out! One aspect I am really looking forward to is finding out how to traverse the entire network; should I be using TCP/IP for this? What's a good site, blog, usergroup, or book for TCP/IP development? How do I go about getting through firewalls? How many questions can I ask in one go? :-) My previous question on this topic ended up with PYTHON being championed as the language/script to go with to develop this application in. Having looked at a few PYTHON examples, they all seemed to be related to WINDOWS networks and interrogating Windows Management Instrumentation (WMI). I had the feeling you can't rely on what's in WMI, and even if you can, that's no good for UNIX networks. Surely there exists common code for extracting hardware and software details from a computer? Why can't I find it on the internet? Please help! There are no prizes though :-( Thanks in advance. I would like to apologise if I have broken forum rules or not tried hard enough on my own before asking for assistance. I just would like to start moving forward with this, as it's one of the best projects I have been involved with. I am inspired by the many different challenges involved, and I hope that if I manage to produce a useful application at the end of it, it will be extremely helpful to many people. That's it. Thanks in advance. DD

    Read the article

  • Blogger issue with custom background images on Chrome and Safari

    - by hdx
    This is weird. The site in question is blog.andrebatista.info and it is hosted at blogger.com. I'm trying to make the Blogger template look the same as the one on my main website, www.andrebatista.info. For some reason, if I go directly to the blog address, Chrome and Safari fail to display all of my background images... all of them. However, if I first go to www.andrebatista.info and then go to the blog, it renders just fine ?¿ The way I'm customizing it is by adding a link to my main site's stylesheet at the very end of the head tag on the Blogger template. That stylesheet is displayed below:

    *{ margin:0; padding:0; border:0; }
    html,body {
      background:#064169 url(http://www.andrebatista.info/images/main_gradient_slice.jpg) repeat-x;
      font-family: Arial, "MS Trebuchet", sans-serif;
      font-size:18px;
    }
    #main_wrapper{ margin: 0 auto; width:1024px; }
    #header{
      background: url(http://www.andrebatista.info/images/header.jpg);
      height:133px;
      border:none;
      margin:0;
    }
    #menu_wrapper{
      background: url(http://www.andrebatista.info/images/menu.jpg);
      height:34px;
      overflow:hidden;
    }
    #menu_wrapper .menu_item{
      color:white;
      float: left;
      border: 1px solid red; /* was "1 px" in the original, which is invalid CSS */
      height: 34px;
      padding-top:10px;
      text-align:center;
      width:100px;
    }
    #menu_wrapper .first{ padding-left:240px; float:left; width:100px; }
    #menu_wrapper .active,#menu_wrapper .menu_item:hover{
      background-color:white;
      color:Teal;
      cursor: pointer;
    }
    #content_area_wrapper{ background: url(http://www.andrebatista.info/images/body_bg_slice.jpg) repeat-y; }
    #content_area{
      min-height:524px;
      background: url(http://www.andrebatista.info/images/main_content_top.jpg) repeat-x;
    }
    #main_banner{
      background: url(http://www.andrebatista.info/images/main_banner.jpg) no-repeat center center;
      width:662px;
      height:338px;
      margin: 0 auto;
    }
    #main_banner div{
      color:white;
      padding-left:47px;
      padding-right:164px;
      padding-top:105px;
    }
    #text_area{ overflow:hidden; width:662px; margin:0 auto; padding:14px; }
    #contentList{ padding:0 20px 20px 20px; }
    #text_area .left{ width:50%; float:left; }
    #text_area .right{ width:50%; float:right; } /* the original declared ".left" twice; the second rule was presumably meant to be ".right" */
    #footer2{ background: url(http://www.andrebatista.info/images/footer.jpg); height:62px; }
    #footer{ background: url(http://www.andrebatista.info/images/footer.jpg); height:62px; }

    Any ideas on what I could be missing?

    Read the article

  • Why do some Flask session values disappear from the session after closing the browser window, but then reappear later without me adding them?

    - by Ben
    So my understanding of Flask sessions is that I can use the session like a dictionary and add values to it by doing:

    session['key name'] = 'some value here'

    And that works fine. On a route the client calls using AJAX POST, I assign a value to the session. And it works fine. I can click on various pages of my site and the value stays in the session. If I close the browser window, however, and then go back to my site, the session value I had in there is gone. So that's weird, and you would think the problem is the session isn't permanent. I also implemented Flask-Openid, and that uses the session to store information, and that does persist if I close the browser window and open it back up again. I also checked the cookie after closing the browser window, but before going back to my site, and the cookie is indeed still there. Another odd piece of behaviour (which may be related) is that some values I have written to the session for testing purposes will go away when I access the AJAX POST route and assign the correct value. So that is odd, but what is truly weird is that when I then close the browser window and open it up again, and have thus lost the value I was trying to retain, the ones that I lost previously actually return! They aren't being reassigned, because there's no code in my Python files to reassign those values. I searched my code for 'yo' and there are no instances of me assigning that value anywhere. I think I may have added it to the session days ago. So it seems like it is persisting, but being hidden when the other values are written.

    Here are some outputs to help make it clearer. They are all output from a route for a real page, not the AJAX POST route I mentioned above. This is the output after I have assigned the value I want to store in the session. The value key is 'userid'; all the other values are dummy ones I have added in trying to solve this problem. 'userid': 8 will stay in the session as long as I don't close the browser window. I can access other routes and the value will stay there just like it should.

    ['session.=', <SecureCookieSession {'userid': 8, 'test_variable_num': 102, 'adding using before request': 'hi', '_permanent': True, 'test_variable_text': 'hi!'}>]

    If I close the browser window and go back to the site, but without redoing the AJAX POST request, I get this output:

    ['session.=', <SecureCookieSession {'adding using before request': 'hi', '_permanent': True, 'yo': 'yo'}>]

    The 'yo' value was not in the first output. I don't know where it came from. I searched my code and there is no code assigning it. So it seems to persist, but be hidden when the other values are written.

    And this last one is me accessing the AJAX POST route again, and then going to the page that prints out the keys using debug. Same output as the first output I pasted above, which you would expect, and the 'yo' value is gone again (but it will come back if I close the browser window):

    ['session.=', <SecureCookieSession {'userid': 8, 'test_variable_num': 102, 'adding using before request': 'hi', '_permanent': True, 'test_variable_text': 'hi!'}>]

    I tested this in both Chrome and Firefox. So I find this weird, and I am guessing it stems from a misunderstanding of how sessions work. I think they're dictionaries, and I can write dictionary values into them and retrieve them days later as long as I set the session to permanent and the cookie doesn't get deleted. Any ideas why this weird behaviour is happening?
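    For what it's worth, the symptoms described (values surviving navigation but not a browser restart) are exactly what happens when the session cookie is issued as a browser-session cookie: Flask only sets an expiring cookie when the session is marked permanent in a request that writes to it. A minimal sketch, with a hypothetical route name standing in for the real AJAX endpoint:

    from datetime import timedelta
    from flask import Flask, session

    app = Flask(__name__)
    app.secret_key = 'pick-one-fixed-secret'           # a key that changes invalidates every old cookie
    app.permanent_session_lifetime = timedelta(days=31)

    @app.route('/save-user', methods=['POST'])         # hypothetical stand-in for the AJAX route
    def save_user():
        session.permanent = True                       # must be set in the request that writes the value
        session['userid'] = 8
        return 'ok'

    Since the dumps above do already show '_permanent': True, the other thing worth checking is whether two cookies exist for different domains (e.g. the www and bare domain), since the AJAX request and the page views could then be reading different sessions.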

    Read the article

  • this program runs but not correctly: brief description of what i am trying to do, can someone tell me i

    - by user320950
    This is what I have to do: write a program that determines the grade dispersal for 100 students. You are to read the exam scores into three arrays, one array for each exam. You must then calculate how many students scored A’s (90 or above), B’s (80 or above), C’s (70 or above), D’s (60 or above), and F’s (less than 60). Do this for each exam and write the distribution to the screen.

    // basic file operations
    #include <iostream>
    #include <fstream>
    using namespace std;

    int read_file_in_array(double exam[100][3]);
    double calculate_total(double exam1[], double exam2[], double exam3[]); // function that calculates how many scored 90, 80, 70, 60
    //void display_totals();

    double exam[100][3];

    int main()
    {
        double go, go2, go3;
        double exam[100][3], exam1[100], exam2[100], exam3[100];
        go = read_file_in_array(exam);
        go2 = calculate_total(exam1, exam2, exam3);
        //go3 = display_totals();
        cout << go, go2, go3;
        return 0;
    }

    /*
    int display_totals()
    {
        int grade_total;
        grade_total = calculate_total(exam1, exam2, exam3);
        return 0;
    }
    */

    double calculate_total(double exam1[], double exam2[], double exam3[])
    {
        int calc_tot, above90 = 0, above80 = 0, above70 = 0, above60 = 0, i, j, fail = 0;
        double exam[100][3];
        calc_tot = read_file_in_array(exam);
        for (i = 0; i < 100; i++) {
            for (j = 0; j < 3; j++) {
                exam1[i] = exam[100][0];
                exam2[i] = exam[100][1];
                exam3[i] = exam[100][2];
                if (exam[i][j] <= 90 && exam[i][j] >= 100) {
                    above90++;
                    {
                        if (exam[i][j] <= 80 && exam[i][j] >= 89) {
                            above80++;
                            {
                                if (exam[i][j] <= 70 && exam[i][j] >= 79) {
                                    above70++;
                                    {
                                        if (exam[i][j] <= 60 && exam[i][j] >= 69) {
                                            above60++;
                                            {
                                                if (exam[i][j] >= 59) {
                                                    fail++;
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
        return 0;
    }

    int read_file_in_array(double exam[100][3])
    {
        ifstream infile;
        int exam1[100];
        int exam2[100];
        int exam3[100];
        infile.open("grades.txt"); // file containing numbers in 3 columns
        if (infile.fail()) // checks to see if file opened
        {
            cout << "error" << endl;
        }
        int num, i = 0, j = 0;
        while (!infile.eof()) // reads file to end of line
        {
            for (i = 0; i < 100; i++) // array numbers less than 100
            {
                for (j = 0; j < 3; j++) // while reading get 1st array or element
                    infile >> exam[i][j];
                infile >> exam[i][j];
                infile >> exam[i][j];
                cout << exam[i][j] << endl;
                {
                    if (!(infile >> exam[i][j]))
                        cout << exam[i][j] << endl;
                }
                exam[i][j] = exam1[i];
                exam[i][j] = exam2[i];
                exam[i][j] = exam3[i];
            }
            infile.close();
        }
        return 0;
    }
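    For what it's worth, several things in the code above can never work: the range tests are inverted (a score cannot be both <= 90 and >= 100), the grade buckets are nested inside each other so the inner ones are unreachable, exam[100][0] indexes one past the end of the array, and the file loop reads each cell several times. A corrected sketch of the read-and-tally logic, assuming grades.txt holds three whitespace-separated scores per line, might look like this; it is one working approach, not necessarily the structure the assignment requires:

    #include <iostream>
    #include <fstream>
    using namespace std;

    const int NUM_STUDENTS = 100;
    const int NUM_EXAMS = 3;

    int main()
    {
        double exam[NUM_STUDENTS][NUM_EXAMS];

        ifstream infile("grades.txt"); // three scores per line, one line per student
        if (!infile) {
            cout << "error opening grades.txt" << endl;
            return 1;
        }

        // Read every score exactly once.
        for (int i = 0; i < NUM_STUDENTS; i++)
            for (int j = 0; j < NUM_EXAMS; j++)
                infile >> exam[i][j];
        infile.close();

        // Tally the grade distribution separately for each exam.
        for (int j = 0; j < NUM_EXAMS; j++) {
            int a = 0, b = 0, c = 0, d = 0, f = 0;
            for (int i = 0; i < NUM_STUDENTS; i++) {
                double score = exam[i][j];
                if (score >= 90)      a++; // the original tests had these reversed
                else if (score >= 80) b++;
                else if (score >= 70) c++;
                else if (score >= 60) d++;
                else                  f++;
            }
            cout << "Exam " << j + 1 << ": "
                 << a << " A's, " << b << " B's, " << c << " C's, "
                 << d << " D's, " << f << " F's" << endl;
        }
        return 0;
    }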

    Read the article
