Search Results

Search found 48844 results on 1954 pages for 'first steps'.

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May, which made it even more important and interesting to go forward with a great topic for this month. Earlier this year I had already spoken to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed. It was just about finding the right date for the action. Furthermore, it was a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates with a book title Avinash is currently working on. All the best with your approach on this, and reach out to our MSCC craftsmen for reviews. Once again a big thank you to Orange Ebene Accelerator for providing the venue for us, and to the MSCC members involved in securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one, as there are some interesting and exciting options coming up soon. Okay, let's talk about the meeting and version control systems again. As usual, here is my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first-timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers." And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about that further down...

    Reactions of other attendees

    If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them:

    "Inspiring. Helped me understand more about GIT." -- Sean on event comments

    "Joined the meetup today with literally no idea what a version control system is. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments

    "Was present today and I'm very satisfied. I was not aware that there was such a tool like git available. Thanks to those who contributed to this meetup. It was great. Learned a lot from this meetup!!" -- Leonardo on event comments

    "Seriously, I can see how it's going to ease my task and help me save time. Gone are the issues with file backups. And since I'll be doing my dissertation this year, using Git would help me a lot with my backups, and I'm grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls

    Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other improve and evolve in our professional careers.

    Our agenda of the day

    Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs, and usually there might be additional people coming along. Then, for whatever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey, it went smoothly for the rest of the time. And last but not least...
    our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper, and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of university students - first year, second year and recent graduates - among them. Surprisingly, none of them had ever been in contact with version control systems at all. I mean, this is a shocking discovery! Similar to the ability to touch-type, I'd say that being able to use (and master) any kind of version control system is compulsory for any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself.

    git - a modern approach to VCS - Nayar

    What a tour! Nayar gave us the full round of git from start to finish, even touching on some more advanced techniques. First, he explained the importance of version control systems as an essential tool for software developers, even when working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of the source code at any time. Then he showed how easy it is to install git on an Ubuntu-based system, but also mentioned that git is literally available for any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address, which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of a blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as a reference. The syntax and the procedure are always the same, and Nayar clearly mentioned this to the attendees. Now, having a git repository in place, it was about time to work on some "important" changes to the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment." Well, rest assured that during daily development cycles you will need fewer than 10 git commands on a regular basis: git add, commit, push, pull, checkout, and merge. And Nayar demo'd all of them. Much to the delight of everyone, he also showed gitk, which is the git repository browser. It's a UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision.

    Using gitk to display and browse information of a local git repository

    And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository to a remote system on github. He showed the web-based repository browser and history handling, and then also explained and demo'd how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects.
    Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details of their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free, but there might be other important aspects that have an impact on your decision. Anyway, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development.

    Visual Studio Online (VSO) - Jochen

    Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository, but there is more in that package. Of course, it is related to software development on Windows systems, and the bonds are tightened towards the use of Visual Studio, but from experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smoothly as expected. So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates Microsoft's Application Lifecycle Management (ALM) into the platform. This means you get agile project management with backlogs, sprints and burn-down charts, as well as the ability to track tasks, bug reports and work items next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly at the beginning of our meeting, VSO gives you the possibility of an automated continuous integration (CI) process which builds and can run tests of your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green, actually - and not only simplifies your life as a software developer but also reduces the sources of potential errors.

    Seamless integration and automated deployment between Microsoft Azure Web Sites and git repository

    But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially while working on web projects, it's absolutely astonishing that as soon as you commit your changes it just takes a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites.

    Upcoming Events and networking

    Due to the adjusted times, everybody was kind of hungry, and we didn't follow up on networking or upcoming events - very unfortunate in my opinion, and this will have an impact on future planning of our meetups, because I would rather see more conversations during and at the end of our meetings than everyone just packing their laptops, bags and accessories and rushing off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging...
    In case you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks Avinash for your vital input into today's conversation, and I'm looking forward to getting a grip on your book very soon.

    My resume of the day

    Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without your seat belt fastened or riding your bike without a safety helmet. You don't do that! End of discussion. ;-) Nowadays, with access to free (as in cost) tools to install on your machine and numerous online platforms that host your source code for free for up to five users, it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview of how to start using git and how to connect to various remote services like github or VSO.

    Read the article

  • C#/.NET – Finding an Item's Index in IEnumerable<T>

    - by James Michael Hare
    Sorry for the long blogging hiatus. First it was, of course, the holiday hustle and bustle, then my brother and his wife welcomed their son, so I've been away from my blogging for two weeks.

    Background: Finding an item's index in List<T> is easy...

    Many times in our day-to-day programming activities, we want to find the index of an item in a collection. Now, if we have a List<T> and we're looking for the item itself, this is trivial:

        // assume we have a list of ints:
        var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 };

        // can find the exact item using IndexOf()
        var pos = list.IndexOf(64);

    This will return the position of the item if it's found, or -1 if not. It's easy to see how this works for primitive types where equality is well defined. For complex types, however, it will attempt to compare them using EqualityComparer<T>.Default which, in a nutshell, relies on the object's Equals() method. So what if we want to search for a condition instead of equality? That's also easy in a List<T> with the FindIndex() method:

        // assume we have a list of ints:
        var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 };

        // finds index of first even number or -1 if not found.
        var pos = list.FindIndex(i => i % 2 == 0);

    Problem: Finding an item's index in IEnumerable<T> is not so easy...

    This is all well and good for lists, but what if we want to do the same thing for IEnumerable<T>? A collection of IEnumerable<T> has no indexing, so there's no direct method to find an item's index. LINQ, as powerful as it is, gives us many tools to get us this information, but not in one step. As with almost any problem involving collections, there are several ways to accomplish the same goal. And once again, as with almost any problem involving collections, the choice of solution somewhat depends on the situation. So let's look at a few possible alternatives. I'm going to express each of these as extension methods for simplicity and consistency.

    Solution: The TakeWhile() and Count() combo

    One of the things you can do is to perform a TakeWhile() on the list as long as your find condition is not true, and then do a Count() of the items it took. The only downside to this method is that if the item is not in the list, the index will be the full Count() of items, and not -1. So if you don't know the size of the list beforehand, this can be confusing.

        // a collection of extra extension methods off IEnumerable<T>
        public static class EnumerableExtensions
        {
            // Finds an item in the collection, similar to List<T>.FindIndex()
            public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
            {
                // note if item not found, result is length and not -1!
                return list.TakeWhile(i => !finder(i)).Count();
            }
        }

    Personally, I don't like switching the paradigm of not found away from -1, so this is one of my least favorites.

    Solution: Select with index

    Many people don't realize that there is an alternative form of the LINQ Select() method that will provide you an index of the item being selected:

        list.Select( (item, index) => /* do something here with the item and/or index... */ )

    This can come in handy, but must be treated with care. This is because the index provided only pertains to the result of previous operations (if any).
    For example:

        // assume we have a list of ints:
        var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 };

        // you'd hope this would give you the indexes of the even numbers,
        // which would be 2, 3, 8, but in reality it gives you 0, 1, 2
        list.Where(item => item % 2 == 0).Select((item, index) => index);

    The reason the example gives you the collection { 0, 1, 2 } is because the Where() clause passes over any items that are odd, and therefore only the even items are given to the Select() and only they are given indexes. Conversely, we can't select the index and then test the item in a Where() clause, because then the Where() clause would be operating on the index and not the item! So, what we have to do is to select the item and index and put them together in an anonymous type. It looks ugly, but it works:

        // extensions defined on IEnumerable<T>
        public static class EnumerableExtensions
        {
            // finds an item in a collection, similar to List<T>.FindIndex()
            public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
            {
                // if you don't name the anonymous type's properties, they take the variable names
                return list.Select((item, index) => new { item, index })
                    .Where(p => finder(p.item))
                    .Select(p => p.index + 1)
                    .FirstOrDefault() - 1;
            }
        }

    So let's look at this, because I know it's convoluted: the first Select() joins the items and their indexes into an anonymous type. Where() filters that list to only the ones matching the predicate. The second Select() picks the index of the matches and adds 1 - this is to distinguish between not found and the first item. FirstOrDefault() returns the first item found from the previous clauses or default (zero) if not found. Finally, we subtract one so that not found (zero) becomes -1, and the first item (one) becomes zero. The bad thing is, this is ugly as hell and creates anonymous objects for each item tested until it finds the match. This concerns me a bit, but we'll defer judgment until we compare the relative performances below.

    Solution: Convert ToList() and use FindIndex()

    This solution is easy enough. We know any IEnumerable<T> can be converted to List<T> using the LINQ extension method ToList(), so we can easily convert the collection to a list and then just use the FindIndex() method baked into List<T>.

        // a collection of extension methods for IEnumerable<T>
        public static class EnumerableExtensions
        {
            // find the index of an item in the collection, similar to List<T>.FindIndex()
            public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
            {
                return list.ToList().FindIndex(finder);
            }
        }

    This solution is simplicity itself! It is very concise and elegant, and you need not worry about anyone misinterpreting what it's trying to do (as opposed to the more convoluted LINQ methods above). But the main thing I'm concerned about here is the performance hit to allocate the List<T> in the ToList() call - once again, we'll explore that in a second.

    Solution: Roll your own FindIndex() for IEnumerable<T>

    Of course, you can always roll your own FindIndex() method for IEnumerable<T>. It would be a very simple loop which scans for the item and counts as it goes.
    There are many ways to do this, but one such way might look like:

        // extension methods for IEnumerable<T>
        public static class EnumerableExtensions
        {
            // Finds an item matching a predicate in the enumeration, much like List<T>.FindIndex()
            public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
            {
                int index = 0;
                foreach (var item in list)
                {
                    if (finder(item))
                    {
                        return index;
                    }

                    index++;
                }

                return -1;
            }
        }

    Well, it's not quite simplicity, and those less familiar with LINQ may prefer it since it doesn't include all of the lambdas and behind-the-scenes iterators that come with deferred execution. But does having this long, blown-out method really gain us much in performance?

    Comparison of Proposed Solutions

    So we've now seen four solutions; let's analyze their collective performance. I took each of the four methods described above and ran them over 100,000 iterations against lists of size 10, 100, 1000, and 10000, and here are the performance results. I looked for targets at the beginning of the list (best case), the middle of the list (average case) and not in the list (worst case, as it must scan the whole list). Each of the times below is the average time in milliseconds for one execution as computed over the 100,000 iterations:

    Searches Matching First Item (Best Case)

                   10      100     1000    10000
        TakeWhile  0.0003  0.0003  0.0003  0.0003
        Select     0.0005  0.0005  0.0005  0.0005
        ToList     0.0002  0.0003  0.0013  0.0121
        Manual     0.0001  0.0001  0.0001  0.0001

    Searches Matching Middle Item (Average Case)

                   10      100     1000    10000
        TakeWhile  0.0004  0.0020  0.0191  0.1889
        Select     0.0008  0.0042  0.0387  0.3802
        ToList     0.0002  0.0007  0.0057  0.0562
        Manual     0.0002  0.0013  0.0129  0.1255

    Searches Where Not Found (Worst Case)

                   10      100     1000    10000
        TakeWhile  0.0006  0.0039  0.0381  0.3770
        Select     0.0012  0.0081  0.0758  0.7583
        ToList     0.0002  0.0012  0.0100  0.0996
        Manual     0.0003  0.0026  0.0253  0.2514

    Notice something interesting here: you'd think the "roll your own" loop would be the most efficient, but it only wins when the item is first (or very close to it), regardless of list size. In almost all other cases, though, and in particular the average case and worst case, the ToList()/FindIndex() combo wins for performance, even though it is creating some temporary memory to hold the List<T>. If you examine the algorithm, the reason why is most likely that once it's in List<T> form, internally FindIndex() scans the internal array, which is much more efficient to iterate over. Thus, it takes a one-time performance hit (not including any GC impact) to create the List<T>, but after that the performance is much better.

    Summary

    If you're concerned about too many throw-away objects, you can always roll your own FindIndex() method, but for sheer simplicity and overall performance, the ToList()/FindIndex() combo performs best on nearly all list sizes in the average and worst cases.

    Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,Software,LINQ,List

    Read the article

  • Ambiguity between multitouch gestures tap and free drag in Windows Phone 8 Emulator (Monogame)

    - by Moses Aprico
    I am making a 2D tile-based tactics game. I want the map to be slid around (because it's bigger than the screen) with FreeDrag (that part is perfectly done; the map can be moved around, that's not the problem). And then, I want to display the character's actions every time it's tapped. The problem then appeared: every time I want to FreeDrag the map, the Tap trigger always fires first before the FreeDrag one. Is there any way to distinguish the map sliding from the character tapping? Below is my code.

        while (TouchPanel.IsGestureAvailable)
        {
            GestureSample gesture = TouchPanel.ReadGesture();
            switch (gesture.GestureType)
            {
                case GestureType.FreeDrag:
                    {
                        //a
                    }
                    break;
                case GestureType.Tap:
                    {
                        //b
                    }
                    break;
            }
        }

    Every time I first want to free drag (at the first touch), it always goes to "b" first (see the commented line above), and then to "a", rather than immediately going to "a". I've tried Flick, but it seems the movement produced by Flick is too fast, so FreeDrag fits best. Is there any way or workaround to perform FreeDrag (or similar) without firing the Tap trigger? Thanks in advance.
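    One possible workaround - a sketch, assuming it is acceptable to bypass the gesture API for this screen and read raw touch state via TouchPanel.GetState() - is to commit a tap only when the finger is released without having moved past a small threshold:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input.Touch;

        public class MapInput
        {
            // the 15-pixel threshold is an assumption; tune it for your screen
            private Vector2 pressPosition;
            private bool dragging;

            public void Update()
            {
                TouchCollection touches = TouchPanel.GetState();
                foreach (TouchLocation touch in touches)
                {
                    if (touch.State == TouchLocationState.Pressed)
                    {
                        pressPosition = touch.Position;
                        dragging = false;
                    }
                    else if (touch.State == TouchLocationState.Moved)
                    {
                        if (dragging || Vector2.Distance(touch.Position, pressPosition) > 15f)
                        {
                            dragging = true;
                            // slide the map here (the "a" case)
                        }
                    }
                    else if (touch.State == TouchLocationState.Released && !dragging)
                    {
                        // finger lifted without dragging: treat as a tap (the "b" case)
                    }
                }
            }
        }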

    Read the article

  • Fusion Middleware Summer Camps & advanced partner trainings

    - by JuergenKress
    For Specialized partners who are working on the following projects & opportunities, we offer these advanced summer camps: BPM Suite 11, ADF 11g, WebCenter Portal, WebLogic 12c, SOA Suite 11g, ADF for BPM Suite 11, WebCenter Sites 11g. All training sessions will be delivered by HQ product management and our PTS team. The sessions will take place in July in Lisbon, Portugal and Munich, Germany. Participation is limited to two people per company and bootcamp. Registration is handled on a first come, first served basis; please pay attention to the skill requirements, the prerequisites and the follow-up! We will not accept people onto the training who do not match the criteria! Lisbon: Monday, July 9th 11:00 AM - Friday July 13th 16:00 (Lisbon time): BPM Suite 11g advanced training by David Read; ADF 11g advanced training by Grant Ronald and Frank Nimphius; WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata; WebLogic 12c training by Cosmin Tudor. Munich: Monday, July 16th 11:00 AM - Wednesday July 18th 16:00 (CET): SOA Suite 11g advanced training by Niall Commiskey; ADF for BPM Suite 11g advanced training by David Read; WebCenter Sites 11g advanced training by Product Management & PTS. Cost: Free of charge; cancellation or no-show fee 2.000€. Bootcamps are limited to 20 persons, first come, first served. For details and registration please visit the Lisbon registration page & the Munich registration page. Quotes summer camps 2011 "From zero to hero with this BPM workshop" - Steven Boon, Ordina. "This is the training that prepares for real projects and POCs" - Jon Petter Hjulstad, eVita - blog & twitter Impressions summer camps 2011 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: summer camps,OFM summer camps,training,education,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Fusion Middleware Summer Camps - advanced partner trainings

    - by JuergenKress
    For Specialized partners who are working on the following projects & opportunities, we offer these advanced summer camps: BPM Suite 11, ADF 11g, WebCenter Portal, WebLogic 12c, SOA Suite 11g, ADF for BPM Suite 11, WebCenter Sites 11g. All training sessions will be delivered by HQ product management and our PTS team. The sessions will take place in July in Lisbon, Portugal and Munich, Germany. Participation is limited to two people per company and bootcamp. Registration is handled on a first come, first served basis; please pay attention to the skill requirements, the prerequisites and the follow-up! We will not accept people onto the training who do not match the criteria! Lisbon: Monday, July 9th 11:00 AM - Friday July 13th 16:00 (Lisbon time): BPM Suite 11g advanced training by David Read; ADF 11g advanced training by Grant Ronald and Frank Nimphius; WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata; WebLogic 12c training by Cosmin Tudor. Munich: Monday, July 16th 11:00 AM - Wednesday July 18th 16:00 (CET): SOA Suite 11g advanced training by Niall Commiskey; ADF for BPM Suite 11g advanced training by David Read; WebCenter Sites 11g advanced training by Product Management & PTS. Cost: Free of charge; cancellation or no-show fee 2.000€. Bootcamps are limited to 20 persons, first come, first served. For details and registration please visit the Lisbon registration page & the Munich registration page. Quotes summer camps 2011 "From zero to hero with this BPM workshop" - Steven Boon, Ordina. "This is the training that prepares for real projects and POCs" - Jon Petter Hjulstad, eVita - blog & twitter Impressions summer camps 2011 WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic,WebLogic Community,Java Message Service,Java Spring,Summer Camps,education

    Read the article

  • ORACLE Fusion Middleware Summer Camps in Lisbon: Includes Advanced ADF Training by Oracle Product Management

    - by Frank Nimphius
    From July 9th to July 13th 2012, Frank Nimphius and Grant Ronald from the JDeveloper and ADF Product Management team present a 4 1/2 day advanced ADF training in Lisbon for the EMEA Oracle SOA Community. The training runs in parallel with an advanced WebCenter training, giving you a chance to network with your peers during breaks and lunch. See the announcement below by the ORACLE EMEA SOA Community for details: For Specialized partners who are working on the following projects & opportunities, we (Oracle) offer these advanced summer camps: ADF 11g, WebCenter Portal, SOA Suite 11g, ADF for BPM Suite 11, WebCenter Sites 11g. All training sessions will be delivered by HQ product management and our PTS team. The sessions will take place in July in Lisbon, Portugal and Munich, Germany. Participation is limited to two people per company and bootcamp. Registration is handled on a first come, first served basis; please pay attention to the skill requirements, the prerequisites and the follow-up! We will not accept people onto the training who do not match the criteria! Lisbon: Monday, July 9th 11:00 AM - Friday July 13th 16:00 (Lisbon time): ADF 11g advanced training by Grant Ronald and Frank Nimphius; WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata. Cost: Free of charge; cancellation or no-show fee 2.000€. Bootcamps are limited to 20 persons, first come, first served. For details and registration please visit the Lisbon registration page.

    Read the article

  • Project Euler 12: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here's my solution for Project Euler Problem 12. As always, any feedback is welcome.

        # Euler 12
        # http://projecteuler.net/index.php?section=problems&id=12
        # The sequence of triangle numbers is generated by adding
        # the natural numbers. So the 7th triangle number would be
        # 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms
        # would be:
        # 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
        # Let us list the factors of the first seven triangle
        # numbers:
        # 1: 1
        # 3: 1,3
        # 6: 1,2,3,6
        # 10: 1,2,5,10
        # 15: 1,3,5,15
        # 21: 1,3,7,21
        # 28: 1,2,4,7,14,28
        # We can see that 28 is the first triangle number to have
        # over five divisors. What is the value of the first
        # triangle number to have over five hundred divisors?
        import time
        start = time.time()

        from math import sqrt

        def divisor_count(x):
            count = 2  # itself and 1
            for i in xrange(2, int(sqrt(x)) + 1):
                if (x % i) == 0:
                    if i != sqrt(x):
                        count += 2
                    else:
                        count += 1
            return count

        def triangle_generator():
            i = 1
            while True:
                yield int(0.5 * i * (i + 1))
                i += 1

        triangles = triangle_generator()
        answer = 0
        while True:
            num = triangles.next()
            if divisor_count(num) >= 501:
                answer = num
                break

        print answer
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

    Read the article

  • Welcome to the Red Gate BI Tools Team blog!

    - by Red Gate Software BI Tools Team
    Welcome to the first ever post on the brand new Red Gate Business Intelligence Tools Team blog! About the team Nick Sutherland (product manager): After many years as a software developer and project manager, Nick took an MBA and turned to product marketing. SSAS Compare is his second lean startup product (the first being SQL Connect). Follow him on Twitter. David Pond (developer): Before he joined Red Gate in 2011, David made monitoring systems for Goodyear. Follow him on Twitter. Jonathan Watts (tester): Jonathan became a tester after finishing his media degree and joining Xerox. He joined Red Gate in 2004. Follow him on Twitter. James Duffy (technical author): After a spell as a writer in the video game industry, James lived briefly in Tokyo before returning to the UK to start at Red Gate. What we’re working on We launched a beta of our first tool, SSAS Compare, last month. It works like SQL Compare but for SSAS cubes, letting you deploy just the changes you want. It’s completely free (for now), so check it out. We’re still working on it, and we’re eager to hear what you think. We hope SSAS Compare will be the first of several tools Red Gate develops for BI professionals, so keep an eye out for more from us in the future. Why we need you This is your chance to help influence the course of SSAS Compare and our future BI tools. If you’re a business intelligence specialist, we want to hear about the problems you face so we can build tools that solve them. What do you want to see? Tell us! We’ll be posting more about SSAS Compare, business intelligence and our journey into BI in the coming days and weeks. Stay tuned!

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. A small digging up: it was one system. It was good, but the IT guys wanted to change it to a new one - much better, cheaper, more flexible, more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first) and turn on the new one (call it B, because it is the second but not the last one). The A has hundreds of users all across the country, and they must learn B. A still has a lot of nice custom features, home-made features that cannot disappear. These features have to be moved to B, and that is a long process, months and months of redevelopment. So, the decision was simple. Let's move, not jump; let's have both systems working side by side for several months. In this time we could teach the users and move all of A's special custom functionality to B. That automatically means both systems should work side by side all these months and use the same data. Data in A and B must be in sync. That's how integration projects are born. Moreover, the specifics of the user tasks require that both systems be in sync in real time. Nightly synchronization absolutely would not work.

    First draft

    The first draft seems simple. Both systems keep data in SQL databases. When data changes - the Create, Update, Delete operations are performed on the data - the sync process can be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use BizTalk Server to synchronize the systems. Why it was chosen is another story.

    Second draft

    Let's take an example of how it works in more detail.
    1. A user creates a new entity in the A system. This fires an insert trigger on the entity table. The trigger has to pass the message "Entity created". This message includes all attributes of the new entity, but I will focus on the Id of this entity in the A system. The notation for this message is id.A. System A sends id.A to the BizTalk Server.
    2. BizTalk transforms id.A to the format of system B. This is the easiest part, and I will not focus on this kind of transformation in the following text. The message in the picture is still id.A, but it is in a slightly different format; that's why it changes color. BizTalk sends id.A to system B.
    3. System B creates the entity on its side. But it uses different id-s for entities; these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to BizTalk.
    4. BizTalk sends the message id.A+id.B to system A.
    5. System A saves id.A+id.B.
    Why should both id-s be saved on both systems? It was one of the next requirements. Users of both systems have to know whether the systems are in sync or not. Users working with an entity in system A can see the id.B and use it to switch to system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation).

    Third draft

    The next problem was the reliability of the synchronization. The synchronizing process can be interrupted at each step, when a message goes through the wires.
    It can be a communication problem, a timeout, a temporary shutdown of one of the systems, or the second system cannot be synchronized for some internal reason. There were several potential problems that prevented enclosing the whole synchronization process in one transaction. The decision was to restart the whole sync process if it was not finished (in case of an error). For this purpose an additional service was created. Let's call it the Resync service. We still keep the id pairs in both systems, but only for fast access, not for the synchronization process. For synchronizing, these id-s are now kept in one main place, the Resync service database. The Resync service keeps records as:
    · Id.A
    · Id.B
    · Entity.Type
    · Operation (Create, Update, Delete)
    · IsSyncStarted (true/false)
    · IsSyncFinished (true/false)
    The example now looks like:
    1. System A creates id.A. id.A is saved on A. id.A is sent to BizTalk.
    2. BizTalk sends id.A to the Resync and to B. id.A is saved on the Resync.
    3. System B creates id.B. id.A+id.B are saved on B. id.A+id.B are sent to BizTalk.
    4. BizTalk sends id.A+id.B to the Resync and to A. id.A+id.B are saved on the Resync.
    5. id.A+id.B are saved on B.
    The Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods:
    · Save (id.A, Entity.Type, Operation)
    · Save (id.A, id.B, Entity.Type, Operation)
    · Resync ()
    The two Save() methods are used to save id-s to the service storage; see steps 2 and 4 in the above example. What about Resync()? It is the method that finishes the interrupted synchronization processes. While Save() is started by the trigger event, Resync() works as an independent process. It periodically scans the Resync storage to find "unfinished" records, then restarts the synchronization processes. It tries to synchronize them several times, then gives up. One more thing: both systems A and B must tolerate duplicates of one synchronizing process. Say on step 3 system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process, which will send the id.A to B a second time. In this case system B must just send back the already created id.A+id.B pair again, without errors. That is what "tolerate duplicates" means.

    Fourth draft

    The next draft was created only because of aesthetics. As always happens, the aesthetics gave a significant performance gain to the whole system. First there was the stupid question: why do we need this additional service with a special database? Can we just get BizTalk to do what this Resync() does? So the Resync orchestration does the same thing as the Resync service. It is started by the id.A message and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message. Here is a diagram of the whole process without errors. It is pretty straightforward. The Resync orchestration waits for the Finish message for a specific period of time, then resubmits the id.A message. It resubmits the id.A message a specific number of times, then gives up and gets suspended. It can be resubmitted, and then it starts the whole process again: waiting [, resubmitting [, getting suspended]], finishing.

    Tuning up

    The Resync orchestration resubmits the id.A message with a special "Resubmitted" flag. The subscription filter on the Resync orchestration includes a predicate like (Resubmit_Flag != "Resubmitted").
    That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestrations instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync. Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B responded with the id.A+id.B, and this finished the Resync service execution. What is interesting about this: several identical id.A messages were submitted, but only one id.A+id.B message. Because of this, system B and the Resync must tolerate duplicate messages. We already stated this requirement for system B; now the same requirement applies to the Resync. Let's assume system B was very slow with the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as in the previous case, with one id.A+id.B but with two id.A+id.B messages. The first of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where does it go? So, we have to add one more internal requirement: the whole solution must tolerate many identical id.A+id.B messages. It is an easy task with BizTalk. I added a "SinkExtraMessages" subscriber (an orchestration with one receive shape) that just receives these messages and does nothing.

    Real design

    The real architecture is much more complex and interesting. In reality each system can submit several id.A messages almost simultaneously and completely unordered. There is not only the "Create entity" operation but also the Update and Delete operations, and these operations relate to each other: an Update after a Delete does not mean the same as an Update after a Create. In reality there are entities related to each other - say, the Order and Order Items. A change to one of them can start a series of operations on the other. Moreover, the system internals are "black boxes" and we cannot predict the exact content and order of the operation series. It is worth saying I had to spend time managing the zombie message problems. The zombies are still here, but they are not a problem now. And that is another story. What is interesting in the last design? One orchestration works to help another be more reliable. Why is a two-orchestration design more reliable; isn't that something strange? The Sync orchestration handles all the message exchange between the systems - this is the area where most of the errors can happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All the Resync functionality could be implemented inside the Sync orchestration. Hey guys, any other ideas?
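    To make this design concrete, here is a minimal C# sketch of the Resync record and service surface described above (all names are illustrative; the real implementation lived in the Resync database and BizTalk artifacts, not in this exact code):

        // Sketch of the Resync storage record and the three main methods.
        public enum SyncOperation { Create, Update, Delete }

        public class ResyncRecord
        {
            public string IdA { get; set; }              // entity id in system A
            public string IdB { get; set; }              // entity id in system B (empty until step 4)
            public string EntityType { get; set; }
            public SyncOperation Operation { get; set; }
            public bool IsSyncStarted { get; set; }
            public bool IsSyncFinished { get; set; }
        }

        public interface IResyncService
        {
            // step 2: sync started, only id.A is known yet
            void Save(string idA, string entityType, SyncOperation operation);

            // step 4: system B answered with the id.A+id.B pair, sync finished
            void Save(string idA, string idB, string entityType, SyncOperation operation);

            // independent process: periodically rescans unfinished records and
            // restarts their synchronization, giving up after several attempts
            void Resync();
        }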

    Read the article

  • Booting sequence. Ubuntu 12.04 installation and cohabitation with former OSes

    - by Stephane Rolland
    I am on the brink of installing Ubuntu 12.04 Precise Pangolin on the first primary partition of my hard drive. (A day in history for me, since I had always kept MS Windows in that first place.) But I have some fears: This is my last computer available (in the past I used to have 2 or even 3 machines so I could always un/plug hard drives for recovery and rescue operations). The current booting sequence is not straightforward. So as to explain the boot sequence, let me briefly sum up the history of this laptop computer. It was a dedicated Windows Vista computer: first and only primary partition. Then I added Windows 7 (on the second primary partition), letting the Windows Vista boot loader manage the boot sequence. Then I added Ubuntu 10.04 Lucid Lynx on the first sub-partition of the extended partition, asking GRUB to be the boot loader. But when I ask GRUB to launch Windows, it launches the Vista boot loader, which manages the choice between Vista and 7. So in theory GRUB is on the Master Boot Record - though I understand where the Vista boot loader remains. Now, I will no longer use the Ubuntu 10.04 (on the extended partition) and also the Windows Vista (on the first primary partition). I will install Ubuntu 12.04 on the first primary partition, asking it to install a new boot loader. I want to keep the Windows 7 that is already on the second primary partition. And I want it to be loaded by the Ubuntu boot loader (I don't know which one is included in this version)... And I am afraid the last point will not work.

    Read the article

  • Join Domain and DOS App

    - by Austin Lamb
    OK, so first off: yes, I have read all the related topics, and those fixes are either out of date or don't work. I am running Ubuntu 12.04 and I would like to add it to the Windows Server 2008 network. After I get that done, I would like to mount the F:\ drive of the server somewhere on my Linux machine where it can be identified as drive F:\ by Wine or DOSEMU. If I can achieve all of that, I need to find out how to run an MS-DOS 16-bit point-of-sale graphic program in Ubuntu, whether that be through Wine, DOSEMU, or DOSBox. It does not matter; it just has to be able to read and write to the server's F: drive, operate the DOS app, and support LPT1 (I think) for printing receipts and loading tickets. I am a decently knowledgeable Windows tech - at least that's what my job description says - but this is my first encounter with Linux in a work environment. It could prove to be very experience-changing if I can just prove it as a practical theory and a reasonable solution, and get it to work. The first step is to get it joined to the domain. I have Likewise Open (CLI and GUI versions), Samba, and GADMIN-SAMBA installed in attempts to get any of them to work. Any help in any area is greatly appreciated, especially with the domain joining, since it is the first step and what I thought would be the easiest step...

    Read the article

  • Unity throws SynchronizationLockException while debugging

    - by pjohnson
    I've found Unity to be a great resource for writing unit-testable code, and tests targeting it. Sadly, not all those unit tests work perfectly the first time (TDD notwithstanding), and sometimes it's not even immediately apparent why they're failing. So I use Visual Studio's debugger. I then see SynchronizationLockExceptions thrown by Unity calls, when I never did while running the code without debugging. I hit F5 to continue past these distractions, the line that had the exception appears to have completed normally, and I continue on to what I was trying to debug in the first place. In settings where Unity isn't used extensively, this is just one amongst a handful of annoyances in a tool (Visual Studio) that overall makes my work life much, much easier and more enjoyable. But in larger projects, it can be maddening. Finally it bugged me enough that it was worth researching. Amongst the first and most helpful Google results was, of course, one at Stack Overflow. The first couple of answers were extensive but seemed a bit more involved than I could pull off at this stage in the product's lifecycle. A bit more digging showed that the Microsoft team knows about this bug but hasn't prioritized it into any released build yet. SO users jaster and alex-g proposed workarounds that relieved my pain - just go to Debug|Exceptions..., find the SynchronizationLockException, and uncheck it. As others warned, this will skip over SynchronizationLockExceptions in your code that you want to catch, but that wasn't a concern for me in this case. Thanks, guys; I've used that dialog before, but it's been so long I'd forgotten about it. Now if I could just do the same for Microsoft.CSharp.RuntimeBinder.RuntimeBinderException... Until then, F5 it is.

    Read the article

  • Prioritize compiler functionality/tasks, when designing a new language

    - by Mahdi
    Well, this question may be hard to ask, and I expect a couple of down-votes; however, I'm really interested in your ideas and recommendations. :) I've already made a very simple compiler, with a few limited features. Now I'm getting back to it to make it more like a real-world compiler. I definitely need to start over, because I have much more experience and ideas in this area than I did a few years ago. So, I want to know, right now, from the very first step again: which tasks/features should I implement first for the new compiler, and which tasks have lower priority than others? For example, I'd say first I'd decide about the object-oriented structure for the new language; but you might say: hey, just go for a compiler that can define a variable, and when you've finished that, then start thinking about OOP designs... I'd also prefer to hear the pros and cons of your suggestions. Actually I like to start from bottom to top, where I can add the simplest tasks first and later add more complex ones, but I'm totally open to any new ideas, and I really appreciate them. Also please consider that I'm thinking about the design concepts. Actually I expect answers like: Priority from highest to lowest: variables, because ...; functions, because ...; loops, because ...; ... Not: define a syntax for your new language, and start parsing your source code ...

    Read the article

  • How do I get my graphics card to work properly?

    - by Lucas
    I've been having some problems with my graphics card for some time now, but I had enough when I didn't get Oil Rush to work on my HP Pavilion g6. The system has suggested hardware drivers for me, but the first time I installed them they pretty much messed up the graphics. After some time I managed to get the computer to work properly (I thought) again. When the game didn't work, I tried the hardware drivers for the graphics card anyway. First of all, there were two possible choices instead of one, as there were the last time I installed the drivers (when it didn't work out so well). The choices are: ATI/AMD's proprietary video drivers FGLRX (update for edition) and Proprietary FGLRX video drivers for ATI/AMD. I realized the drivers are probably pretty much the same, so I tried the first one. But this didn't work. Instead I was asked to "look into /usr/var/log/jockey.log". This didn't help me much. So I chose the other one, which was installed, and after a reboot there were some changes. First of all, there was a lot more detail in Unity that wasn't there before, and some shortcut keys are now working that didn't before (like Ctrl + T and the Prt Sc button). But overall everything doesn't work as it used to. Like when you browse between the workspaces, it doesn't look the same. To get to the point: it doesn't work well right now, even if some things got better, and now (as I mentioned in the beginning) Oil Rush will not even start. SO! Can someone give me any advice on this? I'm stuck and can't manage to see what's wrong right now. Any help? My graphics card is an AMD Radeon HD 6470M.

    Read the article

  • Special Value Sets in Oracle Applications

    - by Manoj Madhusoodanan
    Here I am going to explain Special Value Sets in Oracle Applications. I have a requirement in which I want to execute a BIP report with some parameters. The first parameter, Current Month, should allow only the MON-YYYY format. Schedule Start Date and Schedule End Date should be within the first parameter's month.

    Approach 1
    If the report is through a PL/SQL stored procedure executable, then we can do all the validation in the backend.

    Approach 2
    The second approach is through Special Value Sets. This value set has events like Edit, Load and Validate. We can attach a PL/SQL code snippet to each event. Here I am going to attach a validation routine to the Validate event to validate the user input. The Validate event fires when the focus leaves the item. I am going to create two Special Value Sets (one for the first parameter and another for the second and third parameters).

    Value Set 1
        Name: XXCUST_CURRENT_MONTH
        List Type: List of Values
        Format Type: Char
        Maximum Size: 8
        Validation Type: Special
        Event: Validate
        Function: XXCUST_CURRENT_MONTH_VALIDATE_ROUTINE

    Value Set 2
        Name: XXCUST_DATES
        List Type: List of Values
        Format Type: Standard Date
        Validation Type: Special
        Event: Validate
        Function: XXCUST_DATES_VALIDATE_ROUTINE

    Note: Inside the validate routines I am using FND messages. Also generate the message file, using "FNDMDGEN apps/password 0 Y US XXCUST DB_TO_RUNTIME". Attach XXCUST_CURRENT_MONTH to the first parameter and XXCUST_DATES to the second and third parameters.

    Note: Since the program is using Special Value Sets, it can be submitted only through Oracle Forms. Submission through OA Framework and PL/SQL APIs is not recommended.

    Output
    Give Current Date as 01-2012. Give Schedule Start Date out of the current month.

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we’ve looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code – expression trees. All the callsite binder has to return is an expression (called a ‘restriction’) that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn’t. If the bind result is also represented in an expression tree, these can be combined easily like so:

        if ([restriction is true])
        {
            [invoke cached method]
        }

    Take my example from my previous post:

        public class ClassA
        {
            public static void TestDynamic()
            {
                CallDynamic(new ClassA(), 10);
                CallDynamic(new ClassA(), "foo");
            }

            public static void CallDynamic(dynamic d, object o)
            {
                d.Method(o);
            }

            public void Method(int i) {}
            public void Method(string s) {}
        }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

        if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int))
        {
            thisClassA.Method(i);
        }

    Caching callsite results

    So, now, it’s up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one of the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It’ll help if you’ve got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved: The last method invoked on the callsite. All methods that have ever been invoked on the callsite. All methods that have ever been invoked on any callsite of the same type. We’ll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren’t any entries in the caches to reuse, and invokes the binder to resolve a new method.
    Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I’m assuming there’s no return value):

        if ([restriction is true])
        {
            [invoke cached method]
            return;
        }
        if (callSite._match)
        {
            _match = false;
            return;
        }
        else
        {
            UpdateAndExecute(callSite, arg0, arg1, ...);
        }

    Woah. What’s going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code, compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the ‘primary’ method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast. But what if the restrictions don’t match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it’s slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache. Why use the same delegate? Stitching together expression trees is an expensive operation. You don’t want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder’s result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation: an ‘invoke’ mode, for when the delegate is set as the value of the Target field, and a ‘match’ mode, used when UpdateAndExecute is searching for a method in the callsite’s cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>. The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions.
    This allows the code within each UpdateAndExecute method to check for cache matches like so:

        foreach (T cachedDelegate in Rules) {
            callSite._match = true;
            cachedDelegate(); // sets _match to false if restrictions do not pass
            if (callSite._match) {
                // passed restrictions, and the cached method was invoked;
                // set this delegate as the primary target to invoke next time
                callSite.Target = cachedDelegate;
                return;
            }
            // no luck, try the next one...
        }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear – if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the ‘global’ cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. It is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite’s rules list and set as the callsite’s Target before being executed.

    Putting it all together

    So, how does this all fit together? [The original post walks through the combined UpdateAndExecute pseudocode at this point; some implementation & performance details are omitted.] That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code: extensive use of expression trees, compiled to IL and then into native code; multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked; and a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
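    Extending the earlier sketch to all three levels, the search order described above looks roughly like this. Again, these are hypothetical simplified types, and for readability the sketch signals a failed restriction with a bool return value instead of the _match field protocol:

        using System;
        using System.Collections.Generic;

        // A rule returns false when its restriction fails, true when it
        // passed (and the cached method was invoked).
        delegate bool Rule(object arg);

        class CachingCallSite
        {
            public Rule Target;                              // L1: last rule that ran
            readonly List<Rule> rules = new List<Rule>();    // L2: this callsite's rules
            static readonly List<Rule> globalRules
                = new List<Rule>();                          // L3: shared per signature

            public CachingCallSite()
            {
                Target = a => false; // nothing cached yet
            }

            public void Invoke(object arg)
            {
                if (Target(arg)) return;              // L1 hit

                foreach (var rule in rules)           // L2: this site's history
                    if (rule(arg)) { Target = rule; return; }

                foreach (var rule in globalRules)     // L3: the global cache
                    if (rule(arg)) { rules.Add(rule); Target = rule; return; }

                Bind(arg);                            // full miss: invoke the binder
            }

            void Bind(object arg)
            {
                var bound = arg.GetType();
                Console.WriteLine("full bind for " + bound.Name);
                Rule rule = a =>
                {
                    if (a.GetType() != bound) return false; // restriction check
                    Console.WriteLine("invoke Method(" + a + ")");
                    return true;
                };
                rules.Add(rule);
                globalRules.Add(rule);
                Target = rule;
                rule(arg);
            }
        }

    With two CachingCallSite instances of the same signature, the second site never pays for a full bind on a type the first site has already bound – it finds the rule in the shared list, promotes it into its own caches, and runs it, which is exactly the behaviour described above.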

    Read the article

  • Entity Framework and distributed systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint in the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with the code-first approach, and an ASP.NET Web API application in C# that is currently being refactored. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about: Shared data access DLL: the first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same, as long as the DLL is up to date in each system. This way, both solutions would be able to run a migration. The main problem is that updating the Web API system is much more complicated than updating the client (which uses a ClickOnce update solution), and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries. Database-first on the Web API: the second idea was to just use the database-first approach on the Web API side. But it seems that all annotations are lost with each model update. Other solutions based on stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflicts, or got any advice on how such a problem can be solved?
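    For reference, the shared data access DLL idea usually looks something like this minimal sketch (hypothetical entity and context names, EF 5 code-first style), with the class library referenced by both the desktop client and the Web API project:

        // Shared.Data.dll – referenced by both applications.
        using System.Data.Entity; // Entity Framework 5

        namespace Shared.Data
        {
            public class Customer
            {
                public int Id { get; set; }
                public string Name { get; set; }
            }

            public class ShopContext : DbContext
            {
                public ShopContext() : base("name=ShopDb") { }

                public DbSet<Customer> Customers { get; set; }
            }
        }

    One common way to handle the "who runs the migration" question is to let whichever application starts first migrate the database to the model in the shared assembly, via Database.SetInitializer(new MigrateDatabaseToLatestVersion<ShopContext, Configuration>()) – though, as the question notes, this only helps if every deployment keeps the DLL in sync.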

    Read the article

  • Welcome to the Red Gate BI Tools Team blog!

    - by BI Tools Team
    Welcome to the first ever post on the brand new Red Gate Business Intelligence Tools Team blog! About the team Nick Sutherland (product manager): After many years as a software developer and project manager, Nick took an MBA and turned to product marketing. SSAS Compare is his second lean startup product (the first being SQL Connect). Follow him on Twitter. David Pond (developer): Before he joined Red Gate in 2011, David made monitoring systems for Goodyear. Follow him on Twitter. Jonathan Watts (tester): Jonathan became a tester after finishing his media degree and joining Xerox. He joined Red Gate in 2004. Follow him on Twitter. James Duffy (technical author): After a spell as a writer in the video game industry, James lived briefly in Tokyo before returning to the UK to start at Red Gate. What we're working on We launched a beta of our first tool, SSAS Compare, last month. It works like SQL Compare but for SSAS cubes, letting you deploy just the changes you want. It's completely free (for now), so check it out. We're still working on it, and we're eager to hear what you think. We hope SSAS Compare will be the first of several tools Red Gate develops for BI professionals, so keep an eye out for more from us in the future. Why we need you This is your chance to help influence the course of SSAS Compare and our future BI tools. If you're a business intelligence specialist, we want to hear about the problems you face so we can build tools that solve them. What do you want to see? Tell us! We'll be posting more about SSAS Compare, business intelligence and our journey into BI in the coming days and weeks. Stay tuned!

    Read the article

  • Booting from a USB stick

    - by Zap
    I am trying to boot from a USB stick. I have carefully followed the instructions at the following link and successfully downloaded and installed Ubuntu 12.04 Desktop: http://www.ubuntu.com/download/help/create-a-usb-stick-on-windows I used the Universal-USB-Installer-1.9.0.2 as instructed and chose the "Ubuntu 12.04 Desktop" option, after downloading the respective iso/zip file onto my Dell laptop from the Ubuntu site. I also modified my BIOS to select the USB as the first boot drive instead of the hard drive, and turned off BitLocker on my laptop and USB stick. The USB stick has the setting "Automatically unlock this drive on this computer". When I reboot my laptop, it first boots into a black screen (I assume this is the BIOS) with a prompt saying "Remove disks or other media. Press any key to start". I press a key, and the laptop boots up to Windows regardless. Hence, it appears that the boot process is checking the USB first before going to the hard drive to look for its boot disk and starting Windows 7. Is it that the USB stick is not correctly configured with Ubuntu as a boot disk? Is there anything else I need to do besides the instructions at the link above? How can I ensure that the USB boot stick is configured correctly? After running the Universal-USB-Installer-1.9.0.2 to "install" Ubuntu, are there additional configuration/installation steps? What is the first file that the BIOS would look for on this USB drive? Is this configured somewhere in the BIOS, or would it just look for a GRUB file or /boot directory? The only message I get when booting is "Remove disks or other media. Press any key to start". Any and all help would be much appreciated. Thanks :)

    Read the article

  • Searching for an entity-awareness algorithm and data structure for 3D space

    - by Khanser
    I'm trying to build a huge AI system just for fun, and I've come up against this problem: how can I let the AI entities know about each other without making the CPU perform redundant and costly work? Every entity has a spatial awareness zone, and it has to know what's inside that zone when it has to decide what to do. First thought: for every entity, test whether the other entities are inside the first one's reach. OK, that was the first try, and yep, it is redundant and costly. We are working with real-time AI over 10,000+ entities, so this is not a solution. Second try: calculate a grid over the awareness zone of every entity and test whether there are entities in those zones (we are working with 3D entities with float x,y,z location coordinates), testing every point in the grid against the entities indexed by coordinate. I don't like this one either, because it is also costly – though not as costly as the first. Third: create multi-linked lists over the x- and y-indexed positions of the entities, so that when we search the multi-linked list for an interval between the x,y and z,w positions (this interval defines the square over the spatial awareness zone), we won't have 'voids'. This has the problem of finding the nearest value when there isn't one at the position where we start the search. I'm not convinced by any of these ideas, so I'm searching for some enlightenment. Do you people have any better ideas?
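    A structure often suggested for exactly this kind of query is a uniform spatial hash: bucket entities into fixed-size cells, so an awareness query only visits the handful of cells overlapping the query sphere instead of all 10,000+ entities. A minimal sketch in modern C# (hypothetical types, single-threaded, positions inserted once per frame):

        using System;
        using System.Collections.Generic;

        struct Vec3
        {
            public float X, Y, Z;
            public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
        }

        class SpatialHash
        {
            readonly float cellSize;
            readonly Dictionary<(int, int, int), List<Vec3>> cells
                = new Dictionary<(int, int, int), List<Vec3>>();

            public SpatialHash(float cellSize) { this.cellSize = cellSize; }

            (int, int, int) CellOf(Vec3 p)
            {
                return ((int)Math.Floor(p.X / cellSize),
                        (int)Math.Floor(p.Y / cellSize),
                        (int)Math.Floor(p.Z / cellSize));
            }

            public void Insert(Vec3 p)
            {
                var key = CellOf(p);
                if (!cells.TryGetValue(key, out var list))
                    cells[key] = list = new List<Vec3>();
                list.Add(p);
            }

            // Entities within `radius` of `centre`: visit only the cells
            // overlapping the query sphere's bounding box.
            public IEnumerable<Vec3> Query(Vec3 centre, float radius)
            {
                var (x0, y0, z0) = CellOf(new Vec3(centre.X - radius, centre.Y - radius, centre.Z - radius));
                var (x1, y1, z1) = CellOf(new Vec3(centre.X + radius, centre.Y + radius, centre.Z + radius));
                for (int x = x0; x <= x1; x++)
                for (int y = y0; y <= y1; y++)
                for (int z = z0; z <= z1; z++)
                    if (cells.TryGetValue((x, y, z), out var list))
                        foreach (var p in list)
                        {
                            float dx = p.X - centre.X, dy = p.Y - centre.Y, dz = p.Z - centre.Z;
                            if (dx * dx + dy * dy + dz * dz <= radius * radius)
                                yield return p;
                        }
            }
        }

    With awareness radii roughly matching the cell size, each query touches a small constant number of cells, so the per-frame cost grows with entity count rather than its square; octrees or k-d trees are the usual next step if entity density varies a lot.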

    Read the article

  • Dual-booted Windows 7 freezes after login screen

    - by Cathal
    First-time Linux user, on a Packard Bell EasyNote TS laptop. My problem arose after I dual-boot installed Ubuntu 12.04 alongside Windows 7 via Wubi. I backed up all my data and reinstalled Windows from factory settings using the recovery partition. When I first tried to install Ubuntu, I mistakenly closed the lid at the start of the installation, stopping it. After that I rebooted, and my second installation attempt went without a hitch. Ubuntu works perfectly, and the data on the partitions seems to be fine. My problem is that I can't log back in to Windows 7. After selecting it in GRUB, and then in the Windows 7 / Wubi choice on boot, it loads up perfectly until the user login screen. After the password is entered, it stalls on the "Welcome" busy screen. This happens in Safe Mode as well. Startup Repair can't find a problem, and neither can CHKDSK. System Restore and Last Known Good Configuration have no effect either. If anyone could help me out, I'd be really grateful. Edit, in response to the question below (since I don't know how to comment): Windows was installed first, and its partitions are the first on the list. Should I move the Windows partitions to after the Linux ones on the disk? Thanks for your help.

    Read the article

  • How do you balance upper- and lower-case styles when naming files and folders between work and life? [on hold]

    - by sojyq
    I am a programmer from China, and I like to use English words to name my files and folders, whether for work or life – for example, Movie, Work, QtProjects, Music, and so on. In Windows, I keep the habit of capitalising the first letter of a file or folder name. But now I work on Ubuntu, and I've found that all file and folder names are lowercase, apart from default folders such as Music and Movies. I've come to realise that in the Linux world, most people like to name their files and folders in all lowercase, for two reasons: 1. Linux is case sensitive. 2. It is faster for shell commands. After work, when I switch from Linux back to Windows, I'm torn between all-lowercase and first-letter-uppercase naming. I'm caught in a dilemma: I think all lowercase is more efficient, but first-letter uppercase is more readable. I thought about it for a long time and wanted to come up with a good way to balance the two naming styles, but I failed. How do you balance the uppercase/lowercase habit across Windows, Mac, and Linux, between work and personal life? Thank you very much! (My current solution is that when I am in Linux I use all lowercase for files and folders, but in Windows and Mac OS X I can't find a good reason to convince myself to use all lowercase – I think the first-letter-uppercase style is more readable and beautiful there.)

    Read the article

  • Ubuntu server is dropping SSH connections, then not allowing me to log back on

    - by wilhil
    I have an ESX box onto which I have loaded two Ubuntu Server machines. During setup, I chose no additional packages, as I just wanted a lightweight machine for testing. The first thing I did was change the root password via sudo passwd. After ESX's lag got on my nerves, I installed OpenSSH via apt-get install openssh-server. It did its business, and I then opened PuTTY and could connect to both machines fine. The first time it connected, it asked me to add the SSH key, as it obviously did not know it. Anyway, the second server is working flawlessly, but the first seems to be giving me trouble. I was in the middle of typing a sentence when it kicked me off for no reason, and when I tried to reconnect, PuTTY gave me a warning that the SSH key had changed and that this is potentially dangerous. I attempted to log in anyway, and it did not work – just the standard access denied message. Using the second machine, I SSHed into the first machine and it worked straight away. I then killed the SSH sessions (and possibly the SSH server), reconnected via PuTTY, and again received the security warning message, but this time it allowed me to log on fine. I thought "glitch" and nothing more of it, but it just happened again! I really do not understand this, and was hoping someone here can help.

    Read the article
