Search Results

Search found 4866 results on 195 pages for 'ngu soon hui'.


  • Windows Azure and Server App Fabric – kinsmen or distant relatives?

    - by kaleidoscope
    If you are into Windows Azure then it would be rather demeaning to ask if you are aware of Windows Azure App Fabric. Just in case you are not - Windows Azure App Fabric provides a secure connectivity service by means of which developers can build distributed applications as well as services that work across network and organizational boundaries in the cloud. But some of you may have heard of another similar term floating around forums and blog posts - Windows Server App Fabric. The momentary déjà vu that you might have felt upon encountering it is not unheard of in Cloud Computing circles:

    http://social.msdn.microsoft.com/Forums/en/netservices/thread/5ad4bf92-6afb-4ede-b4a8-6c2bcf8f2f3f
    http://forums.virtualizationtimes.com/session-state-management-using-windows-server-app-fabric

    Many have fallen prey to this ambiguous nomenclature, but it's not without a purpose. First announced at PDC 2009, Windows Server AppFabric is a set of application services focused on improving the speed, scale, and management of Web, Composite, and Enterprise applications. Initially codenamed "Dublin", the app fabric (oops... Windows Server App Fabric) provides add-ons like monitoring, tracking, and persistence for your hosted workflows and services, without the developer having to worry about these functionalities. Along with this it also provides distributed in-memory caching features from Velocity. In short, it is a healthy equivalent of Windows Azure App Fabric minus the cloud part.

    So why bring this up while talking about Windows Azure? Well, apart from their similar last names, these powers are soon to be combined if Microsoft's roadmap is to be believed - "Together, Windows Server AppFabric and Windows Azure platform AppFabric provide a comprehensive set of services that help developers rapidly develop new applications spanning Windows Azure and Windows Server, and which also interoperate with other industry platforms such as Java, Ruby, and PHP." One of the most powerful features of Windows Server App Fabric is its distributed caching mechanism, which, if appropriately leveraged with Windows Azure App Fabric, could very well mean a revolution in session management techniques for the Azure platform. Well Microsoft, we do have our fingers crossed...

    Read on... http://blogs.technet.com/windowsserver/archive/2010/03/01/windows-server-appfabric-beta-2-available.aspx

    Read the article

  • Happy New Year! Back to school :)

    - by Jim Wang
    A brand new year is upon us and it’s time to get cracking with WebMatrix again…and go back to school :).  Last year we ran a successful product walkthrough for WebMatrix Beta 2 with our students from around the world, gathering awesome feedback for the final version of WebMatrix which is coming soon!  I’d like to take this chance to thank all the students who participated in this effort…you have really helped make the final product much better than it would have been otherwise. In 2011, we’re looking, as always, at bigger and better things.  One of the ideas that has been floating around is the concept of a WebMatrix college course that you could take for actual credit.  Of course, this is going to require coordination with college educators, but we think we’re up to the challenge :) If your school is still using an antiquated language to teach their web development 101 course, and you’d like to switch to WebMatrix, we’d like to hear your voice – better yet if you have contacts from your school and would like to be one of the first to give the program a try!  Comment on this post or email wptsdrext at microsoft.com.  We look forward to partnering with you guys ^^.

    Read the article

  • Positioning a sprite in XNA: Use ClientBounds or BackBuffer?

    - by Martin Andersson
    I'm reading a book called "Learning XNA 4.0" written by Aaron Reed. Throughout most of the chapters, whenever he calculates the position of a sprite to use in his call to SpriteBatch.Draw, he uses Window.ClientBounds.Width and Window.ClientBounds.Height. But then all of a sudden, on page 108, he uses PresentationParameters.BackBufferWidth and PresentationParameters.BackBufferHeight instead. I think I understand what the back buffer and the client bounds are and the difference between those two (or perhaps not?). But I'm mighty confused about when I should use one or the other when it comes to positioning sprites. The author mostly uses client bounds, both for checking whenever a moving sprite is off the screen and for finding a spawn point for new sprites. However, he seems to make two exceptions to this pattern in his book. The first is when he wants some animated sprites to "move in" and cross the screen from one side to another (page 108, as mentioned). The second and last is when he positions a texture to work as a button in the lower right corner of a Windows Phone 7 screen (page 379). Anyone got an idea? I shall provide some context if it is of any help. Here's how he usually calls SpriteBatch.Draw (code example from where he positions a sprite in the middle of the screen [page 35]):

        spriteBatch.Draw(texture,
            new Vector2(
                (Window.ClientBounds.Width / 2) - (texture.Width / 2),
                (Window.ClientBounds.Height / 2) - (texture.Height / 2)),
            null, Color.White, 0, Vector2.Zero, 1, SpriteEffects.None, 0);

    And here is the first of four possible cases in a switch statement that sets the position of soon-to-be-spawned moving sprites; this position will later be used in the SpriteBatch.Draw call (page 108):

        // Randomly choose which side of the screen to place enemy,
        // then randomly create a position along that side of the screen
        // and randomly choose a speed for the enemy
        switch (((Game1)Game).rnd.Next(4))
        {
            case 0: // LEFT to RIGHT
                position = new Vector2(
                    -frameSize.X,
                    ((Game1)Game).rnd.Next(0,
                        Game.GraphicsDevice.PresentationParameters.BackBufferHeight
                        - frameSize.Y));
                speed = new Vector2(((Game1)Game).rnd.Next(
                    enemyMinSpeed, enemyMaxSpeed), 0);
                break;
            // ... the other three cases follow
        }
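
    As a rough illustration of the difference, here is a minimal C# sketch (assuming a standard XNA 4.0 Game subclass; the class name and texture field are invented for the example). Window.ClientBounds measures the window's client area in desktop pixels, while the back buffer is the surface SpriteBatch actually renders to; by default the viewport covers the back buffer exactly, so the back-buffer dimensions always match the rendered image even when the two differ:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class PositionDemo : Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D texture;

            public PositionDemo() { graphics = new GraphicsDeviceManager(this); }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                texture = Content.Load<Texture2D>("sprite"); // hypothetical asset name
            }

            protected override void Draw(GameTime gameTime)
            {
                // Centre computed from the window's client area (desktop pixels).
                Vector2 viaClientBounds = new Vector2(
                    Window.ClientBounds.Width / 2 - texture.Width / 2,
                    Window.ClientBounds.Height / 2 - texture.Height / 2);

                // Centre computed from the back buffer (render-target pixels).
                var pp = GraphicsDevice.PresentationParameters;
                Vector2 viaBackBuffer = new Vector2(
                    pp.BackBufferWidth / 2 - texture.Width / 2,
                    pp.BackBufferHeight / 2 - texture.Height / 2);

                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(texture, viaBackBuffer, Color.White);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    The two vectors are identical until the window is resized or the back buffer is given a fixed size, at which point the back-buffer version keeps matching what is actually rendered - presumably why the book switches to it when spawning sprites relative to the drawable surface.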

    Read the article

  • How to "translate" interdependent object states in code?

    - by Earl Grey
    I have the following problem. My UI interface contains several buttons, labels, and other visual information. I am able to describe every possible workflow scenario that should be allowed on that UI. That means I can describe it like this - when button A is pressed, the following should follow. In the case of that A button, there are three independent factors that influence the possible result when pushing it: the state of the session (blank, single, multi, multi special), the actual work being done by the system at the moment of pressing the A button (nothing was happening, work was being done, work was paused), and a separate UI element that has two states (on, off). This gives me a three-dimensional cube with 24 possible outcomes. I could write code for this using if statements, switch statements, etc. ... but the problem is, I have another 7 buttons on that UI, I can enter this UI from different states, some buttons change the state, some change parameters... To sum up, the combinations are mind-boggling and I am not able to come up with a methodology that scales and is systematically reliable. I am able to describe EVERY workflow with words, and I am sure my description is complete and without logical errors. But I am not able to translate that into code. I was trying to draw flowcharts but it soon became visually too complicated due to too many branches. Can you advise how to proceed?
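
    One common way to tame this kind of combinatorial explosion is a table-driven state machine: encode each factor as an enum, key a dictionary on the tuple of factors, and register one handler per combination. A minimal C# sketch of that idea (all names invented for illustration):

        using System;
        using System.Collections.Generic;

        enum Session { Blank, Single, Multi, MultiSpecial }
        enum Work { Idle, Running, Paused }

        class ButtonAHandler
        {
            // One entry per meaningful combination; at most 4 x 3 x 2 = 24 for button A.
            readonly Dictionary<(Session, Work, bool), Action> handlers =
                new Dictionary<(Session, Work, bool), Action>();

            public ButtonAHandler()
            {
                handlers[(Session.Blank, Work.Idle, false)] = () => Console.WriteLine("start a new session");
                handlers[(Session.Single, Work.Running, true)] = () => Console.WriteLine("pause the work");
                // ... register the remaining combinations here
            }

            public void OnButtonA(Session session, Work work, bool toggleOn)
            {
                if (handlers.TryGetValue((session, work, toggleOn), out var handler))
                    handler();
                else
                    throw new InvalidOperationException(
                        $"Unhandled combination: {session}/{work}/{toggleOn}");
            }
        }

    Because every combination is explicit data rather than nested branching, each of the other 7 buttons gets its own table instead of deeper nesting, and any combination the written specification missed fails fast instead of silently falling through.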

    Read the article

  • J2EE or .Net Framework [closed]

    - by Kevino
    I want to learn Java or C#... Tell me the strengths and weaknesses of each platform, J2EE and .NET Framework, today in 2012, and which is safer job-wise for the future. I tend to prefer Java because here (Montreal, Toronto) there are something like 6 Java jobs for each C# job, and some experienced programmers advised me to go with Java because they say JVM languages are winning in the cloud and the rise of Android can't do anything except help Java in the long run. Is that true today, with the release of Windows 8 soon and iOS devices? On the other side, one of these programmers told me that corporations love ASP.NET MVC 3 for intranet and web dev, and that Tomcat/Apache Java JSP adoption is slowing down compared to ASP.NET, Ruby on Rails, HTML5, etc. He also told me that since I have a good background in system administration and networking, C# would be better for me because I'd be able to do more things in the Microsoft world, with PowerShell automation and creating my own apps for all the networking stuff (Windows Server, DNS, DHCP, Active Directory, SharePoint, etc.). But if Windows 8 flops, aren't Java and Android safer in the long run? He told me Mono was a joke compared to Java/Android or native Objective-C on iOS devices. (I plan to do full-time study, 10-15 hours a day for the next 9 months, of either Java or C#; that's why I ask this.)

    Read the article

  • Enabling support of EUS and Fusion Apps in OUD

    - by Sylvain Duloutre
    Since the 11gR2 release, OUD supports Enterprise User Security (EUS) for database authentication and also Fusion Apps. I plan to blog on that soon. Meanwhile, the R2 OUD graphical setup does not let you configure both EUS and Fusion Apps support at the same time. However, it can be done manually using the dsconfig command line. The simplest way to proceed is to select EUS from the setup tool, then manually add support for Fusion Apps using dsconfig with the commands below.

    Create an FA workflow element with the EUS workflow element (Eus0) as its next element:

        dsconfig create-workflow-element \
            --set enabled:true \
            --set next-workflow-element:Eus0 \
            --type fa \
            --element-name faWfe

    Modify the workflow so that it starts from your FA workflow element instead of Eus:

        dsconfig set-workflow-prop \
            --workflow-name userRoot0 \
            --set workflow-element:faWfe

    Note: the configuration changes may differ slightly in case multiple databases/suffixes are configured on OUD.

    Read the article

  • Was a Woman the World's First Computer Programmer? [closed]

    - by Sveta Bondarenko
    This week, on the 10th of December, we celebrate the 197th anniversary of the birth of Ada Lovelace, often considered the world's first computer programmer. Ada became famous not only as a daughter of the romantic poet Lord Byron but also as an outstanding 19th-century mathematician. Her work on the Analytical Engine is recognized as containing the first algorithm intended to be processed by a machine. Women have always played a crucial role in the evolution of computer science, but unfortunately, they are often considered not as good at programming and engineering as men. Even though the fair sex makes up a growing portion of computer and Internet users, there is still a large gender gap in the field of computer science. But all is not lost! According to one study, women's enrollment in computer science rose from 7 percent in 1995 to 42 percent in 2000. And it is still increasing. Soon women will take a well-deserved position among the world's top computer programmers. After all, a number of notable female computer pioneers such as Ada Lovelace, Grace Hopper, and Anita Borg have proven that women make great computer scientists. But will women make great contributions to the modern technology industry? Or is the successful and famous female computer programmer just a pipe dream?

    Read the article

  • links for 2011-03-18

    - by Bob Rhubart
    Events Overview (tags: ping.fm entarch)
    No description available. (tags: ping.fm)

    Andrejus Baranovskis: SOA & E2.0 Partner Community Forum Slides
    Oracle ACE Director Andrejus Baranovskis shares slides from his presentation at the SOA & E2.0 Partner Community Forum in the Netherlands. (tags: oracle otn oracleace soa enterprise2.0 webcenter)

    ODTUG Kaleidoscope 2011 - The Premier Conference for Oracle Fusion Middleware
    On the AMIS Technology blog, Oracle ACE Director Lucas Jellema shares information on what he considers "the best event for anyone doing, dabbling in or considering doing Oracle Fusion Middleware." (tags: oracle otn oracleace odtug fusionmiddleware)

    Mark Rittman: ODTUG K-Scope 2011 Early Bird Deadline is Closing
    "The deadline for Early Bird registrations for Kscope is fast approaching [March 25]. If you want to attend at the discounted rate, sign up soon." - Oracle ACE Director Mark Rittman (tags: oracle otn oracleace odtug)

    Master Data Management and Cloud Computing (Oracle Master Data Management)
    "Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can insure that the data flowing into the enterprise from the Cloud is clean and governed." - David Butler (tags: oracle otn mdm cloud)

    Read the article

  • [Kubuntu 14.04] Eclipse (ADT) crashes at the OK button in Project properties

    - by nouseforname
    Since I upgraded to Kubuntu 14.04, my Eclipse crashes in different situations. Mostly I can "simulate" it by going to Project properties and pressing OK. Then it always crashes.

    My system:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=14.04
        DISTRIB_CODENAME=trusty
        DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"

    My Java:

        java version "1.8.0_05"
        Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
        Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

    My ADT version: Android Development Toolkit Version 23.0.0.1245622

    I already tried to add this in adt-bundle-linux-x86_64/eclipse/configuration/configuration.ini:

        org.eclipse.swt.browser.DefaultType=mozilla
        -Dorg.eclipse.swt.browser.DefaultType=mozilla

    Error:

        # A fatal error has been detected by the Java Runtime Environment:
        #
        # SIGSEGV (0xb) at pc=0x00007fe049eb1718, pid=5964, tid=140601811232512
        #
        # JRE version: Java(TM) SE Runtime Environment (8.0_05-b13) (build 1.8.0_05-b13)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.5-b02 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C [libgobject-2.0.so.0+0x19718] g_object_get_qdata+0x18
        #
        # Core dump written. Default location: /home/maddin/core or core.5964
        #
        # An error report file with more information is saved as:
        # /home/maddin/hs_err_pid5964.log
        Compiled method (nm) 28866 4166 n 0 org.eclipse.swt.internal.gtk.OS::_g_object_get_qdata (native)
         total in heap  [0x00007fe051da6790,0x00007fe051da6af0] = 864
         relocation     [0x00007fe051da68b0,0x00007fe051da68f8] = 72
         main code      [0x00007fe051da6900,0x00007fe051da6ae8] = 488
         oops           [0x00007fe051da6ae8,0x00007fe051da6af0] = 8
        #
        # If you would like to submit a bug report, please visit:
        # http://bugreport.sun.com/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.

    Now, as soon as I change System Settings - Application Appearance - GTK - GTK theme to something other than "oxygen-gtk", this crash doesn't happen anymore. But then the application appearance is ugly. Besides that, I get a lot of errors/warnings like this:

        (SWT:6148): GLib-GObject-CRITICAL **: g_closure_add_invalidate_notifier: assertion 'closure->n_inotifiers < CLOSURE_MAX_N_INOTIFIERS' failed

    or other GTK warnings from the particular theme not having a theme engine, none of which actually seem to cause any crash so far. So I have 3 options:

    - accept crashes
    - accept warnings (maybe the best choice)
    - accept ugly design

    What can I do to solve this issue without changing the design settings?

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to other objects? Should they be created at a 'high level' and intercept calls as soon as possible, or should they be created at a 'low level', so that as much of the real code as possible is still called? E.g., I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB:

        class NoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // Create an SQL query from $sqlQueryFactory
                // Run that SQL
                // if null
                //     return null
                // else
                //     return new Note($dataFromSQLQuery)
            }
        }

    I could either mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to the SQL at all, e.g.:

        class MockNoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                $mockData = array('Test Note title', 'Test note text');
                return new Note($mockData);
            }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which are better, or should they both be used where appropriate?
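
    To make the trade-off concrete, here is a rough C# transposition of the idea (NoteMapper and the mock names are carried over from the question; everything else is invented): the high-level mock replaces NoteMapper wholesale, so none of its real logic runs, while the low-level mock fakes only the query factory, so the real mapping code is still exercised.

        using System;

        public class Note
        {
            public string Title, Text;
            public Note(string title, string text) { Title = title; Text = text; }
        }

        public interface ISqlQueryFactory { string[] Run(string sql); }

        public class NoteMapper
        {
            // The real mapping logic: build SQL, run it, materialise a Note.
            public virtual Note GetNote(ISqlQueryFactory factory, int noteId)
            {
                string[] row = factory.Run("SELECT title, text FROM notes WHERE id = " + noteId);
                return row == null ? null : new Note(row[0], row[1]);
            }
        }

        // High-level mock: intercepts the call immediately; the SQL building
        // and null handling in the real mapper are never tested.
        public class MockNoteMapper : NoteMapper
        {
            public override Note GetNote(ISqlQueryFactory factory, int noteId)
                => new Note("Test Note title", "Test note text");
        }

        // Low-level mock: fakes only the database boundary, so a test calling
        // new NoteMapper().GetNote(new MockSqlQueryFactory(), 1) still runs
        // the real mapper logic against canned data.
        public class MockSqlQueryFactory : ISqlQueryFactory
        {
            public string[] Run(string sql)
                => new[] { "Test Note title", "Test note text" };
        }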

    Read the article

  • Google Programmers

    - by seth
    As a soon-to-be software engineer, I have studied some languages during my time in college. I can name C, C++, Java, Scheme, Ruby, and PHP, for example. However, one of the main principles at my college (recognized by many as the best in my country) is to teach us how to learn for ourselves and how to search the web when we are in doubt. This leads to a proactive attitude: when I need something, I go get it, and this has worked so far for me. Recently, I started wondering, though, how much development I would be able to do without internet access. The answer bugged me quite a bit. I know the concepts of the languages, and mostly I know what to do, but I was amazed by how "slow" things were without Google to help in the development. The problem was mostly related to specific syntax, but it was not without some effort that I solved some of the SPOJ problems in C++. Is this normal? Should I be worried and try to change something in my programming behaviour? UPDATE: I'll give a concrete example: reading and writing to a file in Java. I have done this about a dozen times in my life, yet every time I need to do it, I end up googling "read file java" and refreshing my memory. I completely understand the code and fully understand what it does. But I am sure that, without Google, it would take me a few tries to read and write correctly (if I had to sit in front of the screen with a blank page and write this without consulting any source whatsoever).

    Read the article

  • Podcast Show Notes: Conversations in the Cloud

    - by Bob Rhubart
    The centerpiece of every OTN Architect Day event is a panel discussion that gathers all of the session speakers together to respond to questions from the audience. I generally try to record these discussions, usually by sticking my iPad on top of one of the PA speakers, with mixed results. Fortunately, the A/V tech at the venue for the Los Angeles event, held on October 25, 2012, had the necessary gear to get a good-quality recording of the panel discussion. So starting this week the OTN ArchBeat Podcast will feature a short series of highlights from those discussions.

    Listen to Part 1: Dude, What's My Role? - Members of the Architect Day panel respond to an audience question about what happens to traditional IT roles in a cloud environment.

    Listen to Part 2: Migrating Mission-Critical Applications to the Cloud (Nov 21) - The panel offers advice and examples in response to an audience question about dealing with mission-critical applications.

    Listen to Part 3: All Clouds Are Not Equal (Nov 28) - The panel responds to a challenging question about cloud strategy with a discussion of enterprise-grade cloud services.

    Listen to Part 4: Cloud Security and Auditing (Dec 5) - The last segment in the series is a short discussion in response to an audience question about auditing and security in the cloud.

    The panelists (listed alphabetically):

    - Ashok Aletty, Senior Director of Product Management, Oracle Cloud Application Foundation
    - Dr. James Baty, Vice President, Oracle Global Enterprise Architecture Program
    - Dave Chappelle, Enterprise Architect, Oracle Global Enterprise Architecture Program
    - Jeff Davies, Senior Principal Product Manager, Oracle Corporation
    - Anbu Krishnaswamy, Enterprise Architect, Oracle Global Enterprise Architecture Program
    - Dhanraj Pondicherry, Sales Consulting Manager, Oracle Exadata
    - Perren Walker, Senior Principal Product Manager, Oracle Enterprise Manager

    Coming soon: upcoming programs will focus on DevOps and Continuous Integration, and on Oracle's Java Cloud and Developer Cloud services. Stay tuned: RSS

    Read the article

  • MMORPG design for time-limited players

    - by Philipp
    I believe there is a significant market of players who would enjoy the exploration and interaction aspects of MMORPGs but simply don't have the time for the endless grinding marathons that are part of the average MMORPG. MMORPGs are all about interaction between players. But when different players have different amounts of time to invest in a game, those with less time to spend will soon lag behind their power-leveling friends and won't be able to interact with them anymore. One way to solve this would be to limit the progress a player can achieve per day, so that it simply doesn't make sense to play more than one or two hours a day. But even the busiest casual players sometimes like to spend a whole Sunday afternoon playing a video game. Just stopping them after two hours would be really frustrating. It also creates pressure to use the daily progress limit every day, because otherwise the player would feel they are wasting something. This pressure would be detrimental for casual gamers. What else could be done to level the playing field between those players who play 40+ hours a week and those who can't play more than 10?

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
    It's the bane of most programmers' lives - an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you've got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application's unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what's wrong.

    However, it's good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid.

    Coding assumptions

    Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take this following simple method, which copies a collection to an array and includes an item if it isn't in the collection already, using a supplied IEqualityComparer:

        public static T[] ToArrayWithItem<T>(
            ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
        {
            // check if the object is in collection already
            // using the supplied comparer
            foreach (var item in coll)
            {
                if (comparer.Equals(item, obj))
                {
                    // it's in the collection already
                    // simply copy the collection to an array
                    // and return it
                    T[] array = new T[coll.Count];
                    coll.CopyTo(array, 0);
                    return array;
                }
            }

            // not in the collection
            // copy coll to an array, and add obj to it
            // then return it
            T[] result = new T[coll.Count + 1];
            coll.CopyTo(result, 0);
            result[result.Length - 1] = obj;
            return result;
        }

    What are all the assumptions made by this fairly simple bit of code?

    - coll is never null
    - comparer is never null
    - coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array
    - The enumerator for coll returns all the items in the collection, in the order defined for the collection
    - comparer.Equals returns true if the items are equal (for whatever definition of 'equal' the comparer uses), false otherwise
    - comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T
    - coll will have less than 4 billion items in it (this is a built-in limit of the CLR)
    - array won't be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR)
    - There are no threads that will modify coll while this method is running

    and, more esoterically:

    - The C# compiler will compile this code to IL according to the C# specification
    - The CLR and JIT compiler will produce machine code to execute the IL on the user's computer
    - The computer will execute the machine code correctly

    That's a lot of assumptions. Now, it could be that all these assumptions are valid for the situations this method is called in. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you've gained about your own code, along with the modified assumptions.

    When code is running with an assumption or invariant it depended on broken, the result is 'undefined behaviour'. Anything can happen, up to and including formatting the entire disk or making the user's computer sentient and starting to do a good impression of Skynet. You might think those things can't happen, but at Halting Problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it's important to fail fast and stop the program as soon as an invariant is broken, to minimise the damage that is done.

    What does this mean in practice?

    To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that's some more assumptions you've made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken.

    Now, some assumptions you can take for granted unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well.

    For members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don't necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what's wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds.

    On my coding soapbox...

    A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

        try
        {
            ...
        }
        catch { }

    A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions arising from that unknown state, causing difficulties when debugging, as the catch-all has hidden the original problem. It's much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That's a pretty big ask!

    Secondly, using as when you should be casting. Doing this:

        (obj as IFoo).Method();

    or this:

        IFoo foo = obj as IFoo;
        ...
        foo.Method();

    when you should be doing this:

        ((IFoo)obj).Method();

    or this:

        IFoo foo = (IFoo)obj;
        ...
        foo.Method();

    There's an assumption here that obj will always implement IFoo. If it doesn't, then by using as instead of a cast you've turned an obvious InvalidCastException at the point of the cast, which would probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail fast if not; then it's far easier to figure out what's wrong.

    Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don't leave it up to whoever's debugging your code after you to figure it out.

    Conclusion

    It's better to crash out and fail fast when an assumption is broken. If you don't, there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren't good per se, but they give you some very useful information about your code that you didn't know before. And that can only be a good thing.
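
    As a small, hedged illustration of the advice above, here is a sketch that reworks the post's example method with the guard clauses and Debug.Assert the post describes (LINQ's Contains and Append, available in newer framework versions, are assumed):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        public static class Guarded
        {
            // Public API: check the stated assumptions up front and fail fast
            // with a descriptive exception if a caller breaks them.
            public static T[] ToArrayWithItem<T>(
                ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
            {
                if (coll == null) throw new ArgumentNullException(nameof(coll));
                if (comparer == null) throw new ArgumentNullException(nameof(comparer));

                T[] array = coll.Contains(obj, comparer)
                    ? coll.ToArray()
                    : coll.Append(obj).ToArray();

                // Internal invariant: compiled away in release builds, so the
                // check costs nothing in production. If it ever fires, an
                // assumption is broken and we learn about it at the point of
                // breakage, not at some later, unrelated crash.
                Debug.Assert(array.Length >= coll.Count,
                    "result must contain at least every item of coll");
                return array;
            }
        }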

    Read the article

  • Import SSIS Project in Denali CTP1

    For years Analysis Services has had the ability to take an existing database from a server and reverse engineer it into a BIDS project.  This is extremely useful when all you have is the running instance of the database and the project that created it has long since disappeared.  Reverse engineering has never been a feature of SSIS - until now. Let me walk you through the simple steps. The first step is that you obviously have to have a project deployed to an SSIS Catalog.  I will do a video on this soon, but in case you can't wait, my good buddy Jamie Thomson has written it up here. As you can see, I have a project called, imaginatively, "Denali1" with one package, "Package.dtsx". The next thing we need to do is fire up BIDS and choose the right project type (Integration Services Import Project). Now we just follow the wizard.  We make sure we specify on which server to find the Catalog and in which folder to look for the project. Next, the settings are validated and we are greeted with the familiar review screen before the creation of our new project from the deployed project happens. Hit Import and away we go. The result is just what we wanted.

    Read the article

  • Recruiters intentionally present one good candidate for an available job

    - by Jeff O
    Maybe they do it without realizing it. The recruiter's goal is to fill the job as soon as possible. I even think they feel it is in their best interest that the candidate be qualified, so I'm not trying to knock recruiters. Aren't they better off presenting three candidates, of whom one clearly stands out? The last thing they want from their client is a need to extend the interview process because the client can't decide. If the client doesn't like any of them, you just bring on your next good candidate. This way they hedge their bet a little. Any experience or insight? Has anyone ever heard a head-hunter admit this? Does it make sense? There has to be a reason why they choose such unqualified people. I've seen jobs posted that clearly state they want someone with a CS degree, and the recruiter doesn't take it literally. I don't have a CS degree or Java experience, and still they think I'm a possible fit.

    Read the article

  • At which point is a continuous integration server interesting?

    - by Cedric Martin
    I've been reading a bit about CI servers like Jenkins and I'm wondering: at what point do they become useful? Surely for a tiny project with only 5 classes and 10 unit tests, there's no real need. Here we've got about 1500 unit tests and they pass (on old Core 2 Duo workstations) in about 90 seconds (because they're really testing "units" and hence are very fast). The rule we have is that we cannot commit code when a test fails. So each developer runs all the tests to prevent regressions. Obviously, because all the developers always run all the tests, we catch errors due to conflicting changes as soon as one developer pulls the changes of another (if any). It's still not very clear to me: should I set up a CI server like Jenkins? What would it bring? Is it just useful for the speed gain? (Not an issue in our case.) Is it useful because old builds can be recreated? (But we can do this too with Mercurial, by checking out old revs.) Basically I understand it can be useful, but I fail to see exactly why. Any explanation taking into account the points I raised above would be most welcome.

    Read the article

  • Multiple instances of Intellitrace.exe process

    - by Vincent Grondin
    Not so long ago I was confronted with a very bizarre problem... I was using Visual Studio 2010, and whenever I opened up the Test Impact view I would suddenly see my PC's performance go down drastically... Investigating this problem, I found out that hundreds of "Intellitrace.exe" processes had been started on my system, and I could not close them, as they would restart as soon as I closed one. That was very weird. So I knew it had something to do with Test Impact, but how can this feature and Intellitrace.exe going crazy be related? After a bit of thinking, I remembered that a teammate (Etienne Tremblay, ALM MVP) had told me once that he had seen this issue before, just after installing a MOCKING FRAMEWORK that uses the .NET Profiler API... Apparently there's a conflict between the test impact features of Visual Studio and some mocking products using the .NET Profiler API... Maybe because VS 2010 also uses this feature for Test Impact purposes, I don't know...

    Anyway, here's the fix. Go to your VS 2010 and click the "Test" menu. Then go to "Edit Test Settings" and for EACH test settings file (normally 2 files, "Local" and "TraceAndTestImpact") apply the following actions:

    - Select the Data and Diagnostics option on the left
    - Make sure the ASP.NET Client Proxy for IntelliTrace and Test Impact option is NOT SELECTED
    - Make sure the Test Impact option is NOT SELECTED
    - Save and close

    Problem solved... For me, having to choose between the Test Impact features and the mocking framework was a no-brainer: bye-bye Test Impact... I did not investigate this subject much, but I feel there might be a way to have them both working by enabling one after the other in a precise sequence... Feel free to leave a comment if you know how to make them both work at the same time! Hope this helps someone out there!

    Read the article

  • Split a 2D scene into layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma has emerged. Let me explain the situation... I have a Scene class to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc.), and as this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that then there would be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons. A) Split the scene into layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn the containers of objects. B) Have some kind of z-coordinate, and then have the scene sorted so objects with lower z get drawn first. Option A is pretty solid, but the problem is with the lights. In what layer do I add a light? Does it work cross-layer? On all bottom layers? And I still need the z-coordinate to calculate the shadow! Option B would require me to change all my code from having Vector2D positions to some kind of class that inherits from Vector2D and adds a z-coordinate to it (I don't want it to be a Vector3D because I still need all the same methods the 2D kind has, just with .z tacked on). Am I missing something? Is there an alternative to these methods? I'm working in JavaScript, if that makes a difference.
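
    For what it's worth, the core of option B is just a stable sort on z before drawing. A minimal sketch of that mechanism (in C# for illustration, though the engine in question is JavaScript; all names invented):

        using System.Collections.Generic;
        using System.Linq;

        public abstract class SceneObject
        {
            public float X, Y; // 2D position
            public float Z;    // draw order, and height for shadow calculations
            public abstract void Draw();
        }

        public class Scene
        {
            readonly List<SceneObject> objects = new List<SceneObject>();

            public void Add(SceneObject obj) => objects.Add(obj);
            public void Remove(SceneObject obj) => objects.Remove(obj);

            public void Draw()
            {
                // OrderBy is a stable sort: objects with equal z keep their
                // insertion order, so the painter's algorithm stays predictable.
                foreach (var obj in objects.OrderBy(o => o.Z))
                    obj.Draw();
            }
        }

    A light can then live in the same list and read any object's Z for its shadow math, which is exactly the part that pure layers make awkward.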

    Read the article

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making the switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure if the whole forking concept in GitHub for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have 2 remotes: origin (which is the remote fork) and an upstream (which is used to "sync" changes from the main repository). I'm not sure that's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on, and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?

    Read the article

  • Distributing an Android game with plugins via the market

    - by Peter Serwylo
    I'm new to Android development and was wondering how the following could be achieved within the confines of the Android Market as a distribution channel: one main application, which handles the main menu, networking, high scores, etc., and several games which can be launched from the main menu, all working within the same ecosystem. The main application is not just a pseudo-launcher for other games; these different games will share high scores and other achievements/preferences. In a traditional package management system such as apt, pacman or yum, this could be handled quite happily through dependencies. This does not appear to be possible via the Android Market. The closest I've seen is when apps scan to check if the required app is installed, and if not, launch the Market and ask the user to download it. This sounds like a very messy solution. It also begs the question: would they download the game (plugin) first, which then downloads the main shell application? Or would they download the main shell application, and when they navigate to a menu item which says "Play game", it scans for any installed games and, if none exist, redirects to the Market? Also, I'm not even sure if it is possible to dig up the package of another application on the device and start invoking classes from within it (e.g. when you want to launch the game (plugin)). A final option is just to have a third component, a .jar that each game includes, which effectively contains the entire shell application. Then each game would appear to have the same menu, but it would become a nightmare as soon as you want to update the menu component and have to re-release each game. It would be especially bad if other people released games (plugins) based on the same framework and didn't update them. Are there any other options which I haven't thought of? Has anyone else solved this or seen a solution in any apps they've installed (doesn't have to be games)? Cheers.

    Read the article

  • What to do when you're the interviewer and you don't like your job?

    - by emcb
    I'm in a sort of strange predicament, and I could use some advice. When I was interviewing for my current job, the job description I was given seemed pretty darn nice to me. Without going into the details, the job hasn't quite turned out the way it was advertised. The company is great and takes care of its employees, but for someone who cares about the code they write and the work they do, it's a bad environment - effectively, we operate between 0.5 and 1.0 on the Joel Test, and due to political issues we're not going to move beyond that any time soon. Bitter? Maybe. OK... so I'm in the market for a new job. But that's not where my dilemma is. The problem I see coming is that I will be participating in interviewing some candidates for a position on my team, and I'm not sure what to do. I've heard through the grapevine that we have some really solid, promising, fresh-out-of-college prospects coming in to interview, and I honestly dread the thought of somebody having their first experience of engineering in this department. So I'm wondering what I should do if/when the interviewee asks me:

    - "Do you like your job?" (no)
    - "What kind of projects would I be working on?" (mostly static HTML/CSS changes)
    - anything else that would elicit a negative answer if answered truthfully

    Do I tell the truth, to give the candidate a real picture of the job? What if this scares them away, and what if it gets blamed on me? Do I fib or lie, saying we work on exciting projects with lots of flexibility, like the pitch my boss will give when the reality is quite different? Should I feel any kind of moral responsibility to let a promising young developer know that this isn't the job for them, or should I shut up and be 100% loyal to the company? Any approaches or advice are appreciated. I hope I don't come across as overly dramatic - I honestly struggle with this question.

    Read the article

  • Disqus ads are disqusting and here is how you turn them off

    - by Gopinath
    After a couple of months, I spent some time yesterday reviewing my blog and coziie.com to see if everything was fine. Disqus, the best commenting system and an unusual suspect, was looking weird. The commenting sections of my sites were displaying links to third-party sites I was not aware of. The content is annoying to me, and I believe my site users are also annoyed. I don't remember configuring anything in Disqus to display ads or earn money by promoting others' content. Why on earth would I want to show content from someone else's website right inside the comments section and annoy readers? Here is a screen grab of a comment section that shows ads.

    It turns out that Disqus automatically enabled a feature called "Discovery" for all publishers who upgraded the commenting system to the latest release. I remember upgrading the commenting system to the latest release a couple of months ago, but I don't remember specifically allowing Disqus to spam my comment section!! I'm extremely unhappy with the way Disqus automatically enabled spamming comment sections in the name of so-called new features that benefit bloggers.

    How to turn off Discovery, or ads, in Disqus: I turned them off as soon as I noticed them, and it's very easy to do. Here are the steps to turn off ads in comments:

    1. Log in to Disqus
    2. Switch to the Settings tab
    3. Click on the Discovery tab
    4. Choose the option "Just comments"
    5. Save the settings

    Though it's easy to turn off the ads, it would have been nice if Disqus had not enabled them by default. Hey guys at Disqus, you lost my trust, and from now on I'll double-check before opting in to any new features.

    Read the article

  • Transform coordinates from 3D to 2D without matrices or built-in methods

    - by Thomas
    Not too long ago I started to create a small 3D engine in JavaScript to combine with an HTML5 canvas. One of the issues I've run into is how to transform 3D coordinates to 2D. Since I cannot use matrices or built-in transformation methods, I need another way. I've tried implementing the following explanation + pseudocode: http://freespace.virgin.net/hugo.elias/routines/3d_to_2d.htm Unfortunately, no luck there. I've replaced all the input variables with data from my own camera and object classes. I have the following data:

    - an object with a rotation, a position vector, and an array of 4 3D coords (it's just a plane)
    - a camera with a position and rotation vector
    - the viewport - a square 600 x 600 surface
    - a zoom factor, which the example uses and which I've set to 1

    Most hits on Google either use matrix calculations or don't implement camera rotation. The basic transformation should be like this:

        screen.x = x / z * zoom
        screen.y = y / z * zoom

    Can anyone point me in the right direction or explain to me how to achieve this? edit: Thanks for all your posts. I haven't been able to apply all this to my project yet, but I hope to do so soon.
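
    For reference, here is a hedged sketch of the whole pipeline without matrices (written in C# for illustration; the rotation order, handedness and sign conventions are assumptions, so they may need flipping to match a given scene): translate the point into camera space, undo the camera's yaw and pitch by rotating the opposite way, then apply the perspective divide and map to the canvas.

        using System;

        static class Projector
        {
            // Returns canvas coordinates, or null if the point is behind the camera.
            public static (double X, double Y)? Project(
                double px, double py, double pz,       // world-space point
                double camX, double camY, double camZ, // camera position
                double yaw, double pitch,              // camera rotation, radians
                double zoom, double halfW, double halfH)
            {
                // 1. Translate so the camera sits at the origin.
                double dx = px - camX, dy = py - camY, dz = pz - camZ;

                // 2. Undo the camera's yaw: rotate about the Y axis by -yaw.
                double cy = Math.Cos(-yaw), sy = Math.Sin(-yaw);
                double x1 = cy * dx - sy * dz;
                double z1 = sy * dx + cy * dz;

                //    Undo the camera's pitch: rotate about the X axis by -pitch.
                double cp = Math.Cos(-pitch), sp = Math.Sin(-pitch);
                double y2 = cp * dy - sp * z1;
                double z2 = sp * dy + cp * z1;

                if (z2 <= 0) return null; // behind the camera: cull, don't divide

                // 3. Perspective divide (screen.x = x / z * zoom), then map to
                //    the 600 x 600 canvas with the origin at its centre and the
                //    y axis flipped for screen space.
                return (halfW + x1 / z2 * zoom * halfW,
                        halfH - y2 / z2 * zoom * halfH);
            }
        }

    With yaw = pitch = 0 and the camera at the origin, this collapses to the question's screen.x = x / z * zoom form, which makes a handy sanity check.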

    Read the article

  • Where do I start in regards to making a Gnome/Unity Form Application

    - by JMK
    OK, so I am familiar with developing Form and Console applications on Windows using Visual Studio .NET with C#, but where do I start when it comes to Linux distros like Ubuntu? Is there an equivalent? How would one go about matching what they can do in a Windows environment with .NET and C# in a Linux environment without .NET, coding in something like Java or C/C++? I am aware of Eclipse; does Eclipse have a form designer, or do you have to code the design of any Gnome/Unity forms manually? Can I use Eclipse to write the Linux equivalent of a console application that you just double-click on to run? I also know about Mono, but the idea is that I want to learn how to develop software without using anything in the Microsoft stack, and I am not sure where to start. What is the standard language/framework used to develop these types of applications on Linux? As I become more proficient with Visual Studio, C# and .NET, it has struck me that without these Microsoft tools, I am nothing. I am only capable of developing for the Microsoft OS, and this scares me. This isn't some anti-Microsoft thing; Microsoft makes some incredible software/hardware/operating systems/IDEs, but it is generally a bad idea to put all of your eggs in one basket, so if I want to learn how to develop terminal and Gnome/Unity form applications, where in the world do I start? I have used Linux on and off for years, but Windows has been my primary OS. However, I have watched Linux get better and better, and as much as I love Windows 7, I am dubious about Windows 8 (I for one will sorely miss my start menu)! Obviously MS aren't going anywhere any time soon, and I could spend the next couple of decades developing for .NET without any issues, but just because you can get away with something doesn't always mean it's a good idea. Thanks

    Read the article
