Search Results

Search found 5064 results on 203 pages for 'automatic ref counting'.


  • Risky Business with LINQ to SQL and OR Designer?

    - by Toadmyster
    I have two tables with a one-to-many relationship in SQL 2008. The first table (BBD) describes the data common to all buildings:

        PK | BBDataID       | int
           | Floor_Qty      | tinyint
           | Construct_Year | char(4)
           | etc, etc

    The second (BBDCerts) is a collection of certifications for a particular building:

        PK | BBDCertsID         | int
           | BBDataID           | int
           | Certification_Type | varchar(20)
           | etc, etc

    Thus, the primary key in BBD (BBDataID) is mapped to the corresponding field in BBDCerts via an FK relationship, but BBDCertsID is the second table's primary key and BBDataID is not, because it will not be unique. My problem is that I want to be able to use the OR-designer-generated data context to get at the list of certs when I access a particular record in the BBD table. For instance:

        Dim vals = (From q In db.BBD
                    Where q.BBDataID = x
                    Select q.Floor_Qty, q.Construct_Year, q.BBDCerts).SingleOrDefault

    and later be able to access a particular certification like this:

        vals.BBDCerts.Certification_Type.First

    Now, the automatic associations created when the SQL tables are dropped on the design surface don't generate the EntityRef associations that are needed to access the other table using the dot notation. So I have to use the OR designer to make the BBDCerts BBDataID a primary key (this doesn't affect the actual database), and then manually change the association properties to the appropriate OneToMany settings. There might be a better way to approach this, but my question is: is the way I've done it safe? I've run a barrage of tests, and the correct cert is referenced or updated every time. Frankly, the whole thing makes me nervous.
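
    For reference, a one-to-many association like this can also be declared by hand with LINQ to SQL mapping attributes, without promoting BBDCerts.BBDataID to a primary key in the designer. The sketch below is illustrative only: the class and member names are invented, and it assumes the BBD/BBDCerts schema described above.

        // Minimal hand-written LINQ to SQL mapping (hypothetical names),
        // assuming the BBD/BBDCerts schema above.
        using System.Data.Linq;
        using System.Data.Linq.Mapping;

        [Table(Name = "BBDCerts")]
        public class BBDCert
        {
            [Column(IsPrimaryKey = true)] public int BBDCertsID;
            [Column] public int BBDataID;          // FK back to BBD; not a PK here
            [Column] public string Certification_Type;
        }

        [Table(Name = "BBD")]
        public class Building
        {
            [Column(IsPrimaryKey = true)] public int BBDataID;
            [Column] public byte Floor_Qty;

            private EntitySet<BBDCert> _certs = new EntitySet<BBDCert>();

            // One-to-many: joins BBD.BBDataID to BBDCerts.BBDataID.
            [Association(Storage = "_certs", ThisKey = "BBDataID", OtherKey = "BBDataID")]
            public EntitySet<BBDCert> BBDCerts
            {
                get { return _certs; }
                set { _certs.Assign(value); }
            }
        }

    With a mapping like this the dot notation (building.BBDCerts) works without touching the keys the designer generates, which may be less nerve-wracking than editing the designer's association properties.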


  • JQuery get id or class, not value

    - by Celia Tan
    I'm new to jQuery. What I want is: when the user selects an option in the first list, an option in the second list that matches a property of the first should be selected automatically. I've been given code like this:

        <select name="kendaraan">
            <option value="" selected>pilih kendaraan!</option>
            <option value="B 2011 DR" class="B2011DR">B 2011 DR</option>
            <option value="R 3333 OKI" class="R3333OKI">R 3333 OKI</option>
            <option value="k03">jazz</option>
            <option value="k04">innova</option>
        </select>

        <select name="driver">
            <option value="" selected>pilih kendaraan!</option>
            <option value="s02" car="B2011DR" style="display:none">jojon</option>
            <option value="s01" car="B2011DR" style="display:none">mamat</option>
            <option value="s04" car="R3333OKI" style="display:none">tukul</option>
            <option value="s03" car="R3333OKI" style="display:none">mamat</option>
            <option value="s07" car="k03" style="display:none">bejo</option>
            <option value="s05" car="k03" style="display:none">mamat</option>
            <option value="s06" car="k03" style="display:none">tukul</option>
            <option value="s08" car="k04" style="display:none">budi</option>
            <option value="s09" car="">komeng</option>
        </select>

        $('select[name=kendaraan]').change(function() {
            // hide all options
            $('select[name=driver] option').css('display', 'none');
            // display only the drivers for the matched car
            var isCar = $('select[name=driver] option[car=' + $(this).val() + ']');
            isCar.css('display', 'block');
            // auto-select the first matched driver
            $('select[name=driver]').val($(isCar[0]).val());
        });

    But this jQuery code matches on the value of "kendaraan". How do I match against the selected option's class instead of its value?


  • Append to the end of a Char array in C++

    - by Taylor Huston
    Is there a command that can append one array of char onto another? Something that would theoretically work like this:

        // array1 has already been set to "The dog jumps "
        // array2 has already been set to "over the log"
        append(array2, array1);
        cout << array1; // would output "The dog jumps over the log"

    This would be a pretty easy function to write, I would think; I am just surprised there isn't a built-in command for it.

    Edit: I should have been clearer. I didn't mean changing the size of the array. If array1 was declared with 50 characters but was only using 10 of them, you would still have 40 characters to work with. I was thinking of an automatic command that would essentially do this:

        // assuming array1 holds 10 characters but was declared with 25,
        // and array2 holds 5 characters
        int i = 10;
        int z = 0;
        do {
            array1[i] = array2[z];
            ++i;
            ++z;
        } while (array2[z] != '\0');

    I am pretty sure that syntax would work, or something similar.


  • how to edit controls in a system::thread

    - by Ian Lundberg
    I need to be able to add items to a listbox from inside a thread. Code is below.

         1. ref class Work
         2. {
         3. public:
         4.     static void RecieveThread()
         5.     {
         6.         while (true)
         7.         {
         8.             ZeroMemory(cID, 64);
         9.             ZeroMemory(message, 256);
        10.             if (recv(sConnect, message, 256, NULL) != SOCKET_ERROR && recv(sConnect, cID, 64, NULL) != SOCKET_ERROR)
        11.             {
        12.                 ID = atoi(cID);
        13.                 String^ meep = gcnew String(message);
        14.                 lbxMessages->Items->Add(meep);
        15.                 check = 1;
        16.             }
        17.         }
        18.     }
        19. };

    I get the error "a nonstatic member reference must be relative to a specific object" on line 14. Is there any way to get it to let me do that? If I try to use String^ meep; outside of that thread, it doesn't contain anything. It works perfectly when I use it within the thread, but not outside of it. I need to be able to add that message to the listbox. If anyone can help, I would appreciate it.
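
    Two separate issues are visible in the question: lbxMessages is an instance member referenced from a static method (the compile error on line 14), and, independently, WinForms controls may only be touched from the thread that created them. The standard remedy for the latter is Control::Invoke, which works the same way in C++/CLI and C#. A minimal sketch of the pattern, written in C# for brevity and with hypothetical names, not the poster's actual code:

        using System;
        using System.Windows.Forms;

        public class ChatForm : Form
        {
            private readonly ListBox lbxMessages = new ListBox();

            public ChatForm()
            {
                Controls.Add(lbxMessages);
            }

            // Safe to call from any thread: marshals the update onto the UI thread.
            public void AddMessage(string message)
            {
                if (lbxMessages.InvokeRequired)
                {
                    // Re-invoke this same method on the thread that owns the control.
                    lbxMessages.Invoke(new Action<string>(AddMessage), message);
                }
                else
                {
                    lbxMessages.Items.Add(message);
                }
            }
        }

    The receive thread would then call AddMessage(meep) on the form instance instead of touching the ListBox directly.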


  • C# XNA: What can cause SpriteBatch.End() to throw a NPE?

    - by Rosarch
    I don't understand what I'm doing wrong here:

        public void Draw(GameTime gameTime) // in ScreenManager
        {
            SpriteBatch.Begin(SpriteBlendMode.AlphaBlend);

            for (int i = 0; i < Screens.Count; i++)
            {
                if (Screens[i].State == Screen.ScreenState.HIDDEN)
                    continue;

                Screens[i].Draw(gameTime);
            }

            SpriteBatch.End(); // null ref exception
        }

    SpriteBatch itself is not null. Some more context:

        public class MasterEngine : Microsoft.Xna.Framework.Game
        {
            public MasterEngine()
            {
                graphicsDeviceManager = new GraphicsDeviceManager(this);
                Components.Add(new GamerServicesComponent(this));
                // ...
                spriteBatch = new SpriteBatch(graphicsDeviceManager.GraphicsDevice);
                screenManager = new ScreenManager(assets, gameEngine,
                    graphicsDeviceManager.GraphicsDevice, spriteBatch);
            }

            // ...

            protected override void Draw(GameTime gameTime)
            {
                screenManager.Draw(gameTime); // calls the problematic method
                base.Draw(gameTime);
            }
        }

    Am I failing to initialize something properly?

    UPDATE: As an experiment, I added this to the constructor of MasterEngine:

        spriteBatch = new SpriteBatch(graphicsDeviceManager.GraphicsDevice);
        spriteBatch.Begin();
        spriteBatch.DrawString(assets.GetAsset<SpriteFont>("calibri"), "ftw",
            new Vector2(), Color.White);
        spriteBatch.End();

    This does not cause an NRE. Hmm...
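
    For background, one common XNA pitfall that matches these symptoms (not confirmed as the poster's actual bug): the Game constructor runs before the GraphicsDevice has been fully created, so device-bound resources constructed there can end up in an invalid state. The conventional placement is LoadContent, where the device is guaranteed to exist. A minimal sketch:

        public class MasterEngine : Microsoft.Xna.Framework.Game
        {
            private GraphicsDeviceManager graphicsDeviceManager;
            private SpriteBatch spriteBatch;

            public MasterEngine()
            {
                // Only wire up the manager here; the device does not exist yet.
                graphicsDeviceManager = new GraphicsDeviceManager(this);
            }

            protected override void LoadContent()
            {
                // GraphicsDevice is guaranteed to be valid by the time
                // LoadContent runs, so SpriteBatch belongs here.
                spriteBatch = new SpriteBatch(GraphicsDevice);
                base.LoadContent();
            }
        }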


  • Exposing a C++ API to C#

    - by Siyfion
    So what I have is a C++ API contained within a *.dll, and I want to use a C# application to call methods within the API. So far I have created a C++/CLR project that includes the native C++ API, and managed to create a "bridge" class that looks a bit like the following:

    ManagedBridge.h:

        namespace ManagedAPIWrapper
        {
            public ref class Bridge
            {
            public:
                int bridge_test(void);
                int bridge_test2(api_struct* temp);
            };
        }

    ManagedBridge.cpp:

        int Bridge::bridge_test(void)
        {
            return test();
        }

        int Bridge::bridge_test2(api_struct* temp)
        {
            return test2(temp);
        }

    I also have a C# application that references the C++/CLR "Bridge.dll" and uses the methods contained within. I have a number of problems with this:

    1. I can't figure out how to call bridge_test2 from the C# program, as it has no knowledge of what an api_struct actually is. I know that I need to marshal the object somewhere, but do I do it in the C# program or in the C++/CLR bridge?

    2. This seems like a very long-winded way of exposing all of the methods in the API. Is there not an easier way that I'm missing? (One that doesn't use P/Invoke!)


  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets a user's mural entries. A user's entries are those where he appears as src or as a dst. The src is in usermuralentry directly; the dst list is in usermuralentry_user. So an entry can have one src and many dsts. I have these tables:

        mysql> desc usermuralentry;
        +-------------+------------+------+-----+---------+----------------+
        | Field       | Type       | Null | Key | Default | Extra          |
        +-------------+------------+------+-----+---------+----------------+
        | id          | int(11)    | NO   | PRI | NULL    | auto_increment |
        | user_src_id | int(11)    | NO   | MUL | NULL    |                |
        | private     | tinyint(1) | NO   |     | NULL    |                |
        | content     | longtext   | NO   |     | NULL    |                |
        | date        | datetime   | NO   |     | NULL    |                |
        | last_update | datetime   | NO   |     | NULL    |                |
        +-------------+------------+------+-----+---------+----------------+
        10 rows in set (0.10 sec)

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+
        3 rows in set (0.00 sec)

    And the following query to retrieve information for two users:

        mysql> explain SELECT *
               FROM usermuralentry AS a, usermuralentry_user AS b
               WHERE a.user_src_id IN ( 1, 2 )
                  OR ( a.id = b.usermuralentry_id AND b.userinfo_id IN ( 1, 2 ) );

        id | select_type | table | type | possible_keys                                                                  | key  | key_len | ref  | rows    | Extra
        ---+-------------+-------+------+--------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------
         1 | SIMPLE      | b     | ALL  | usermuralentry_id, usermuralentry_user_bcd7114e, usermuralentry_user_6b192ca7 | NULL | NULL    | NULL | 147188  |
         1 | SIMPLE      | a     | ALL  | PRIMARY                                                                        | NULL | NULL    | NULL | 1371289 | Range checked for each record (index map: 0x1)
        2 rows in set (0.00 sec)

    But it is taking A LOT of time... Any tips to optimize it? Could the table schema be better for my application?


  • Temporarily disable vim plugin without relaunching

    - by simont
    I'm using c-support in Vim. One of its features is automatic comment expansion. When I paste code into Vim from an external editor, the comments are expanded, which gives me double comments and messes up the paste (see below for an example). I'd like to be able to disable the plugin, paste, then re-enable it, without relaunching Vim. I'm not sure if this is possible. The SO questions here, here and here all describe methods to disable plugins, but they all require me to close Vim, mess with my .vimrc or similar, and relaunch; if I have to close Vim, I might as well cat file1 >> myfile; vim myfile, then shift the lines internally, which will be just as quick. Is it possible to disable a plugin while Vim is running, without relaunching, preferably in a way that lets me map a toggle-plugin hotkey (re-sourcing ~/.vimrc is alright; I imagine that's mappable to a hotkey, though I haven't tried it yet)? Messed-up comments:

        /*
         * * Authors:
         * *     A Name
         * *
         * * Copyright:
         * *     A Name, 2012
         * */

    EDIT: It turns out you can :set paste and :set nopaste, which, to quote :help paste, will "avoid unexpected effects [while pasting]" (see the comments). However, I'm still curious whether you can disable/enable a plugin as per the original question, so I shall leave the question open.


  • How to create reference tables using fluent nhibernate

    - by Akk
    How can I create a three-table schema from the following model classes?

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public IList<Photo> Photos { get; set; }
        }

        public class Photo
        {
            public int Id { get; set; }
            public string Path { get; set; }
        }

    I want to create the following table structure in the database:

        Product
        -------
        Id
        Name

        ProductPhotos
        -------------
        ProductId (FK Products.Id)
        PhotoId   (FK Photos.Id)

        Photos
        ------
        Id
        Path

    How can I express the above database schema using Fluent NHibernate? I could only manage the following mapping, but it does not get me the third ProductPhotos reference table:

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Id(x => x.Id);
                Map(x => x.Name);
                Table("Products");
                HasMany(x => x.Photos).Table("ProductPhotos").KeyColumn("ProductId");
            }
        }
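
    For reference, the three-table shape described above (a link table holding two foreign keys) is what NHibernate treats as a many-to-many, and Fluent NHibernate expresses it with HasManyToMany rather than HasMany. A sketch, assuming the classes above and the Fluent NHibernate 1.x API:

        using FluentNHibernate.Mapping;

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Table("Products");
                Id(x => x.Id);
                Map(x => x.Name);

                // Generates the ProductPhotos link table with the two FK columns.
                HasManyToMany(x => x.Photos)
                    .Table("ProductPhotos")
                    .ParentKeyColumn("ProductId")
                    .ChildKeyColumn("PhotoId")
                    .Cascade.All();
            }
        }

        public class PhotoMap : ClassMap<Photo>
        {
            public PhotoMap()
            {
                Table("Photos");
                Id(x => x.Id);
                Map(x => x.Path);
            }
        }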


  • What's a good way to organize a large collection of personal scripts using git?

    - by spooky note
    I have a large collection of personal scripts that I would like to start versioning using Git. I've previously organized my code as follows:

        ~/code/python/projects/  (for large stuff, each project contained in an individual folder)
        ~/code/python/scripts/   (single-file scripts all contained in this directory)
        ~/code/python/sandbox/   (my testing area)
        ~/code/python/docs/      (downloaded documentation)
        ~/code/java/...          (as above)

    Now I'm going to start versioning my code using Git, so that I can have history and back up all my code to a remote server. I know that if I were using SVN I would just keep my entire ~/code/ directory in one large repository, but I understand this is not a good way to do things with Git. Most info I've seen online suggests keeping all my project folders in a single place (as in, no separate directories for Python or Java), with each project containing its own Git repository, and simply having a "snippets" directory containing all single-file scripts/experiments that can be converted into projects at a later date. But I'm not sure how I feel about consolidating all of my code directories into one area. Is there a good way to keep my separate code directories intact, or is it not worth the effort? Maybe I'm just attached to the separate code directories because I've never known anything else. Also (as a side note), I'd like to be able to quickly see a chronological history of all my projects and scripts, so I can see which projects I created most recently. I used to do this by keeping a number at the beginning of all my project names: 002project, 003project. Is there an automatic or easy way to do this in Git without having to add a number to all of the project names? I'm open to any practical or philosophical code-organizing advice you have. Thanks!


  • How can I automate new system provisioning with scripts under Mac OS X 10.6?

    - by deeviate
    I've been working on this for days but simply cannot find the correct references to make it work. The idea is to have a script that will baseline newly purchased Macs that come into the company with basic stuff: set autologin to off, create a new admin user (for remote admins to access for support), set a password to unlock the screensaver, etc.

    A sample of the baseline steps admins have to do on each new machine:

        Click the Login Options button
        Set Automatic Login: OFF
        Check:   Show the Restart, Sleep, and Shutdown buttons
        Uncheck: Show input menu in login window
        Uncheck: Show password hints
        Uncheck: Use VoiceOver in the login window
        Check:   Show fast user switching menu as Short Name

    (Note: this is only part of a long list to do on each machine.) I've managed to find some references to make some parts work. For example, autologin can be unset with:

        defaults write /Library/Preferences/.GlobalPreferences com.apple.userspref.DisableAutoLogin -bool TRUE

    and I've more or less found ways to muscle in new user creation (including prompts) with AppleScript and shell commands. But generally it's tough finding ways to do somewhat simple things like turning on the password prompt to exit the screensaver, or allowing fast user switching. References are either too limited or just nowhere to be seen (e.g. I can unset autologin via the CLI, but the very next setting in System Preferences, "Show the Restart, Sleep, and Shutdown buttons", lives somewhere else and I can't find any command line to set it). Does anyone have any ideas on a list, document, or reference of where each setting on the system resides, so that I can be pointed in the right direction? Or maybe sample scripts for the above example. My thanks for reading this far, and a huge thank you to whoever has any info on the above.


  • Vectors or Java arrays for Tetris?

    - by StackedCrooked
    I'm trying to create a Tetris-like game with Clojure and I'm having some trouble deciding on the data structure for the playing field. I want to define the playing field as a mutable grid. The individual blocks are also grids, but don't need to be mutable. My first attempt was to define a grid as a vector of vectors. For example, an S-block looks like this:

        :s-block {:grids [[[0 1 1]
                           [1 1 0]]
                          [[1 0]
                           [1 1]
                           [0 1]]]}

    But that turns out to be rather tricky for simple things like iterating and painting (see the code below). For making the grid mutable, my initial idea was to make each row a reference. But then I couldn't really figure out how to change the value of a specific cell in a row. One option would have been to make each individual cell a ref instead of each row, but that feels like an unclean approach. I'm considering using Java arrays now; Clojure's aget and aset functions would probably turn out to be much simpler. However, before digging myself into a deeper mess, I want to ask for ideas and insights. How would you recommend implementing a mutable 2D grid? Feel free to share alternative approaches as well. Source code, current state: Tetris.clj (rev452)


  • how can I make a "downstream only" copy of a file in TFS

    - by jcollum
    I've got a SQL script that needs to exist in two places in source control. I want to have only one real copy of the file and keep a virtual copy of it in the other solution. One is needed for a unit test and the other for a development tool. The files should, by definition, always be the same; if they have differences, there's a problem with our process. In SourceGear I could make a virtual copy of a specific version of a file and keep it somewhere else in the source tree. That doesn't seem to be possible in TFS. Is it possible in SVN? So what are my options here? Branching/merging, which is what the TFS team says I should be doing here, means just another step that I have to remember to do. Plus it isn't automatic, and I would prefer that this be automated. Is there some way to run an exe on checkin of a specific file? I'm thinking that if I could do that, then I could do a checkout-edit-checkin of the downstream copy of the file.


  • C# Passing objects and list of objects by reference

    - by David Liddle
    I have a delegate that modifies an object. I pass an object to the delegate from a calling method; however, the calling method does not pick up these changes. The same code works if I pass a List as the object. I thought all objects were passed by reference, so any modifications would be reflected in the calling method? I can modify my code to pass a ref object to the delegate, but am wondering why this is necessary.

        public class Binder
        {
            protected delegate int MyBinder<T>(object reader, T myObject);

            public void BindIt<T>(object reader, T myObject)
            {
                // m_binders is a hashtable of binder delegates
                MyBinder<T> binder = m_binders["test"] as MyBinder<T>;
                int i = binder(reader, myObject);
            }
        }

        public class MyObjectBinder
        {
            public MyObjectBinder()
            {
                m_delegates["test"] = new MyBinder<MyObject>(BindMyObject);
            }

            private int BindMyObject(object reader, MyObject obj)
            {
                // make changes to obj in here
            }
        }

        // calling method in some other class
        public void CallingMethod()
        {
            MyObject obj = new MyObject();
            Binder binder = new Binder();
            binder.BindIt(myReader, obj); // don't worry about myReader
            // obj should show reflected changes
        }
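
    For context, a minimal sketch of the semantics in question (not the poster's code): in C#, a reference is itself passed by value, so mutating the object a parameter points to is visible to the caller, while reassigning the parameter is not - unless the parameter is marked ref. And if T turns out to be a value type (a struct), the whole object is copied, so no mutation is visible at all, which is one common cause of the behavior described above.

        using System;

        class Box { public int Value; }

        class Program
        {
            static void Mutate(Box b) { b.Value = 42; }                 // caller sees this
            static void Reassign(Box b) { b = new Box { Value = 99 }; } // caller does not
            static void ReassignRef(ref Box b) { b = new Box { Value = 99 }; } // caller does

            static void Main()
            {
                var box = new Box { Value = 1 };
                Mutate(box);          Console.WriteLine(box.Value); // 42
                Reassign(box);        Console.WriteLine(box.Value); // still 42
                ReassignRef(ref box); Console.WriteLine(box.Value); // 99
            }
        }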


  • swap! alter and alike

    - by mekka
    Hello, I am having a problem understanding how these functions update the underlying ref, atom, etc. The docs say:

        (apply f current-value-of-identity args)

        (def one (atom 0))
        (swap! one inc) ;; => 1

    So I am wondering how this got "expanded" to the apply form. It's not mentioned what exactly 'args' in the apply form is. Is it a sequence of arguments, or are these separate values? Was it "expanded" to:

        (apply inc 0)     ; obviously this wouldn't work, so that leaves only one possibility
        (apply inc 0 '())

    Given:

        (swap! one + 1 2 3) ;; => 7

    was it:

        (apply + 1 1 2 3 '())
        ; or
        (apply + 1 [1 2 3])

    And given:

        (def two (atom []))
        (swap! two conj 10 20) ;; => [10 20]

    was it:

        (apply conj [] [10 20])
        ; or
        (apply conj [] 10 20 '())

    If I swap with a custom function like this:

        (def three (atom 0))
        (swap! three (fn [current elem]
                       (println (class elem)))
               10)
        ;; => java.lang.Integer

    the value 10 doesn't magically get changed into a seq '(10), which leads me to the conclusion that it gets "expanded" to:

        (apply f current-value-of-identity arg1 arg2 arg3 ... '())

    Is that a correct assumption, and are the docs simply lacking a better description?


  • Is it possible to reference remote content from chrome.manifest? (XULRunner)

    - by siemaa
    Hi, I have a XULRunner application and I've been trying to reference remote content from the chrome.manifest file. It's an application for the company I work for; it runs on a number of computers (most of them are used by other employees as well) as a kind of internet monitoring service. The problem I'd like to solve is this: updating the code of such an application usually requires me to manually copy the modified files to every computer the application is running on (I've had no luck trying to make automatic updates work via the XULRunner platform). This process has become very tedious. What I'd like to have is a web server where all of the XUL and JS files would be accessible, so that every installation could reference them from there. This would require me only to update the code on that server, and the applications (when restarted) would automatically get the latest code. What I've managed to do so far: I can reference JS scripts from a XUL file using HTTP-based URLs and everything works fine (I can use local, binary components, etc.), although the XUL file has to be local - that's what I'd like to change. But when I write in chrome.manifest a line like:

        content my_app http://path/to/app/files/

    and then use this line in default/preferences/pref.js:

        pref("toolkit.defaultChromeURI", "chrome://my_app/content/my_app.xul");

    it just opens a console window (to test, I manually run the application with the -console option) and no code gets executed. The file can be downloaded remotely using wget, so I guess this isn't a web server issue. The applications run on Windows machines. Is there some kind of security issue causing this behavior, or am I doing something wrong? Is it even possible to register remote, HTTP-based content as chrome?


  • Timer C#. Start, stop, and get the amount of time between the calls

    - by user1886060
    I'm writing a UDP chat with reliable data transfer. I need to start a timer when a packet is sent, and stop it as soon as an answer is received from the server (ACK - acknowledgment). Here is my code:

        private void sendButton_Click(object sender, EventArgs e)
        {
            Packet snd = new Packet(ack, textBox1.Text.Trim());
            textBox1.Text = string.Empty;
            Smsg = snd.GetDataStream(); // convert message into an array of bytes to send

            while (true)
            {
                try
                {
                    // Here I need to start a timer!
                    clientSock.SendTo(Smsg, servEP);
                    clientSock.ReceiveFrom(Rmsg, ref servEP);
                    // Here I need to stop the timer and get the elapsed amount of time.
                    Packet rcv = new Packet(Rmsg);

                    if (Rmsg != null && rcv.ACK01 != ack)
                        continue;

                    if (Rmsg != null && rcv.ACK01 == ack)
                    {
                        this.displayMessageDelegate("ack is received: " + ack);
                        ChangeAck(ack);
                        break;
                    }

    Thank you.
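
    For what it's worth, the usual tool for timing a round trip like this is System.Diagnostics.Stopwatch. A minimal self-contained sketch; the Thread.Sleep stands in for the blocking SendTo/ReceiveFrom pair in the code above:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                Stopwatch timer = Stopwatch.StartNew(); // start just before SendTo
                Thread.Sleep(120);                      // stand-in for the send/receive pair
                timer.Stop();                           // stop once the ACK has arrived

                // Elapsed time between the calls, e.g. for an adaptive retransmit timeout.
                Console.WriteLine("Round trip took {0} ms", timer.ElapsedMilliseconds);
            }
        }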


  • When my object is no longer referenced why do its events continue to run?

    - by Ryan Peschel
    Say I am making a game and have a base Buff class. I may also have a subclass named ThornsBuff which subscribes to the Player class's Damaged event. The Player may have a list of Buffs, and one of them may be the ThornsBuff. For example:

        // test class
        Player player = new Player();
        player.ActiveBuffs.Add(new ThornsBuff(player));

        // ThornsBuff class
        public ThornsBuff(Player player)
        {
            player.DamageTaken += player_DamageTaken;
        }

        private void player_DamageTaken(MessagePlayerDamaged m)
        {
            m.Assailant.Stats.Health -= (int)(m.DamageAmount * .25);
        }

    This is all to illustrate an example. If I remove the buff from the list, the event is not detached. Even though the player no longer has the buff, the handler still executes as if he did. Now, I could have a Dispel method to unregister the event, but that forces the developer to call Dispel in addition to removing the Buff from the list. What if they forget? Increased coupling, etc. What I don't understand is why the event doesn't detach itself automatically when the Buff is removed from the list. The Buff only existed in the list, and that was its one reference. After it is removed, shouldn't the event be detached? I tried adding the detaching code to the finalizer of the Buff, but that didn't fix it either. The event still runs even after the buff has 0 references. I suppose that is because the garbage collector hadn't run yet? Is there any way to make it automatic and instant, so that when the object has no references all its events are unregistered? Thanks.
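
    Worth noting when reading this question: the subscription itself is a reference. The Player's DamageTaken delegate list holds the ThornsBuff, so removing the buff from ActiveBuffs does not make it unreachable, and the handler keeps firing. The common deterministic fix is to pair subscribe with unsubscribe; a sketch with hypothetical types (not the poster's code), using IDisposable:

        using System;

        class Player
        {
            public event Action<int> DamageTaken;
            public void TakeDamage(int amount) => DamageTaken?.Invoke(amount);
        }

        class ThornsBuff : IDisposable
        {
            private readonly Player _player;

            public ThornsBuff(Player player)
            {
                _player = player;
                _player.DamageTaken += OnDamageTaken; // the player now references this buff
            }

            private void OnDamageTaken(int amount)
            {
                // reflect a portion of the damage back, etc.
            }

            // Detaching removes the publisher's reference, so the buff both
            // stops firing and becomes eligible for collection.
            public void Dispose() => _player.DamageTaken -= OnDamageTaken;
        }

    Calling buff.Dispose() at the point where it is removed from ActiveBuffs makes the detach explicit and immediate; plain .NET events hold strong references, so nothing short of unsubscribing (or a weak-event pattern) severs the link.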


  • Shell halts while looping and 'transforming' values in dictionary (Python 2.7.5)

    - by Gus
    I'm building a program that sums digits in a given list recursively. Say the source list has 10 elements; the second list will then have 9, the third 8, and so on, until the last list has only one element. This is done by adding the first element to the second, then the second to the third, and so on. I'm stuck without feedback from the shell: it halts without throwing any errors, then in a couple of seconds the fan is spinning like crazy. I've read quite a few posts here and changed my approach, but I'm not sure that what I have so far can produce the results I'm looking for. Thanks in advance:

        #---------------------------------------------------
        # functions
        #---------------------------------------------------
        # sum up pairs in a list
        def reduce(inputList):
            i = 0
            while (i < len(inputList)):
                # ref to current and next item
                j = i + 1
                # don't go for the last item
                if j != len(inputList):
                    # new number eq current + next number
                    newNumber = inputList[i] + inputList[j]
                    if newNumber >= 10:
                        # reduce newNumber to single digit
                        newNumber = sum(map(int, str(newNumber)))
                    # collect into temp list
                    outputList.append(newNumber)
                i = i + 1
            return outputList

        #---------------------------------------------------
        # program starts here
        #---------------------------------------------------
        outputList = []
        sourceList = [7, 3, 1, 2, 1, 4, 6]
        counter = len(sourceList)
        dict = {}
        dict[0] = sourceList

        print '-------------'
        print 'Level 0:', dict[0]

        for i in range(counter):
            j = i + 1
            if j != counter:
                baseList = dict.get(i)
                # check function to understand what it does
                newList = reduce(baseList)
                # new key and value from previous/transformed value
                dict[j] = newList
                print 'Level %d: %s' % (j, dict[j])


  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

    Software development is an economic endeavor. A customer is only willing to pay for value. What makes a software valuable is required to become a trait of the software. We as software developers thus need to understand, and then find a way to implement, requirements. Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she is certain to want something - but then is not happy when it's delivered. Nevertheless, requirements exist. And developers will only be paid if they deliver value. So we better focus on doing that.

    Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to? You decide to use WPF as the GUI technology? What's the customer's requirement? You decide in favor of a layered architecture? What's the customer's requirement? You decide to put code in three classes instead of just one? What's the customer's requirement behind that? You decide to use MongoDB over MySql? What's the customer's requirement behind that? Etc.

    I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

    Fundamental aspects of requirements

    If you're like me, you're probably not used to such scrutiny. You want to be trusted as a professional developer - and you decide quite a few things following your gut feeling, or by relying on "established practices". That's OK in general and most of the time - but still... I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance, it's hard to reason about many of the myriad decisions we have to make over the course of a software project.

    What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs, and with them it's easier to check whether a decision falls within their scope. Sure, every project has its very own requirements. But all of them belong to just three major categories, I think. Any requirement pertains either to functionality, to non-functional aspects, or to sustainability. For short, I call these aspects:

    Functionality, because such requirements describe which transformations a software should offer. For example: a calculator should be able to add and multiply real numbers; an auction website should enable you to set up an auction anytime, or to find auctions to bid for.

    Quality, because such requirements describe how the functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sine of a value much faster than you could in your head; an auction website should accept bids from millions of users.

    Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today, but over the course of several years. This aspect introduces time into the "requirements equation".

    Security of Investment (SoI) sure is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect, because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer), you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly, with hardly any rust spots, after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" (in the face of usage) is desirable.

    With software you want the contrary. Software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years, so as to fit changes in its usage environment. But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code, Cyclomatic Complexity, or Afferent Coupling. They may give a hint... but they are far, far from precise. That's because of the nature of changeability: it's different from performance or scalability. Also, a customer cannot tell upfront "how much" evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely: a calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability... that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to watch the trend in the frequency of "WTF"s from developers ;-)

    F and Q are "timeless" requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

    In closing

    A customer's requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make truly professional decisions, i.e. conscious and responsible ones.

    [1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish to whom it's a concern. "Flexibility" to me means the software as-is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means the software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.


  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30


  • Observations in Migrating from JavaFX Script to JavaFX 2.0

    - by user12608080
    Introduction

    Having been available for a few years now, there is a decent body of work written for JavaFX using the JavaFX Script language. With the general availability announcement of JavaFX 2.0 Beta, the natural question arises of converting legacy code over to the new JavaFX 2.0 platform. This article reflects on some of the observations encountered while porting source code from JavaFX Script to the new JavaFX API paradigm.

    The Application

    The program chosen for migration is an implementation of the Sudoku game and serves as a reference application for the book JavaFX - Developing Rich Internet Applications. The design of the program can be divided into two major components: (1) a user interface (ideally suited for JavaFX design) and (2) the puzzle generator. For the context of this article, our primary interest lies in the user interface. The puzzle generator code was lifted from a sourceforge.net project and is written entirely in Java. Regardless of which version of the UI we choose (JavaFX Script vs. JavaFX 2.0), no code changes were required for the puzzle generator code.

    The original user interface for the JavaFX Sudoku application was written exclusively in JavaFX Script, and as such is a suitable candidate to convert over to the new JavaFX 2.0 model. However, a few notable points are worth mentioning about this program. First off, it was written in the JavaFX 1.1 timeframe, when certain capabilities of the JavaFX framework were as yet unavailable. Citing two examples: this program creates many of its own UI controls from scratch, because the built-in controls were yet to be introduced; in addition, layout of graphical nodes is done in a very manual manner, again because much of the automatic layout capability was in flux at the time. It is worth considering that this program was written at a time when most of us were just coming up to speed on this technology. One would think that, given the opportunity to recreate this application anew, it would look a lot different from the current version.

    Comparing the Size of the Source Code

    An attempt was made to convert each of the original UI JavaFX Script source files (suffixed with .fx) over to a Java counterpart. Due to language feature differences, a small number of source files exist only in one version or the other. The table below summarizes the size of each of the source files.

        JavaFX Script source file   Lines   Characters | JavaFX 2.0 Java source file      Lines   Characters
        -----------------------------------------------+-----------------------------------------------------
                                                       | ArrowKey.java                    6       72
        Board.fx                    221     6831       | Board.java                       205     6508
        BoardNode.fx                446     16054      | BoardNode.java                   723     29356
        ChooseNumberNode.fx         168     5267       | ChooseNumberNode.java            302     10235
        CloseButtonNode.fx          115     3408       | CloseButton.java                 99      2883
                                                       | ParentWithKeyTraversal.java      111     3276
                                                       | FunctionPtr.java                 6       80
                                                       | Globals.java                     20      554
        Grouping.fx                 8       140        |
        HowToPlayNode.fx            121     3632       | HowToPlayNode.java               136     4849
        IconButtonNode.fx           196     5748       | IconButtonNode.java              183     5865
        Main.fx                     98      3466       | Main.java                        64      2118
        SliderNode.fx               288     10349      | SliderNode.java                  350     13048
        Space.fx                    78      1696       | Space.java                       106     2095
        SpaceNode.fx                227     6703       | SpaceNode.java                   220     6861
        TraversalHelper.fx          111     3095       |
        Total                       2,077   79,127     | Total                            2,531   87,800

    A few notes about this table are in order: the number of lines in each file was determined by running the Unix 'wc -l' command over each file, and the number of characters by running the Unix 'ls -l' command over each file. The examination of the code could certainly be much more rigorous. No standard formatting was performed on these files; all comments, however, were deleted.

    There was a certain expectation that the new Java version would require more lines of code than the original JavaFX Script version. As evidenced by the count of total lines, the Java version has about 22% more lines than its FX Script counterpart. Furthermore, there was an additional expectation that the Java version would be more verbose in terms of the total number of characters. In fact, the preceding data shows that on average the Java source files contain fewer characters per line than the FX files. But that's not the whole story. Upon further examination, the FX Script source files had a disproportionate number of blank characters. Why? Because of the nature of how one develops JavaFX Script code: the object literal dominates FX Script code, and it's not uncommon to see object literals indented halfway across the page, consuming lots of meaningless space characters.

    RAM Consumption

    Not the most scientific analysis: memory usage for the application was examined on a Windows Vista system by running the Windows Task Manager and viewing how much memory was being consumed by the Sudoku version in question. Roughly speaking, the FX Script version, after startup, had a RAM footprint of about 90MB and remained pretty much the same size. The Java version started out at about 55MB and maintained that size throughout its execution.

    What About Binding?

    Arguably, the most striking observation about the conversion from JavaFX Script to JavaFX 2.0 concerned the need for data synchronization, or lack thereof. In JavaFX Script, the primary means to synchronize data is the bind expression (using the "bind" keyword), and perhaps to a lesser extent its "on replace" cousin. The bind keyword does not exist in Java, so for JavaFX 2.0 a Data Binding API has been introduced as a replacement. To give a feel for the difference between the two versions of the Sudoku program, the table that follows indicates how many binds were required for each source file. For JavaFX Script files, this was ascertained by simply counting the number of occurrences of the bind keyword. As can be seen, binding had been used frequently in the JavaFX Script version (and this does not take into consideration an additional half dozen or so "on replace" triggers). The JavaFX 2.0 program achieves the same functionality as the original JavaFX Script version, yet the equivalent of binding was only needed twice throughout the Java version of the source code.

        JavaFX Script source file   Binds | JavaFX 2.0 Java source file      "Binds"
        ----------------------------------+------------------------------------------
                                          | ArrowKey.java                    0
        Board.fx                    1     | Board.java                       0
        BoardNode.fx                7     | BoardNode.java                   0
        ChooseNumberNode.fx         11    | ChooseNumberNode.java            0
        CloseButtonNode.fx          6     | CloseButton.java                 0
                                          | CustomNodeWithKeyTraversal.java  0
                                          | FunctionPtr.java                 0
                                          | Globals.java                     0
        Grouping.fx                 0     |
        HowToPlayNode.fx            7     | HowToPlayNode.java               0
        IconButtonNode.fx           9     | IconButtonNode.java              0
        Main.fx                     1     | Main.java                        0
        Main_Mobile.fx              1     |
        SliderNode.fx               6     | SliderNode.java                  1
        Space.fx                    0     | Space.java                       0
        SpaceNode.fx                9     | SpaceNode.java                   1
        TraversalHelper.fx          0     |
        Total                       58    | Total                            2

    Conclusions

    As the JavaFX 2.0 technology is so new, and experience with the platform equally so, it is possible and indeed probable that some of the observations noted in the preceding article may not apply across other attempts at migrating applications. That being said, this first experience indicates that the migrated Java code will likely be somewhat larger, though not extensively so, than the original JavaFX Script source. Furthermore, and importantly, it appears that the requirements for data synchronization via binding may be significantly less with the new platform.


  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on Resharper 5.0 and its LINQ refactoring and thought that was very cool.  But that raised a point I had always been curious about in my head -- which is a better choice: manual foreach loops or LINQ?    The answer is not really clear-cut.  There are two sides to any code cost arguments: performance and maintainability.  The first of these is obvious and quantifiable.  Given any two pieces of code that perform the same function, you can run them side-by-side and see which piece of code performs better.   Unfortunately, this is not always a good measure.  Well written assembly language outperforms well written C++ code, but you lose a lot in maintainability which creates a big techncial debt load that is hard to offset as the application ages.  In contrast, higher level constructs make the code more brief and easier to understand, hence reducing technical cost.   Now, obviously in this case we're not talking two separate languages, we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions that are in the System.Linq library.   Well, before we discuss any further, let's look at some sample code and the numbers.  First, let's take a look at the for loop and the LINQ expression.  This is just a simple find comparison:       // find implemented via LINQ     public static bool FindViaLinq(IEnumerable<int> list, int target)     {         return list.Any(item => item == target);     }         // find implemented via standard iteration     public static bool FindViaIteration(IEnumerable<int> list, int target)     {         foreach (var i in list)         {             if (i == target)             {                 return true;             }         }           return false;     }   Okay, looking at this from a maintainability point of view, the Linq expression is definitely more concise (8 lines down to 1) and is very readable in intention.  You don't have to actually analyze the behavior of the loop to determine what it's doing.   So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data.  For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.                       List<T> On 100,000 Iterations     Method      Size     Total (ms)  Per Iteration (ms)  % Slower     Any         10       26          0.00046             30.00%     Iteration   10       20          0.00023             -     Any         100      116         0.00201             18.37%     Iteration   100      98          0.00118             -     Any         1000     1058        0.01853             16.78%     Iteration   1000     906         0.01155             -     Any         10,000   10,383      0.18189             17.41%     Iteration   10,000   8843        0.11362             -     Any         100,000  104,004     1.8297              18.27%     Iteration   100,000  87,941      1.13163             -   The LINQ expression is running about 17% slower for average size collections and worse for smaller collections.  Presumably, this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.   So what about other LINQ expressions?  After all, Any() is one of the more trivial ones.  
I decided to try the TakeWhile() algorithm using a Count() to get the position stopped like the sample Pete was using in his blog that Resharper refactored for him into LINQ:       // Linq form     public static int GetTargetPosition1(IEnumerable<int> list, int target)     {         return list.TakeWhile(item => item != target).Count();     }       // traditionally iterative form     public static int GetTargetPosition2(IEnumerable<int> list, int target)     {         int count = 0;           foreach (var i in list)         {             if(i == target)             {                 break;             }               ++count;         }           return count;     }   Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt.  So I ran these through the same test data:                       List<T> On 100,000 Iterations     Method      Size     Total (ms)  Per Iteration (ms)  % Slower     TakeWhile   10       41          0.00041             128%     Iteration   10       18          0.00018             -     TakeWhile   100      171         0.00171             88%     Iteration   100      91          0.00091             -     TakeWhile   1000     1604        0.01604             94%     Iteration   1000     825         0.00825             -     TakeWhile   10,000   15765       0.15765             92%     Iteration   10,000   8204        0.08204             -     TakeWhile   100,000  156950      1.5695              92%     Iteration   100,000  81635       0.81635             -     Wow!  I expected some overhead due to the state machines iterators produce, but 90% slower?  That seems a little heavy to me.  So then I thought, well, what if TakeWhile() is not the right tool for the job?  The problem is TakeWhile returns each item for processing using yield return, whereas our for-loop really doesn't care about the item beyond using it as a stop condition to evaluate. So what if that back and forth with the iterator state machine is the problem?  Well, we can quickly create an (albeit ugly) lambda that uses the Any() along with a count in a closure (if a LINQ guru knows a better way PLEASE let me know!), after all , this is more consistent with what we're trying to do, we're trying to find the first occurence of an item and halt once we find it, we just happen to be counting on the way.  This mostly matches Any().       // a new method that uses linq but evaluates the count in a closure.     public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)     {         int count = 0;         list.Any(item =>             {                 if(item == target)                 {                     return true;                 }                   ++count;                 return false;             });         return count;     }     Now how does this one compare?                         
        List<T> On 100,000 Iterations
        Method         Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile      10       41          0.00041             128%
        Any w/Closure  10       23          0.00023             28%
        Iteration      10       18          0.00018             -
        TakeWhile      100      171         0.00171             88%
        Any w/Closure  100      116         0.00116             27%
        Iteration      100      91          0.00091             -
        TakeWhile      1000     1604        0.01604             94%
        Any w/Closure  1000     1101        0.01101             33%
        Iteration      1000     825         0.00825             -
        TakeWhile      10,000   15765       0.15765             92%
        Any w/Closure  10,000   10802       0.10802             32%
        Iteration      10,000   8204        0.08204             -
        TakeWhile      100,000  156950      1.5695              92%
        Any w/Closure  100,000  108378      1.08378             33%
        Iteration      100,000  81635       0.81635             -

    Much better! It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() simply iterates forward until it finds the value we're looking for, which is closer to the task we're actually attempting.

    So the lesson is: when you use a LINQ expression, make sure you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm. But this is true of any choice of algorithm or collection in general.

    Even the Any() with the count in the closure is still about 30% slower, but let's consider that angle carefully. For a list of 100,000 items, it was roughly the difference between 1.08 ms and 0.82 ms per iteration on a List<T>. That's really not bad at all in the grand scheme of things. Even running 90% slower with TakeWhile(), for the vast majority of my projects, an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay. And if your typical list is 1000 items or less, we're talking only microseconds worth of difference.

    It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off. So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt), and I can rely on Microsoft's developers to have coded and unit tested those algorithms fully for me, instead of relying on a developer to code the loop logic correctly.

    If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off. And if I ever do need that last millisecond of performance? Well, then I'll optimize JUST THAT problem spot. To me it's worth it for the readability, speed-to-market, and maintainability.
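    (An aside on the "if a LINQ guru knows a better way" request above: one common answer is to wrap the plain loop in a small extension method once, so call sites stay as declarative as the LINQ version while the hot loop stays cheap. This is a sketch of that idea; the name IndexOfOrCount is invented for illustration, and it deliberately mirrors the TakeWhile(...).Count() contract of returning the element count when the target is absent.)

        using System.Collections.Generic;

        public static class EnumerableExtensions
        {
            // Returns the position of the first occurrence of target,
            // or the total element count if target is never found;
            // the same contract as list.TakeWhile(i => i != target).Count().
            public static int IndexOfOrCount<T>(this IEnumerable<T> source, T target)
            {
                var comparer = EqualityComparer<T>.Default;
                int index = 0;
                foreach (var item in source)
                {
                    if (comparer.Equals(item, target))
                    {
                        return index;
                    }
                    ++index;
                }
                return index;
            }
        }

    A call site then reads as cleanly as the LINQ form (var pos = list.IndexOfOrCount(42);) while keeping the plain-loop performance measured above.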


  • .NET to iOS: From WinForms to the iPad

    - by RobertChipperfield
    One of the great things about working at Red Gate is getting to play with new technology - and right now, that means mobile. A few weeks ago, we decided that a little research into the tablet computing arena was due, and purely from a numbers point of view, that suggested the iPad as a good target device. A quick trip to iPhoneDevCon in San Diego later, and Marine and I came back full of ideas, and with some concept of how iOS development was meant to work. Here's how we went from there to the release of Stacks & Heaps, our geeky take on the classic "Snakes & Ladders" game.

    Step 1: Buy a Mac

    I've played with many operating systems in my time: from the original BBC Model B, through DOS, Windows, Linux, and others, but I'd so far managed to avoid buying fruit-flavoured computer hardware! If you want to develop for the iPhone, iPad or iPod Touch, that's the first thing that needs to change. If you've not used OS X before, the first thing you'll realise is that everything is different! In the interests of avoiding a flame war in the comments section, I'll only go so far as to say that a lot of my Windows-flavoured muscle memory no longer worked. If you're in the UK, you'll also realise your keyboard is lacking a # key, and that " and @ are the other way around from normal. The wonderful Ukelele keyboard layout editor restores some sanity here, as long as you don't look at the keyboard when you're typing. I couldn't give up the PC entirely, but a handy application called Synergy comes to the rescue - it lets you share a single keyboard and mouse between multiple machines. There's a few limitations: Alt-Tab always seems to go to the Mac, and Windows 7's UAC dialogs require the local mouse for security reasons, but it gets you a long way at least.

    Step 2: Register as an Apple Developer

    You can register as an Apple Developer free of charge, and that lets you download XCode and the iOS SDK. You also get the iPhone / iPad emulator, which is handy, since you'll need to be a paid member before you can deploy your apps to a real device. You can either enroll as an individual, or as a company. They both cost the same ($99/year), but there's a few differences between them. If you register as a company, you can add multiple developers to your team (all for the same $99 - not $99 per developer), and you get to use your company name in the App Store. However, you'll need to send off significantly more documentation to Apple, and I suspect the process takes rather longer than for an individual, where they just need to verify some credit card details. Here's a tip: if you're registering as a company, do so as early as possible. The approval process can take a while to complete, so get the application in in plenty of time.

    Step 3: Learn to love the square brackets!

    Objective-C is the language of the iPad. C and C++ are also supported, and if you're doing some serious game development, you'll probably spend most of your time in C++ talking OpenGL, but for forms-based apps, you'll be interacting with a lot of the Objective-C SDK. Like shifting from Ctrl-C to Cmd-C, it feels a little odd at first, with the familiar string.Format(...) turning into:

        NSString *myString = [NSString stringWithFormat:@"Hello world, it's %@", [NSDate date]];

    Thankfully XCode's auto-complete is normally passable, if not up to Visual Studio's standards, which, coupled with a huge amount of content on Stack Overflow, means you'll soon get to grips with the API.
    You'll need to get used to some terminology changes, though; here's an incomplete approximation:

    [translation table not preserved in this excerpt]

    Coming from a .NET background, there are some luxuries you no longer have developing Objective-C in XCode:

    - Generics! Remember back in .NET 1.1, when all collections were just objects? Yup, we're back there now.
    - ReSharper. Or, more generally, very much refactoring support. The not-many-keystrokes rename of a class, its file, and all references to it in Visual Studio turns into a much more painful experience in XCode.
    - Garbage collection. This is actually rather less of an issue than you might expect: if you follow the rules, the reference counting provided by Objective-C gets you a long way without too much pain. Circular references are their usual problematic self, though.
    - Decent exception handling. You do have exceptions, but they're nowhere near as widely used. Generally, if something goes wrong, you get nil (see translation table above) back. Which brings me on to...
    - Calling a method on a nil object isn't a failure - it just returns nil itself! There are many arguments for and against this, but personally I fall into the "stuff should fail as quickly and explicitly as possible" camp. (For .NET readers, a short C# analogy appears at the end of this article.)

    More generally, I found there's more chance of code failing at runtime rather than being caught at compile-time: using the @selector(...) syntax to pass a method signature isn't (can't be) checked at compile-time, so the first you know about a typo is a crash when you try to call it. The solution to this is of course lots of great testing, both automated and manual, but I still find comfort in provably correct type safety being enforced in addition to testing.

    Step 4: Submit to the App Store

    Assuming you want to distribute to more than a handful of devices, you're going to need to submit your app to the Apple App Store. There are a few gotchas in terms of getting builds signed with the right certificates, and you'll be bouncing around between XCode and iTunes Connect a fair bit, but eventually you get everything checked off the to-do list, and are ready to upload your first binary! With some amount of anticipation, I pressed the Upload button in XCode, ready to release our creation into the world, but was instead greeted by an error informing me my XML file was malformed. Uh. A little Googling later, and it turned out that a simple rename from "Stacks&Heaps.app" to "StacksAndHeaps.app" worked around an XML escaping bug, and we were good to go.

    The next step is to wait for approval (or otherwise). After a couple of weeks of intensive development, this part is agonising. Did we make it? The Apple jury is still out at the moment, but our fingers are firmly crossed! In the meantime, you can see some screenshots and leave us your email address if you'd like us to get in touch when it does go live at the MobileFoo website.

    Step 5: Profit!

    Actually, that wasn't the idea here: Stacks & Heaps is free; there are no adverts, and we're not going to sell all your data either. So why did we do it? We wanted to get an idea of what it's like to move from coding for a desktop environment to something completely different. We don't know whether in a year's time the iPad will still be the dominant force, or whether Android will have smoothed out some bugs, tweaked the performance, and polished the UI, but I think it's a fairly sure bet that the tablet form factor is here to stay. We want to meet people who are using it, start chatting to them, and find out about some of the pain they're feeling.
What better way to do that than do it ourselves, and get to write a cool game in the process?
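    (A footnote for .NET readers, promised above: Objective-C's nil-messaging, where calling a method on nil quietly returns nil, has a rough analogue in modern C#'s null-conditional operator. This is an editor's analogy rather than anything from the original article, and the Building type below is invented purely for illustration.)

        using System;

        // Hypothetical type, invented for illustration.
        class Building
        {
            public string Name { get; set; }
        }

        static class NilAnalogy
        {
            static void Main()
            {
                Building b = null;

                // Without the operator, C# fails fast: b.Name throws NullReferenceException.
                // With the null-conditional operator, the call short-circuits to null,
                // which is the closest C# gets to Objective-C's nil-messaging:
                string name = b?.Name;              // null, no exception thrown
                int length = b?.Name?.Length ?? 0;  // the whole chain collapses to the default

                Console.WriteLine(name ?? "(null)"); // prints "(null)"
                Console.WriteLine(length);           // prints 0
            }
        }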


  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that can be used to ensure that a query returns only Distinct rows: the Sort operator and the Hash Match operator.

    The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a hashing function on each row as it comes in, and then looks to see if it’s created a hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one’s a bit different. This post is going to look at how aggregate functions work, which ties nicely into this month’s T-SQL Tuesday.

    I’ve frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge: I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation.

    This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand.

    Alternatively, if the pile of cards is not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match.

    Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a hash function to some data and seeing if the set of values has been seen before, but whereas before it needed only the mere existence of a new set of values, now it needs to consider how many of them there are.

    A lot depends here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.
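    (Before moving on to the execution plans: to make the card analogy concrete, here is a hedged C# sketch of the two strategies. This is an editor's illustration of the concept, not SQL Server internals: the stream aggregate is an iterator that releases a count as soon as the group key changes, while the hash aggregate buffers everything in a dictionary before emitting anything.)

        using System.Collections.Generic;

        static class AggregateSketch
        {
            // Non-blocking "stream aggregate": input must already be sorted by suit.
            // Each (suit, count) pair is yielded as soon as the suit changes.
            public static IEnumerable<(string Suit, int Count)> StreamCount(
                IEnumerable<string> sortedSuits)
            {
                string current = null;
                int count = 0;
                foreach (var suit in sortedSuits)
                {
                    if (suit != current)
                    {
                        if (current != null) yield return (current, count); // group changed: release the row
                        current = suit;
                        count = 0;
                    }
                    count++;
                }
                if (current != null) yield return (current, count); // release the final group
            }

            // Blocking "hash aggregate": works on unsorted input,
            // but nothing is emitted until every card has been seen.
            public static IEnumerable<(string Suit, int Count)> HashCount(
                IEnumerable<string> suits)
            {
                var piles = new Dictionary<string, int>();
                foreach (var suit in suits)
                {
                    piles.TryGetValue(suit, out var n);
                    piles[suit] = n + 1;
                }
                foreach (var pile in piles)
                {
                    yield return (pile.Key, pile.Value);
                }
            }
        }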
    In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower.

    This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply.

    But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations.

    I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component.

    And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.)

    A query that will produce a million rows in order was in order. Let me rephrase: I used a query which produced the numbers from 1 to 1,000,000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it.

    If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice.

    Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all.

    This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation that will have to work with whatever data is passed in.

    I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those and publish it on my blog. I’ve done it before with a Script Component, but as Script Components are single-use, I was able to handle the data knowing everything about my data flow already.
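    (For illustration, here is a hedged sketch of that dispatch idea in plain C#, reusing the StreamCount/HashCount sketch from earlier rather than the actual SSIS pipeline API: if the declared sort keys cover the grouping columns as a prefix, take the streaming path; otherwise fall back to the blocking one. The method and parameter names are invented for this sketch.)

        using System.Collections.Generic;
        using System.Linq;

        static class AggregateDispatch
        {
            // Hypothetical helper: choose the streaming implementation only when
            // the sort keys (e.g. from upstream IsSorted metadata) are a prefix
            // that covers every grouping column.
            public static IEnumerable<(string Suit, int Count)> CountBySuit(
                IEnumerable<string> rows,
                IList<string> sortKeyColumns,
                IList<string> groupingColumns)
            {
                bool streamable = groupingColumns
                    .Select((col, i) => i < sortKeyColumns.Count && sortKeyColumns[i] == col)
                    .All(match => match);

                return streamable
                    ? AggregateSketch.StreamCount(rows)  // non-blocking path
                    : AggregateSketch.HashCount(rows);   // blocking fallback
            }
        }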
    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.

